This is the Alagad blog, AKA The Alagad Ally.
Today’s Office Hours: Setting Up a Local ColdFusion Development Environment
I’m back again this week with another installment of Office Hours. Last week’s Office Hours went well and we all learned a bit about Git. This week Phillip Senn has asked to go over how to set up a local development environment for ColdFusion.
The show starts here at 3:30 pm today, Thursday November 8th. I’ll post a link to the Google+ Hangout and a link to the streaming video in this post around 3:15. I hope to see you this afternoon.
Join the hangout or watch the YouTube stream:
Office Hours: Git
I announced on Wednesday that I was going to start holding Office Hours as an open forum where I can help someone work through a problem they’re having. This week I’ll be working with Phillip Senn to get him started with Git. Our session begins at 3:00 PM ET. Up to eight people can join our Google+ Hangout. Everyone else is welcome to watch the live stream below.
Google+ Hangout: https://plus.google.com/hangouts/_/20d73fe568f5f674d1d7370072e2a2358ae3af26?authuser=0&hl=en
And this is where the live video stream will be:
Office Hours -or- I’m Unafraid of Making a Fool of Myself in Public
A couple of weeks ago I was honored to speak at the very excellent NCDevCon. While there I had a very interesting conversation with a guy by the name of Phillip Senn. Phillip expressed that he was looking for a mentor, someone who would push him towards better development practices and help him get unstuck when he encountered a problem he couldn’t get past. For example, he needs a bit of a push to get started with Git. He’d like to see how people set up their development environments, etc.
While I don’t really have the time or inclination to be a traditional mentor, the request got me thinking. I certainly don’t know everything there is to know about programming, but over the years I’ve gotten really good at learning new things quickly. I feel comfortable being thrown (or throwing myself) into new technical situations. I recognize patterns and can Google with the best of ’em.
So, Phillip and I conceived of what he dubbed “Office Hours”. The plan (at least for now) is for me to have a scheduled time to do a screencast where I help someone get started with or learn something new for free. I will be doing this in a Google+ hangout and streaming it live to my YouTube channel. I also plan to make archived sessions available online.
The first Office Hours will be this Friday, November 2nd, from 3 to 5 PM EST. Right before Office Hours is set to begin, I’ll publish the Office Hours Google+ Hangout URL, the live stream video, and any other relevant information. Those who are interested (up to 8 additional people) can join the hangout and help out or follow along. Anyone else will be able to watch the video in real time.
For this first session I will be working with Phillip to help him get started with Git. Truth be told, while I use Git, I’m no Git guru. My intention, however, is to help him get it installed, create a new repo, commit code, branch, merge, etc. Basically, to give him a tour of as much of it as I can. I imagine this will be rather organic, with a few false starts and dead ends before we really get anywhere. To me, part of what makes Office Hours interesting is that I’ll be learning while I do it too.
I want to make Office Hours a weekly event. I also want you to feel free to email me with your requests and suggestions. For example, perhaps you want help getting started with Node.js (or anything else). Just send me an email and, if I choose your topic, we’ll schedule a time for your Office Hours. I would request that you not limit your questions to what you expect me to know. If the topic is new to me I’ll work ahead and be ready to hit the ground running in our Office Hours session.
So, do you have any thoughts or suggestions? I’d love your feedback on the concept.
Help! My ColdFusion linux system just got hacked!
The other day, I had a client contact me with an issue: His Linux ColdFusion server had been hacked, and his web site defaced. This was a CentOS system, and they were running ColdFusion and PHP on Apache for a number of virtual hosts.
After SSH’ing into the system, the first thing I wanted to do was make sure the hacker was not still connected to the system. At a command line, type ‘w’ and press Enter. You will see something like this:
[root@host user]# w
 07:39:00 up 15:57,  1 user,  load average: 0.04, 0.07, 0.09
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
user     pts/0    00-000-000-000.l  07:38    0.00s  0.02s  0.01s sshd: user [priv]
The output above shows that only one user is connected (me). If someone else were SSH’d into your system, you would see additional lines here, along with their source IP addresses.
At first my approach was to review system logs. This server was hosting many sites, and some of the log files were in the 2 GB range, so gleaning anything useful from them at the console was less than ideal. (I know there are great tools like grep and awk; I am just not a wizard with them.) I zipped up the Apache log files and downloaded them locally, then wrote a quick ColdFusion script to parse the important bits into SQL.
<cfloop file="/Users/justice/Downloads/access_log" index="line">
	<cfset templine = replace(line, " """, ",", "ALL") />
	<cfset templine = replace(templine, " -", ", ", "ALL") />
	<cfset templine = replace(templine, """ ", ",", "ALL") />
	<cfset templine = replace(templine, " [", ",[", "ALL") />
	<cfset templine = replace(templine, "] ", "],", "ALL") />
	<cfset k = 1 />
	<cfset record = structNew() />
	<cfloop list="#templine#" delimiters="," index="i">
		<cfswitch expression="#k#">
			<cfcase value="1">
				<cfset record.ip = replace(replace(i, '-', '', 'ALL'), ' ', '', 'ALL') />
			</cfcase>
			<cfcase value="2">
				<cfset record.date = replace(replace(i, '[', '', 'ALL'), ']', '', 'ALL') />
			</cfcase>
			<cfcase value="3">
				<cfset record.url = right(i, len(i)-3) />
			</cfcase>
			<cfcase value="5">
				<cfset record.refer = i />
			</cfcase>
		</cfswitch>
		<cfset k++ />
	</cfloop>
	<!---// Insert record into the database //--->
	<cfquery name="insert" datasource="localTest">
		INSERT INTO apacheLogs( ip, reqdate, url, refer )
		VALUES (
			<cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#record.ip#" />,
			<cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#record.date#" />,
			<cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#record.url#" />,
			<cfqueryparam cfsqltype="CF_SQL_VARCHAR" value="#record.refer#" />
		)
	</cfquery>
</cfloop>
This database was invaluable for analyzing recent requests, but it was still *massive*, and it was very tough to isolate any one thing that looked suspicious. Back on the Linux server, I used a command to look at recently changed files within the web root(s):
find . -mtime -1 -print
This command showed me any files in a subfolder of my current location that had been modified in the last day. I scanned down the list to find anything suspicious. One thing that stood out was a set of files owned by the apache user and group, one called ss.php. I switched back to my MS SQL query editor and searched through the requests for any going to ss.php. One suspicious IP address made multiple requests to this file, and a quick lookup on http://dnstools.com/ showed that it was coming from Saudi Arabia (most likely a compromised host there, bouncing through from another location).
After finding this, I queried my imported Apache database for all requests from that IP. This showed me several requests to a PHP admin tool that one of the customers had loaded into their web site, with a referrer of Google. Aha! Now I knew the entry point to this server: a PHP admin tool with no authentication on it, found by a random user doing a Google search for just such a windfall. This IP query also showed all URL parameters, so I was able to review each thing this hacker did on the system and remove all affected files one by one.
Of course, this has led us to further securing the server, restricting PHP access, locking down SSH access, and more. I hope this general strategy helps someone figure out what is going on when they get hacked!
Doug @ NCDevCon – Javascript: things you never knew you didn't know
I was pleased to find out that I’ve been invited to speak at NCDevCon again this year. NCDevCon is a ColdFusion and web development focused conference held annually at the Centennial Campus of NC State University in Raleigh, North Carolina. The event is put on by the fine folks at the Triangle ColdFusion User Group, aka TACFUG.
Here are the details of my session:
Javascript: things you never knew you didn’t know.
So, you think you know JavaScript? I think not! There are a ton of small features hidden under the covers that many developers either don’t know about or don’t know how to use. This session will go over an ad-hoc list of JavaScript-related goodies that I’ve picked up over the last year or so, including typed arrays, accessors, array folding, object inheritance, various tips and tricks, and more. Many of the topics relate to newer revisions of JavaScript and may not work in older browsers.
I hope to see you there!
Also, thanks again to TACFUG for inviting me to speak again! You guys are awesome.
Unborking VPN on OS X
For those of you using the inbuilt VPN features on OS X, you may have noticed that from time to time it will stop wanting to connect. For me, pretty much any time I disconnect from VPN, the next time I try to connect I will get an unfriendly message that looks like this:
This happens to me way too often. It can happen when trying to connect, it can happen after disconnecting and then reconnecting, it can happen without any apparent provocation.
In the past it seemed like the only option was to completely restart OS X. As you can imagine, this is not an acceptable solution to someone who keeps a lot of apps open, needs to use VPN frequently, and who doesn’t wish to waste time rebooting for no good reason.
Thankfully, Joe Bernard was able to track down the solution and was kind enough to share it with me.
Apparently there’s a process in OS X called “racoon”. Racoon is in charge of VPN connections. Here’s what Apple’s man pages have to say about it:
racoon is used to setup and maintain an IPSec tunnel or transport channel, between two devices, over which network traffic is conveyed securely. This security is made possible by cryptographic keys and operations on both devices. racoon relies on a standardized network protocol (IKE) to automatically negotiate and manage the cryptographic keys (e.g. security associations) that are necessary for the IPSec tunnel or transport channel to function. racoon speaks the IKE (ISAKMP/Oakley) key management protocol, to establish security associations with other hosts. The SPD (Security Policy Database) in the kernel usually triggers racoon. racoon usually sends all informational messages, warnings and error messages to syslogd(8) with the facility LOG_DAEMON and the priority LOG_INFO. Debugging messages are sent with the priority LOG_DEBUG. You should configure syslog.conf(5) appropriately to see these messages.
In a nutshell, Racoon gets borked. Sometimes this means that the racoon process needs to be restarted, but in my experience 99% of the time it means that it’s not actually running.
So, you can restart racoon from the terminal like so:
sudo /usr/sbin/racoon
I’ve also found that sometimes you need to restart the various networking interfaces you’re using. Because of this, I ended up writing a shell script I call fixnetwork.sh:
#!/bin/sh
# Bounce the network interfaces, then restart racoon.
sudo ifconfig en0 down
sudo ifconfig en1 down
sudo ifconfig en0 up
sudo ifconfig en1 up
sudo /usr/sbin/racoon
I put this in my home directory, set it to be executable, and can run it like so:
~/fixnetwork.sh
Works like a charm for me. No more reboots to fix borked VPN connections! Productivity, here I come!
Recovering a frozen Amazon EC2 instance
So, the Alagad web server is one of the first EC2 servers I configured: Windows 2003, small instance, with instance-level storage for the boot device. For new servers, I would *always* advise using an EBS volume for your boot device; it makes backing up and restoring far easier. Rebuilding this server is on my to-do list, but I digress…
Friday, this server decided to stop responding. In the EC2 control panel, it showed as ‘Up’ but failing health checks. The first thing I always do is attempt to reboot it (have you tried turning it off and on again?). It appeared to be rebooting, but it still failed health checks and we were unable to RDP to the box. Sometimes a reboot just takes time, but in this case, several hours later, we were still stuck. I submitted a ticket to Amazon support, where I was told:
Hello,
Your instance is currently failing the System Reachability status check. Given that your root device type is the instance store, you will need to terminate the instance and launch a replacement.
If it is possible for your situation, I recommend launching the replacement using an EBS volume as your root device type. When an EBS-backed instance fail system status checks, you can often resolve failed System status checks by stopping and re-starting the instance rather than replacing it.
You can find more information here:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.html
This was, obviously, not what I wanted to hear! I dug into the storage attached to the instance, and sure enough, one attached drive showed that it was failing a reliability check. I forced a disconnect of that storage device from my instance, then clicked in the interface to make the device available anyway. With this drive disconnected, I was able to reboot my instance (cheers!). Once Windows was back up and running and I had RDP access, I made a snapshot of my storage device, attached the snapshot to Windows, and re-activated the volume in the Windows storage manager.
Does this mean the server is going to stay as it is? Not at all – this still needs to be rebuilt, using an EBS root device, but at least my hand is not forced to do the rebuild on Friday at 4:00 PM!
Cheers
CoolBeans – an IOC container for Node.js
I just finished writing my first publicly available module for Node.js. If you’re not familiar with Node.js, well, I’m just sorry to hear that. Go learn.
The module I wrote is called CoolBeans. (Thanks for the name, Mr. Chris Peterson). CoolBeans is an Inversion of Control (IOC) / Dependency Injection (DI) library for Node.js. CoolBeans is loosely based on ColdSpring for ColdFusion and Spring IOC for Java. It’s a single js file and currently appears to be quick and easy.
To install:
npm install CoolBeans
To use CoolBeans you simply create an instance of the CoolBeans and load the configuration file like this:
var cb = require("CoolBeans");
cb = new cb("./config/dev.json");
The above is the only require call you should need in your entire application. Once you’ve required CoolBeans, you create a new instance and pass in the path to its configuration, as shown above.
Once you have the fully loaded CoolBeans container, you can use it to quickly create fully configured singleton objects based on its configuration. The config file for CoolBeans is a JSON file, so the entire thing is wrapped in {}.
Each element in the root of the configuration file is a bean (bean = Java for object) that CoolBeans can create. Here’s an example:
{ "fs": {"module": "fs"} }
This is essentially the same as:
var fs = require("fs");
However, we now only need to define this one time for an application, rather than in each file that requires it.
You can also specify paths to modules that are not node_modules. For example:
{ "Recipient": {"module": "./entities/recipient"} }
At its most basic, the above means that CoolBeans will call require for the module and cache the result in a variable named Recipient.
As a relative newb to Node.js, I think I’ve handled this correctly. CoolBeans is a node module, which means that NPM will install it into ./node_modules/CoolBeans. The actual CoolBeans script is in the lib directory. That means that, from the perspective of CoolBeans, your components are three directories above it. For this reason, CoolBeans looks three directories above itself for the module specified. So, the Recipient module above actually turns into ../../.././entities/recipient. This has the effect of making the paths to modules specified in the configuration file relative to the root of your module or application. So, if you make a module that depends on CoolBeans and later publish it via NPM, I think it should work correctly when used in other projects.
You can get any of the configured beans by calling cb.get("beanName"), where beanName is the name of the bean you want to get. For example:
cb.get("Recipient");
The above will lazily create the Recipient bean, cache it as a singleton, and return it.
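The lazy-singleton behavior just described can be sketched in a few lines. To be clear, this is a hypothetical illustration of the pattern, not the CoolBeans implementation:

```javascript
// Hypothetical sketch (NOT the CoolBeans source) of lazy singleton caching:
// a bean is constructed on the first get() and the same cached instance is
// returned on every later call.
function Container(factories) {
  this.factories = factories; // bean name -> factory function
  this.cache = {};            // bean name -> constructed singleton
}

Container.prototype.get = function (name) {
  if (!this.cache.hasOwnProperty(name)) {
    this.cache[name] = this.factories[name](); // lazy construction
  }
  return this.cache[name];
};

// Usage: two gets return the same instance.
var c = new Container({ Recipient: function () { return { kind: "Recipient" }; } });
console.log(c.get("Recipient") === c.get("Recipient")); // true
```

The important property is the identity check at the end: every consumer of the bean shares one instance.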
You can get a lot more complex with configuration too. For example, you can specify whether CoolBeans should call a constructor and what arguments to pass to it:
"codeGenerator": { "module": "./util/codeGenerator", "constructorArgs": [ "foo", 123 ] }
The above says that when we get the codeGenerator bean, CoolBeans should load the specified module, call new on it, and pass the values in the constructorArgs array to the constructor. In other words:
new codeGenerator("foo", 123);
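Applying an array of arguments to a constructor takes a small trick in pre-ES5 JavaScript (there’s no Reflect.construct). The helper below is a hypothetical sketch of how a container might do it, not CoolBeans’ actual code:

```javascript
// Hypothetical sketch (NOT the CoolBeans source): invoke a constructor
// with an array of arguments while preserving the prototype chain.
function construct(Ctor, args) {
  function Surrogate() {
    return Ctor.apply(this, args); // run the real constructor on `this`
  }
  Surrogate.prototype = Ctor.prototype;
  return new Surrogate();
}

// Usage: equivalent to `new codeGenerator("foo", 123)`.
function codeGenerator(prefix, seed) {
  this.prefix = prefix;
  this.seed = seed;
}
var gen = construct(codeGenerator, ["foo", 123]);
console.log(gen.prefix, gen.seed); // foo 123
```

The surrogate function keeps `instanceof` working, since the returned object’s prototype is still the original constructor’s prototype.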
You can also specify more complex values to pass into constructor arguments:
"codeGenerator": { "module": "./util/codeGenerator", "constructorArgs": [ "foo", 123, {"value": [1, 2, 3]}, {"value": { "foo": "bar", "bar": "foo" } } ] }
Note that while you can specify a string without explicitly indicating that it’s a “value”, for arrays and anonymous objects you need to provide an object with a property named “value” whose value is the value you’re trying to pass in. The above could be written more explicitly as:
"codeGenerator": { "module": "./util/codeGenerator", "constructorArgs": [ {"value": "foo"}, {"value": 123}, {"value": [1, 2, 3]}, {"value": { "foo": "bar", "bar": "foo" } } ] }
Strings, numbers, arrays, and anonymous objects are not the only things you can pass into constructors. You can also specify other beans to pass in. For example, say you have a database configuration object that you want to pass into any object used to access data. You could do the following:
"mysql": {"module": "mysql"}, "dbConfig": { "properties": { "host": "server.hostname.com", "port": 3306, "user": "mysqlUser", "password": "password123", "database": "foobar" } }, "recipientDao": { "module": "./db/recipientDao", "constructorArgs": [ {"bean": "dbConfig"}, {"bean": "mysql"} ] }
The mysql bean is simply the same as saying require("mysql"). The dbConfig is an anonymous object with properties specified (more on this in a bit). When the recipientDao (dao = data access object) is created, CoolBeans will see the "bean" property and will create and pass into the constructor the fully-constructed dbConfig object and the mysql object. Here’s what that recipientDao might look like:
module.exports = function(dbConfig, mysql){
	this.listRecipients = function(userId, callback){
		var client = mysql.createClient(dbConfig);
		client.query(
			"SELECT id, name, addressLine1, IfNull(addressLine2, '') as addressLine2, city, state, zip, taxDeductible, created, updated, 0 as netDonations " +
			"FROM recipient " +
			"WHERE userId = ? AND deleted = 0 " +
			"ORDER BY name",
			[userId],
			function(err, results, fields){
				client.end();
				callback(results);
			});
	}
}
Note that there are no require statements. The object just gets its dependencies when it’s instantiated and can immediately use them. These dependencies are also automatically singletons.
Also note that if you want to use a transient object you would still create an instance of it the way you always have.
I also mentioned above that CoolBeans can be used to create and populate anonymous objects. For example:
"dbConfig": { "properties": { "host": "server.hostname.com", "port": 3306, "user": "mysqlUser", "password": "password123", "database": "foobar" } }
This is a somewhat long-winded way of saying
dbConfig = {
  "host": "server.hostname.com",
  "port": 3306,
  "user": "mysqlUser",
  "password": "password123",
  "database": "foobar"
};
However, once this object is configured in CoolBeans you can easily pass it into other objects when they are created.
You can also specify properties for non-anonymous objects, and you can mix and match constructorArgs and properties.
For example:
"creditCardDao": { "module": "./db/creditCardDao", "constructorArgs": [ {"bean": "dbConfig"}, {"bean": "authorize"}, {"bean": "mysql"}, {"bean": "CreditCard"} ], "properties": { "service": {"bean": "service"} } }
When CoolBeans creates the creditCardDao, it will first load all the beans specified in the constructorArgs. It will then create the creditCardDao and pass the four already-created beans to the constructor. Once the object is constructed, it will set the service property on the object to the specified service bean. Note that CoolBeans will look for a setter and use it if it can find one. For example, for the service property above, CoolBeans will first look for a function named setservice (note that this is case sensitive). If it can find it, it will pass the service bean to that function. If not, it will simply set a public property on the object.
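The setter-vs-property decision just described boils down to a few lines. Here’s a hypothetical sketch of that behavior (not the CoolBeans source):

```javascript
// Hypothetical sketch (NOT the CoolBeans source) of setter-vs-property
// injection. For a property named "service" the container first looks for
// a function named "setservice" (case sensitive, per the text above); if
// found it calls it, otherwise it assigns a plain public property.
function injectProperty(bean, name, value) {
  var setter = bean["set" + name];
  if (typeof setter === "function") {
    setter.call(bean, value);
  } else {
    bean[name] = value;
  }
}

// Usage: the first bean has a setter, the second does not.
var withSetter = { setservice: function (s) { this.wrapped = s; } };
var withoutSetter = {};
injectProperty(withSetter, "service", "svc");
injectProperty(withoutSetter, "service", "svc");
console.log(withSetter.wrapped, withoutSetter.service); // svc svc
```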
There are a few other interesting capabilities of CoolBeans:
Beans don’t have to be lazily loaded. You can set a bean to load when the container loads. For example:
"dateFormat": { "module": "./util/dateFormat", "lazy": false }
Also, if you have a factory that is used to construct other objects, you can specify this using the factoryBean and factoryMethod properties. For example:
"knox": {"module": "knox"}, "s3client": { "factoryBean": "knox", "factoryMethod": "createClient", "constructorArgs": [ {"value": { "key": "myKey", "secret": "mySecret", "bucket": "myBucket" } } ] }
The above s3client bean is configured so CoolBeans uses Knox to create it. The constructor args are passed into the factoryMethod as if it were a constructor. The above essentially boils down to:
s3client = knox.createClient({
  "key": "myKey",
  "secret": "mySecret",
  "bucket": "myBucket"
});
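The factory dispatch itself can be sketched generically. This is a hypothetical illustration of the factoryBean/factoryMethod idea, not the CoolBeans source, and the stand-in container and argument list (unwrapped here for brevity) are made up for the example:

```javascript
// Hypothetical sketch (NOT the CoolBeans source): instead of calling `new`
// on a module, look up another bean and invoke a named method on it,
// passing the constructor args as the argument list.
function createViaFactory(container, def) {
  var factory = container.get(def.factoryBean);
  return factory[def.factoryMethod].apply(factory, def.constructorArgs);
}

// Usage with a stand-in "knox"-style factory bean.
var fakeContainer = {
  beans: {
    knox: { createClient: function (opts) { return { bucket: opts.bucket }; } }
  },
  get: function (name) { return this.beans[name]; }
};
var client = createViaFactory(fakeContainer, {
  factoryBean: "knox",
  factoryMethod: "createClient",
  constructorArgs: [{ key: "myKey", secret: "mySecret", bucket: "myBucket" }]
});
console.log(client.bucket); // myBucket
```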
The really nice thing about CoolBeans is that it lets the objects in your system stay focused on what they do best. It shouldn’t be your objects’ responsibility to know how to get what they need to work. They should simply receive what they need when they’re created. CoolBeans also helps avoid situations in complex apps where you have dozens of lines of code creating dependencies just to construct one object that happens to have a lot of them. Lastly, CoolBeans allows you to easily change how your application is configured in different environments.
If you’re a Node.js developer I’d love to hear your thoughts on CoolBeans! Heck, I’d love to hear your thoughts even if you’re not.
Starting a new project using node.js
So this week I am starting a ‘super secret’ application, and for various reasons we have decided to develop this with node.js. In no particular order, here are some of my impressions and thoughts as I get my feet wet with node.js and some of its various frameworks.
First of all, if you have not used node before, know this: node is FAST! I mean, blow your socks off fast. I’m used to a J2EE app running in a JRun / Tomcat style context, and waiting for the various application frameworks and entities to get loaded up usually takes some time. Well, when you type ‘node app.js’, the app is running, instantly. Making requests to the page from your browser, they load *instantly*. Stop / restart the server, seems only limited by how fast you can type. I was curious and made some simple page returns from node, and ran a load tester against it. With no delay in my jMeter test, on my local development system, I was retrieving the root page of this node app at approx. 180 page requests / second, regardless of how many simultaneous users I sent in.
Now, not having worked with node before, I spent some time gathering resources online. One of my favorites so far is http://howtonode.org/. We have decided to use Express, an application development framework for node. You can get an Express intro at http://www.screenr.com/elL, and you can install it using npm (the node package manager) with ‘npm install express’. Get more information about Express from https://github.com/visionmedia/express/blob/master/Readme.md.
My favorite part of node so far (besides the insane speed and low server resource utilization) is that I am leveraging existing knowledge. I have been working with JavaScript for many years, so in a lot of ways I feel like I’m just learning a new framework rather than an entirely new language. I like the level of control that I have over the request, and I am enjoying using a whole new stack for app development (git, node, TextMate as IDE). Watch this second screencast about route-specific middleware at http://www.screenr.com/mAL. The capabilities of this framework are very exciting, and I look forward to learning more as this project develops!
Soliciting Stack Suggestions
I am part owner of another web company, which shall remain unnamed. A few years ago I (and a couple others) wrote their current web application. Initially the application was sufficient, but over time we’ve run into some limitations of the application’s architecture.
For example, this was written before I realized that writing fat controllers was a bad idea. In fact, this is such an early project that it ran on a pre-1.0 version of Model-Glue. There was no concept of ColdSpring at all. It used a stand-alone tool to generate the data access layer (which eventually inspired me to write Reactor, which is now also defunct).
Furthermore, the business requirements have changed over time. The site was initially built to satisfy one specific use case: selling a given product in one specific way.
Over time the business team has requested new features that are difficult or impractical to implement in the existing architecture. For example, they want to be able to edit and add content on the site without involving me or any other technical staff. They also want to start selling other products in slightly different ways from their original plans.
Long story short, it’s time for a rewrite. I now need to choose a stack of frameworks, languages, etc., that are appropriate for this project. I’m afraid that I’m overcomplicating things and was hoping that the general Internet might have some suggestions.
So here’s what I can say I think I need:
- I need a Content Management system that allows the business and marketing team to add pages to the site, edit content, and generally manage the Information Architecture of the site. They need this because I am not as responsive to their requests as they would like. I do have Alagad to run on a day-to-day basis.
- I need to write custom code that integrates with the CMS to support the unique business requirements of this application. This would include management of the product being sold, reporting, integration with other third-party systems, and more. There really are not any off-the-shelf tools that I could use to replace this custom work. I really want to write this code to leverage IOC and OO. One of the biggest challenges we’ve had with the existing site is that it is mostly pseudo-OO, which makes changes and enhancements more difficult to implement. Hence, the rewrite we need to do!
For a content-heavy site I’d typically use an off-the-shelf CMS like Mura or Farcry. For a custom-code-heavy site I’d typically use Model-Glue and ColdSpring to help me structure clean maintainable code.
The problem comes in when I try to do both: Use a CMS and write clean maintainable code.
My experiences (though dated) with Farcry pretty much suggest that you have to write code Farcry’s way if you want to do anything custom. I honestly just don’t like Farcry’s approach.
I’ve also worked with Mura quite a bit over the last year on a different project. For that, we used Mura as the CMS and wrote a Mura plugin to run Model-Glue events based on how a content element is configured in Mura. It’s my opinion that this was a reasonably good stack.
I’ve even considered writing my own CMS that integrates nicely with Model-Glue, but that’s really a non-starter.
For this rewrite I have been planning to use the following stack:
- Mura for content Management.
- ColdSpring, Model-Glue and Hibernate for custom code that will be run within Mura via a Mura plugin.
- jQuery
- This would be run on Railo, which would run in Tomcat.
- Eventually this would be deployed as a WAR to Amazon Elastic Beanstalk or another Java Platform-As-A-Service provider.
And while that seems like a reasonable stack to me, I have a few concerns:
- Overall performance: The Mura + Model-Glue project I mentioned earlier has been suffering general performance woes. Many of these could be attributed to the sheer volume of custom code and the fact that it is not running the latest updates to Mura and Model-Glue. These have mostly been ironed out, but it’s still an area of concern to me.
- I might be crazy: To me, the stack above makes sense. However, every other developer who has worked with me on this stack doesn’t like it. At all. Some developers dislike Mura. Some developers dislike the Model-Glue integration. Some developers think the stack is too deep. Some dislike using unfamiliar platforms like Tomcat and Amazon EBS. Lastly, very few developers have experience with the complete stack (even when you ignore Tomcat and Amazon).
I know I could build this system using the stack I’ve outlined above and I’m reasonably sure I can get it to perform well. However, I feel like I’m not seeing something that everyone else sees. I’ve even had three contract developers quit in the early phases of this project.
I’m not sure if my stack should change. Thinking pragmatically, can you think of an alternative approach to this problem? One that lets the business and marketing teams have the flexibility they need and also allows developers to write clean, well structured code, using the latest best practices?
I’m open to all suggestions. I’ve been considering other languages and platforms. However, I also need to keep in mind my learning curve on any other tools or languages that might be used as well as unique business requirements that may make some choices impractical.
I’d greatly appreciate your suggestions. What are your thoughts?