Sunday, March 19, 2017

Quora questions I've seen enough of


I actually do like Quora (you may have seen my SadQuora tweets, a side effect of the time I spend there). But when somebody asked, "What are the most annoying types of questions on Quora?" I couldn't resist. Maybe it's just my feed, but I see things like these a lot:


  • I'm 23 years old, am I too old to learn programming?

  • If a self-driving car had to either hit and kill a cow in the left lane, or hit and kill a horse in the right lane, which would it choose?

  • I made this, do you like it?

  • Who would win in a fight between seven adult tigers and a Humvee full of Navy SEALs?

  • What's the best programming language to use if I want to make a website to compete with Facebook?

  • What phone is best for graphic design?

  • What is the process of building a jet airplane from scratch? Please be specific.

  • Why do all my questions get marked as needing improvement?

  • My computer is acting funny, how do I fix it?

  • Is computer programming going to be obsolete in 5 years because all the computers will program themselves?

  • My girlfriend said she didn't want to see me anymore and moved to another country and married another man and changed her name. Would it be romantic to track her down?

  • What is the cutest picture of your cat and what is a story about it?

Saturday, March 4, 2017

Exploring the new DevOps - Azure Command Line Interface 2.0 (CLI)


I'm a huge fan of the command line, and sometimes I feel like Windows people are missing out on the power of text mode. Fortunately, today Windows 10 has bash (via Ubuntu on Windows 10), PowerShell, and "classic" CMD. I use all three, myself.

Five years ago I started managing my Azure cloud web apps using the Azure CLI. I've been a big fan of it ever since. It was written in node.js, it worked the same everywhere, and it got the job done.

Fast forward to today and the Azure team just announced a complete Azure CLI re-write, and now 2.0 is out, today. Initially I was concerned that it had been re-written and I didn't understand the philosophy behind it. But I understand it now. While it works on Windows (my daily driver) it's architecturally aligned with Mac and (mostly, IMHO) Linux users. It also supports new thinking around a modern command line, with support for things like JMESPath, a query language for JSON. It works well and cleanly with the usual suspects, of course, like grep, jq, cut, etc. It's easily installed with pip, or you can just get Python 3.5.x and then "pip install --user azure-cli."

Linux folks (feel free to examine the script) can just do this curl, but it's also in apt-get, of course.

curl -L https://aka.ms/InstallAzureCli | bash

NOTE: Since I already have the older Azure CLI 1.0 on my machine, it's useful to note that these two CLIs can live on the same machine. The new one is "az" and the older is "azure," so no problems there.

Or, for those of you who run individual Docker containers for your tools (or if you just want to explore), you can

docker run -v ${HOME}:/root -it azuresdk/azure-cli-python

Then I just "az login" and I'm off! Here I'll query my subscriptions:

C:\Users\scott\Desktop> az account list --output table

Name                    CloudName   Sub  State    IsDefault
----------------------  ----------  ---  -------  ---------
Three-Month Free Trial  AzureCloud  0f3  Enabled
Pay-As-You-Go           AzureCloud  34c  Enabled
Windows Azure MSDN      AzureCloud  ffb  Enabled  True

At this point, it's already feeling familiar. It's "az noun verb" and there's an optional --output parameter. If I don't include --output, by default I'll get JSON...which I can then query with JMESPath if I'd like. (Those of us who are older may be having a little XML/XPath/XQuery déjà vu.)
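For a quick illustration of what a JMESPath query does to that JSON, here's a sketch in Python. The subscription data below is invented (mirroring the table above), and the filtering is written out in plain Python so you can see what the expression `[?isDefault].name` means:

```python
import json

# Sample data shaped like `az account list --output json`; the values
# here are invented for illustration.
accounts_json = """[
  {"name": "Three-Month Free Trial", "cloudName": "AzureCloud", "isDefault": false},
  {"name": "Pay-As-You-Go",          "cloudName": "AzureCloud", "isDefault": false},
  {"name": "Windows Azure MSDN",     "cloudName": "AzureCloud", "isDefault": true}
]"""

# The JMESPath expression "[?isDefault].name" filters the array down to
# the default subscription and projects out its name. The same query,
# spelled out in plain Python:
accounts = json.loads(accounts_json)
default_names = [a["name"] for a in accounts if a["isDefault"]]
print(default_names)  # ['Windows Azure MSDN']
```

With the real CLI you just pass the expression directly: az account list --query "[?isDefault].name" --output tsv.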


I can use JSON, TSV, tables, and even "colorized json" or JSONC.

C:\Users\scott\Desktop> az appservice plan list --output table

AppServicePlanName  GeoRegion         Kind  Location          Status
------------------  ----------------  ----  ----------------  ------
Default1            North Central US  app   North Central US  Ready
Default1            Southeast Asia    app   Southeast Asia    Ready
Default1            West Europe       app   West Europe       Ready
DefaultServerFarm   West US           app   West US           Ready
myEchoHostingPlan   North Central US  app   North Central US  Ready

I can make and manage basically anything. Here I'll make a new App Service Plan and put two web apps in it, all managed in a group:

az group create -n MyResourceGroup
# Create an Azure App Service plan that we can use to host multiple web apps

az appservice plan create -n MyAppServicePlan -g MyResourceGroup

# Create two web apps within the app service plan (note: the name param must be a unique DNS entry)

az appservice web create -n MyWebApp43432 -g MyResourceGroup --plan MyAppServicePlan

az appservice web create -n MyWebApp43433 -g MyResourceGroup --plan MyAppServicePlan

You might be thinking this looks like PowerShell. Why not use PowerShell? Remember, this isn't primarily for Windows. There's a ton of DevOps happening in Python on Linux/Mac, and this fits very nicely into that. For those of us (myself included) who are PowerShell fans, PowerShell has large and comprehensive Azure support. Of course, while the bash folks may want to use JMESPath to simulate passing objects around, PowerShell can keep on keeping on. There's a command line for everyone.

It's easy to get started with the CLI at http://aka.ms/CLI and learn about the command line with docs and samples. Check out topics like installing and updating the CLI, working with Virtual Machines, creating a complete Linux environment including VMs, Scale Sets, Storage, and networking, and deploying Azure Web Apps – and let them know what you think at azfeedback@microsoft.com. Also, as always, the Azure CLI 2.0 is open source and on GitHub.


Sponsor: Check out JetBrains Rider: a new cross-platform .NET IDE. Edit, refactor, test, build and debug ASP.NET, .NET Framework, .NET Core, or Unity applications. Learn more and get access to early builds!




© 2016 Scott Hanselman. All rights reserved.

     

Monday, February 27, 2017

The Riak key-value database: I like it


(Note: This is a writeup I did a few years ago when evaluating Riak KV as a possible data store for a high-traffic CMS. At the time, the product was called simply "Riak". Apologies for anything else that has become out of date that I missed. Also please pardon the stiff tone! My audience included execs who we wanted to convince to finance our mad scientist data architecture ideas.)


Riak is a horizontally scalable, fault-tolerant, distributed, key/value store. It is written in Erlang; the Erlang runtime is its only dependency. It is open source but supported by a commercial company, Basho.


Its design is based on an Amazon creation called Dynamo, which is described in a paper Amazon published in 2007. The engineers at Basho used this paper to guide the design of Riak.


The scalability and fault-tolerance derive from the fact that all Riak nodes are full peers -- there are no "primary" or "replica" nodes. If a node goes down, its data is already on other nodes, and the distributed hashing system will take care of populating any fresh node added to the cluster (whether it is replacing a dead one or being added to improve capacity).


In terms of Brewer's "CAP theorem," Riak sacrifices immediate consistency in favor of the two other factors: availability, and robustness in the face of network partition (i.e. servers becoming unavailable). Riak promises "eventual consistency" across all nodes for data writes. Its "vector clocks" feature stores metadata that tracks modifications to values, to help deal with transient situations where different nodes have different values for a particular key.
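To make the vector-clock idea concrete, here is a minimal sketch in Python. This is not Riak's implementation (Riak maintains its clocks opaquely, server-side); it just shows how per-node counters let you tell a plain successor apart from two conflicting concurrent writes:

```python
# Minimal vector clocks: a clock is a dict mapping node name -> counter.

def vc_increment(clock, node):
    """Return a copy of `clock` with `node`'s counter bumped."""
    c = dict(clock)
    c[node] = c.get(node, 0) + 1
    return c

def vc_descends(a, b):
    """True if clock `a` has seen everything clock `b` has."""
    return all(a.get(node, 0) >= count for node, count in b.items())

def vc_concurrent(a, b):
    """Neither clock descends from the other: a conflict (siblings)."""
    return not vc_descends(a, b) and not vc_descends(b, a)

# Two clients update the same key starting from the same ancestor value:
ancestor = vc_increment({}, "node_a")       # {'node_a': 1}
write1 = vc_increment(ancestor, "node_a")   # {'node_a': 2}
write2 = vc_increment(ancestor, "node_b")   # {'node_a': 1, 'node_b': 1}

print(vc_descends(write1, ancestor))   # True: an ordinary successor
print(vc_concurrent(write1, write2))   # True: conflicting siblings
```

When the store sees two concurrent clocks for one key, it knows it must keep both values (or ask the application to resolve them) rather than silently discarding one.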


Riak's "Active Anti-Entropy" feature repairs corrupted data in the background (originally this was only done during reads, or via a manual repair command).


Queries that need to do more than simple key/value mapping can use Riak's MapReduce implementation. Query functions can be written in Erlang, or Javascript (running on SpiderMonkey). The "map" step execution is distributed, running on nodes holding the needed data -- maximizing parallelism and minimizing data transfer overhead. The "reduce" step is executed on a single node, the one where the job was invoked.
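The map/reduce split described above can be sketched locally in Python. This is a toy simulation, not Riak's job API: the bucket contents and their spread across nodes are invented, but the shape (map runs per-node where the data lives, reduce runs once on the coordinating node) matches the description:

```python
import json

# Values as they might be spread across three nodes (invented data).
node_data = {
    "node1": ['{"words": 4}', '{"words": 2}'],
    "node2": ['{"words": 7}'],
    "node3": ['{"words": 1}', '{"words": 6}'],
}

def map_fn(value):
    # Runs where the data lives; emits a list of intermediate results.
    return [json.loads(value)["words"]]

def reduce_fn(values):
    # Runs on the single node that invoked the job.
    return [sum(values)]

mapped = []
for values in node_data.values():   # the "distributed" phase, simulated
    for v in values:
        mapped.extend(map_fn(v))

print(reduce_fn(mapped))  # [20]
```

In real Riak the same two functions would be written in Erlang or JavaScript and shipped to the cluster as part of the job.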


There is also a "Riak Search" engine that can be run on top of the basic Riak key/value store, providing fulltext searching (with the option of a Solr-like interface) while being simpler to use than MapReduce.


Technical details


Riak groups keys in namespaces called "buckets" (which are logical, rather than being tied to particular storage locations).


Riak has a first-class HTTP/REST API. It also has officially supported client libraries for Python, Java, Erlang, Ruby, and PHP, and unofficial libraries for C/C++ and Javascript. There is also a protocol buffers API.
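For a flavor of the HTTP API, here is a sketch using only the Python standard library. The /buckets/&lt;bucket&gt;/keys/&lt;key&gt; URL scheme and the default HTTP port 8098 come from the Riak docs; the host, bucket, and payload are assumptions, and the request is only constructed here, not sent:

```python
from urllib.request import Request

# Build (but don't send) a Riak HTTP write. 8098 is Riak's default
# HTTP port; the host and example payload are placeholders.
def riak_put(host, bucket, key, data, content_type):
    url = "http://%s:8098/buckets/%s/keys/%s" % (host, bucket, key)
    return Request(url, data=data, method="PUT",
                   headers={"Content-Type": content_type})

req = riak_put("127.0.0.1", "articles", "hello",
               b'{"title": "Hello"}', "application/json")
print(req.get_method(), req.full_url)
# PUT http://127.0.0.1:8098/buckets/articles/keys/hello
```

Reads are the same URL with GET, which is part of what makes the HTTP API so pleasant to poke at with curl.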


Riak distributes keys to nodes in the database cluster using a technique called "consistent hashing," which prevents the need for wholesale data reshuffling when a node is added or removed from the cluster. This technique is more or less inherent to Dynamo-style distributed storage. It is also reportedly used by BitTorrent, Last.fm, and Akamai, among others.
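A bare-bones consistent-hash ring in Python shows why adding a node avoids wholesale reshuffling. This is illustrative only (Riak's real ring uses a fixed number of partitions and virtual nodes, which this sketch omits):

```python
import hashlib
from bisect import bisect

def _h(s):
    # Hash a string to a point on the ring.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # Each node owns the arc ending at its hash point.
        self.ring = sorted((_h(n), n) for n in nodes)

    def node_for(self, key):
        hashes = [h for h, _ in self.ring]
        i = bisect(hashes, _h(key)) % len(self.ring)  # next point clockwise
        return self.ring[i][1]

keys = ["key%d" % i for i in range(1000)]
before = Ring(["node1", "node2", "node3"])
after = Ring(["node1", "node2", "node3", "node4"])

# Only keys falling on the arc claimed by node4 change owners; every
# other key keeps its old home.
moved = sum(1 for k in keys if before.node_for(k) != after.node_for(k))
print("keys moved:", moved, "of", len(keys))
```

Contrast this with naive `hash(key) % num_nodes` placement, where changing `num_nodes` remaps nearly every key in the store.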


Riak offers some tunable parameters for consistency and availability. E.g. you can say that when you read, you want a certain number of nodes to return matching values to confirm. These can even be varied per request if needed.
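The "certain number of nodes must confirm" idea can be sketched as a tiny quorum-read function. This is a simplification of what Riak's R parameter does (the real behavior also involves vector clocks and read repair), but it shows the basic trade-off:

```python
from collections import Counter

def quorum_read(replies, r):
    """Return the most common reply if at least `r` replicas agree.

    `replies` is the list of values returned by the contacted replicas.
    """
    value, count = Counter(replies).most_common(1)[0]
    if count >= r:
        return value
    raise RuntimeError("read quorum not met")

replies = ["v2", "v2", "v1"]    # one replica is stale
print(quorum_read(replies, 2))  # v2
```

Raising `r` buys more confidence in consistency at the cost of availability: with `r=3`, the stale replica above would make the same read fail instead.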


Riak's default storage backend is "Bitcask." This does not seem to be something that many users feel the need to change. One operational note related to Bitcask is that it can consume a lot of open file handles. For that reason Basho advises increasing the ulimit on machines running Riak.


Another storage backend is "LevelDB," similar to Google's BigTable. Its major selling point versus Bitcask seems to be that while Bitcask keeps all keys in memory at all times, LevelDB doesn't need to. My guess based on our existing corpus of data is that this limitation of Bitcask is unlikely to be a problem.


Running Riak nodes can be accessed directly via the riak attach command, which drops you into an Erlang shell for that node.


Bob Ippolito of Mochi Media says: "When you choose an eventually consistent data store you're prioritizing availability and partition tolerance over consistency, but this doesn't mean your application has to be inconsistent. What it does mean is that you have to move your conflict resolution from writes to reads. Riak does almost all of the hard work for you..." The implication here is that our API implementation may include some code that ensures consistency at read time.


Operation


Riak is controlled primarily by two command-line tools, riak and riak-admin.


The riak tool is used to start or stop Riak nodes.


The riak-admin tool controls running nodes. It is used to create node clusters from running nodes, and to inspect the state of running clusters. It also offers backup and restore commands.


If a node dies, a process called "hinted handoff" kicks in. This takes care of redistributing data -- as needed, not en masse -- to other nodes in the cluster. Later, if the dead node is replaced, hinted handoff also guides updates to that node's data, catching it up with writes that happened while it was offline.


Individual Riak nodes can be backed up while running (via standard utilities like cp, tar, or rsync), thanks to the append-only nature of the Bitcask data store. There is also a whole-cluster backup utility, but if this is run while the cluster is live there is of course risk that some writes that happen during the backup will be missed.


Riak upgrades can be deployed in a rolling fashion without taking down the cluster. Different versions of Riak will interoperate as you upgrade individual nodes.


Part of Basho's business is "Riak Enterprise," a commercial Riak offering. It includes multi-datacenter replication, 24x7 support, and various services for planning, installation, and deployment. Cost is $4,000 - $6,000 per node depending how many you buy.


Overall, low operations overhead seems to be a hallmark of Riak. This is both in day-to-day use and during scaling.


Suitability for use with our CMS


One of our goals is "store structured data, not presentation." Riak fits well with this in that the stored values can be of any type -- plain text, JSON, image data, BLOBs of any sort. Via the HTTP API, Content-Type headers can help API clients know what they're getting.


If we decide we need to have Django talk to Riak directly, there is an existing "django-riak-engine" project we could take advantage of.


TastyPie, which powers our API, does not actually depend on the Django ORM. The TastyPie documentation actually features an example using Riak as data store.


The availability of client libraries for many popular languages could be advantageous, both for leveraging developer talent and for integrating with other parts of the stack.


Final thoughts


I am very impressed with Riak. It seems like an excellent choice for a data store for the CMS. It promises the performance needed for our consistently heavy traffic. It's well established, so in using it we wouldn't be dangerously out on the bleeding edge. It looks like it would be enjoyable to develop with, especially using the HTTP API. The low operations overhead is very appealing. And finally, it offers flexibility, scalability, and power that we will want and need for future projects.
