kodekitchen-blog
KodeKitchen
24 posts
Development Recipes
kodekitchen-blog 11 years ago
A sip from Flask
Lately I found Django a bit top-heavy for one of my projects, so I chose Flask as a lighter and smaller alternative. After fiddling with the tutorials for a bit I wanted a setup with several modules. Surprisingly that wasn't as easy, as the snippets and examples showed several options and configurations. So, this is what worked for me. It may not be the true gospel, but I wanted modules mounted at certain URLs, like mounted apps in Padrino.
This is what I came up with:
+ Project
  -- start.py
  + module1
    -- __init__.py
    -- app.py
  + module2
    -- __init__.py
    -- app.py
So module1 and 2 are two functional units which should answer to specific prefixes (localhost:5000/module1 and localhost:5000/module2) and start.py is the file to run the whole show.
I used Flask's blueprints to get it all under one roof.
First let's get the modules to behave like modules. In module1/app.py I added:
from flask import Blueprint

app1 = Blueprint('app1', __name__)
...
@app1.route(...)
...
For module2 app.py looks similar except that app1 is changed to app2.
So, now we have the blueprints, of which the project does not know yet. In fact we don't have any app so far. All the nuts and bolts go into start.py:
from flask import Flask
from module1.app import app1
from module2.app import app2

project = Flask(__name__)
project.register_blueprint(app1, url_prefix='/path1')
project.register_blueprint(app2, url_prefix='/path2')

if __name__ == '__main__':
    project.run()
This is the beauty of blueprints (imho): import the blueprint, register it and put it on a dedicated path.
Done. Two modules in one Flask application.
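As a quick sanity check, the same pattern can be squeezed into a single file and exercised with Flask's test client. The module contents are inlined here for brevity and the route bodies are invented placeholders, not the actual project code:

```python
from flask import Flask, Blueprint

# Inlined stand-ins for module1/app.py and module2/app.py
app1 = Blueprint('app1', __name__)
app2 = Blueprint('app2', __name__)

@app1.route('/')
def index1():
    return 'module1'

@app2.route('/')
def index2():
    return 'module2'

# start.py equivalent: one app, two mounted blueprints
project = Flask(__name__)
project.register_blueprint(app1, url_prefix='/module1')
project.register_blueprint(app2, url_prefix='/module2')

# Each blueprint answers under its own prefix:
client = project.test_client()
print(client.get('/module1/').data.decode())  # module1
print(client.get('/module2/').data.decode())  # module2
```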
kodekitchen-blog 11 years ago
gitweb - shorty
The Team demanded (or asked nicely) for a graphical overview of all the git repositories, so here is the quick way to do it:
Install gitweb
sudo apt-get install gitweb
Make an empty directory that is the root of all the repositories, e.g. pub/. This is necessary since git has no concept of a root repository holding others.
mkdir pub/
Change owner to the user who owns the repositories
sudo chown -R git:git pub
Now we link the repositories into pub/ (Move to pub/ and do)
ln -s /path/repo1.git rep1
ln -s /path/repo2.git rep2
Now we open /etc/gitweb.conf and point $projectroot to pub/
$projectroot = "/path/pub"
Now http://server/gitweb should show the list of repos. If not you probably have to edit $projectroot in /usr/share/gitweb/gitweb.cgi too.
kodekitchen-blog 11 years ago
git hooks - reel in
In the last post I sketched out a simple jabber notification script for remote git repositories. There are some things that can be improved there.
First I added an additional argument to exclude the committer from the message queue. I know that I committed, so I don't have to be informed about that later (I updated my github repo). So, I have another argument in the call, but what now? In pushbot.py there is a dict holding the name of the committer (or email) as key and the jabber id as value.
But that in itself is pretty useless, so we have to tweak the hook a little to pass the name of the committer as a second parameter. This is best achieved by using
git log -1
which gives us the last commit entry. Better still, we can add formatting instructions like this:
git log -1 --pretty=format:"%ce"
which gives us the email address of the committing party. I will use this as the key in the pushbot dict holding the jabber ids to which the push notification should be sent. I don't use the committer's name here because of formatting hubbub and the fact that I am less likely to run into problems with duplicates.
So, in pushbot.py I will add an email-address as a key
rcps_list={'email@server' : 'jabber@server'}
Now the committer should not receive any message concerning his own commits. But we could still improve the notification message by using the very same git log statement.
In hooks/post-receive we could generate a more detailed message using
git log -1 --pretty=format:"%cn, %s"
which gives us the name of the committer and the subject line. Insert this into the message and you have a nice push notification with sufficient detail to decide what to do, without too much overhead.
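The exclusion logic inside pushbot.py boils down to a dict lookup. A minimal sketch (the dict contents and the function name here are assumptions for illustration, not the actual pushbot.py code):

```python
# Maps committer email -> jabber id, as described in the post.
rcps_list = {'email@server': 'jabber@server',
             'colleague@server': 'colleague-jabber@server'}

def recipients(committer_email):
    # Everyone on the list except the person who just pushed.
    return [jid for email, jid in rcps_list.items()
            if email != committer_email]

print(recipients('email@server'))  # ['colleague-jabber@server']
```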
kodekitchen-blog 11 years ago
Git Hook, Line and Sinker
Selfhosting your git repositories is not a bad idea. In fact it is a great idea and it's pretty simple too.
First you make a new directory on an accessible machine which by convention ends on .git. Something like /allmycode/repo1.git
Move into the directory and execute
git init --bare --shared
Great, we got ourselves a shareable git repository. If you don't want to be the only one working on that repository, and have no intention of making it public either, you should create a user specifically for git operations on the machine you serve your repositories from. Let's assume your specialized user is called "git".
You can now add ssh public keys from all parties that should have access to the repos via ssh-copy-id (they end up in /home/git/.ssh/authorized_keys) and have nice passwordless access control.
Now we can start to work on the remote repository. In your local working directory we
git init
and provide the user information that is used in the commit message
git config --global user.name "Your Name"
git config --global user.email [email protected]
This was all local, so let's add the information about remote
git remote add origin git@server:/allmycode/repo1.git
This enables us to push to the remote with the shorter
git push origin master
It is completely viable to add differently labeled remote repositories e.g.
git remote add github sth@github
and push a specialised branch (without passwords for example) there via
git push github public
Nice, self-hosted remote repositories! You can start collaborating. And when you do, you might want to automate transferring the newest version to a testing server. You could do this with a cronjob and some copying, or you could use git's very own hooks, to be specific a post-receive hook: every time somebody pushes changes to the remote repository this action is called. Connect to the remote repository and enter the directory hooks/. Here you find some nice samples, but we want something different. So we create that hook:
touch post-receive
then we paste in
#!/bin/sh
GIT_WORK_TREE=/path/to/serverroot/ git checkout -f
and save. Make it executable and you've made a git hook. Congrats! Since we have a user named git who owns all the repos on our remote machine, we must add him to the group that controls the webserver paths (www-data or similar). Full instructions to make the checkout work.
Now every push to the remote repository should trigger a checkout which hopefully makes the newest version available on the webserver.
But let's tweak things a little. Say we want to be notified whenever a commit has been pushed. Email and telephone are viable but time-consuming, and you don't want to, and frankly should not have to, bother. I think Jabber is a great way of getting the information across without spamming the whole team. So I made a little script to send a message to everybody who cares to give me their jabber id. You can get it via
git clone https://github.com/kodekitchen/punobo.git
If you add to the post-receive hook
python /<path-to-repo>/pushbot.py "Something has been pushed."
not only will your testing/demo/development server automatically have been updated, but all listed members of the working group will be informed about it on Jabber.
kodekitchen-blog 11 years ago
Business Card with Latex
So, I needed some business cards for a meeting, but I rarely ever need more than 8 or so at a time. (Yes, I'm aware that they look less classy, but having some printed would have taken too long.) I decided to make some with LaTeX. So, here is what I have done:
First I declared a XeTeX preamble to be able to use the fonts from my Linux system and to not have to bother with encoding:
\documentclass[a4paper,11pt]{article}
\usepackage[cm-default]{fontspec}
\usepackage{xunicode}
\usepackage{xltxtra}
\usepackage{graphicx}
\setmainfont[Mapping=tex-text]{Ubuntu}
\setsansfont[Mapping=tex-text]{Ubuntu}
\setmonofont[Mapping=tex-text]{Cantarell}
Next I got rid of all elements that come with the article document class by default and redefined width and height of the paper to match an A4 sheet, plus some other dimensions.
\pagestyle{empty}
\setlength{\unitlength}{1mm}
\setlength{\paperheight}{297mm}
\setlength{\paperwidth}{210mm}
\setlength{\oddsidemargin}{-7mm}
\setlength{\topmargin}{32mm}
\setlength{\textheight}{280mm}
After that I declared all text elements that should be on the card.
\newcommand{\bcname}{Caspar David Dzikus}
\newcommand{\bctitleA}{KodeKitchen Writer}
\newcommand{\bctitleB}{}
\newcommand{\bccontactA}{555-555-5555}
\newcommand{\bccontactB}{[email protected]}
\newcommand{\bccontactC}{http://kodekitchen.com}
\newcommand{\bcsub}{coding and stuff}
The document itself is pretty straightforward: the card is a picture which is then repeated ten times (five rows, two columns) inside another picture. To help cut the cards, marks are placed in the corners of each card (each card is 80 x 50 mm).
\begin{document}
\begin{picture}(170,209)(0,0)
\multiput(0,0)(0,50){5}{
  \multiput(0,0)(80,0){2}{
    \begin{picture}(80,50)(0,0)
      % marks for cutting
      \put(-1,0){\line(1,0){2}}
      \put(0,49){\line(0,1){2}}
      \put(-1,50){\line(1,0){2}}
      \put(0,-1){\line(0,1){2}}
      \put(80,49){\line(0,1){2}}
      \put(80,-1){\line(0,1){2}}
      \put(79,0){\line(1,0){2}}
      \put(79,50){\line(1,0){2}}
      \put(13,39.5){\textsf{\LARGE\bcname}}
      \put(13,34){\textsf{\scriptsize\bctitleA}}
      \put(13,31){\textsf{\scriptsize\bctitleB}}
      \put(13,24){\tt{\normalsize\bccontactA}}
      \put(13,19){\tt{\normalsize\bccontactB}}
      \put(13,14){\tt{\normalsize\bccontactC}}
      \put(55,8){\textsf{\scriptsize\bcsub}}
    \end{picture}
  }
}
\end{picture}
\end{document}
And this is what you get
[image: the rendered sheet of business cards]
kodekitchen-blog 12 years ago
Bundling in Python
There is a nice way to deal with requirements in Python, which is
pip freeze
with
pip freeze > requirements.txt
you can easily store your dependencies in a simple txt file, and with
pip install -r requirements.txt
get pip to install requirements as specified in the file in a different environment.
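The resulting requirements.txt is just one pinned package per line, for example (the names and versions here are illustrative, not from an actual freeze):

```
Django==1.4.2
pyelasticsearch==0.2
py-bcrypt==0.2
```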
kodekitchen-blog 12 years ago
Indexing with Elasticsearch and Django
So, every decent webapp needs a search feature? Okay, here we go.
It all starts with downloading Elasticsearch. After extracting, start it with
bin/elasticsearch -f
The -f parameter keeps it in the foreground and gives you a little output, especially the port and host. By default this would be localhost:9200.
So let's get to the Django bit. First thing to check is whether the model object you want to index for search has one or more foreign key fields. If so, you might not want to index the ids (it is very unlikely that some user would search for an id). So what to do? Since data is passed to Elasticsearch as a JSON object, we will use Django's built-in serializer to convert our model object into a JSON object and then pass that on. The serializer provides an option to use something called natural keys, which is enabled by adding
use_natural_keys = True
to the serializers.serialize('json', modelObject) call as a third argument. To successfully use this, the model which the foreign key field references has to be extended by a method natural_key.
As an example let's say we've got two model classes: one is Product, which has a foreign key field manufacturer referencing a model of said name:
Manufacturer
  name
  address
  website
  ...

Product
  prod_id
  name
  manufacturer  <- there it is, a foreign key to the above
  price
  ...
So if we want to index products for search we may want the manufacturer field to be a name (or a name and address combination etc.). Therefore we define a method "natural_key" in the Manufacturer class i.e.:
def natural_key(self): return (self.name)
Thus when serializing a Product the "unsearchable" ID is converted to the manufacturer's name.
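Roughly, the serialized JSON changes like this (the model label and field values are invented for illustration):

```
without natural keys:
  {"pk": 1, "model": "shop.product",
   "fields": {"name": "Widget", "manufacturer": 7}}

with natural keys:
  {"pk": 1, "model": "shop.product",
   "fields": {"name": "Widget", "manufacturer": "ACME Corp"}}
```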
The general idea now is to pass the object as a serialized string to a function that then does the indexing on its own. Something like this:
...
new_product = Product(...)
new_product.save()
myIndexModule.add_to_index(
    serializers.serialize('json', [new_product], use_natural_keys=True))
So, now to the indexing itself. I use pyelasticsearch for no special reason except that its documentation seemed decent. The indexer is located in a module since I wanted it to be separated from the rest of the application and it is pretty short.
from pyelasticsearch import ElasticSearch
import json

ES = ElasticSearch('http://localhost:9200')

def add_to_index(string):
    deserialized = json.loads(string)
    for element in deserialized:
        element_id = element["pk"]
        # split off the app prefix from "app.model" -- just cosmetics
        name = element["model"].split('.')[1]
        index = name + "-index"
        element_type = name
        data = element["fields"]
        ES.index(index, element_type, data, id=element_id)
That's it. One could certainly do more sophisticated stuff (like plural for the index and singular for the element type, and then do something clever about irregular plurals...) but it does the job.
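The naming step on its own is trivial string work and can be pulled out for testing; a sketch (the plural handling mentioned above is deliberately left out, and the model label is an invented example):

```python
def index_names(model_label):
    # "shop.product" -> ("product-index", "product")
    name = model_label.split('.')[1]  # drop the app prefix
    return (name + "-index", name)

print(index_names("shop.product"))  # ('product-index', 'product')
```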
Now let's use Elasticsearch as a datastore for an application.
But why should we do this? Let's assume we have an application with a member and a non-member area. Members can do stuff on a database and non-members cannot. Since you want to keep the database load from users who do not add anything to your service to a minimum, to provide a snappy experience for your members, you don't want them to clog the connection with database requests, and decide to let Elasticsearch handle that. And anyway, it's just for fun :-)
So the idea is to make an ajax call to elasticsearch and show a list of the last ten products added to the index to the user. In one of your views for non-members you put a javascript function like this:
$.getJSON('http://localhost:9200/product-index/_search?sort=added&order=asc&from=0&size=10',
    function(response){ .... })
and in the function you can now start to play around with the fields like
$.each(response.hits.hits, function(i, item){
    item._source.name
    ...
})
and present them to the users.
kodekitchen-blog 12 years ago
Custom authentication in Django
After fiddling with Django's auth app for a while I decided to rather have my own (I know, why would one do this? Answer: to learn). It consists of several steps:
registration
activation
adding a password
login
First I created an app for user-management
$python manage.py startapp user_management
This gave me the structure to work with. First I created the user model:
from django.db import models
import bcrypt

class User(models.Model):
    email = models.CharField(max_length=100, unique=True)
    firstname = models.CharField(max_length=30)
    lastname = models.CharField(max_length=30)
    password = models.CharField(max_length=128)
    last_login = models.DateTimeField(auto_now=True)
    registered_at = models.DateTimeField(auto_now_add=True)
    core_member = models.BooleanField()
    activation_key = models.CharField(max_length=50, null=True)
The idea here was to have email as username and to have that unique. I don't consider usernames a good choice for logins but rather a feature for profiles; but that depends on one's taste, I think.
The registration view is pretty straightforward. I create a RegistrationForm object with fields for email, first and last name. The activation_key is simply a string of randomly chosen ASCII characters and digits. Activation itself is just creating a link, sending it and comparing the random part of the link with the stored string. If they match, is_active is set to True and the user can set his/her password. For passwords I normally store bcrypt hashes in the database (NEVER store plaintext passwords in a database!). This is quite simple and can be done by following this description.
The function for setting the password goes into the model. For this to work I use a classmethod. As the name suggests, this is a method bound to the class, not to an instance of said class, which allows getting objects as in cls.objects.get() - the classmethod's equivalent of self.something in instance methods.
@classmethod
def set_password(cls, user_id, plain_pass):
    secret = bcrypt.hashpw(plain_pass, bcrypt.gensalt())
    user = cls.objects.get(pk=user_id)
    user.password = secret
    user.save()
    return True
The login process itself is done via another classmethod which I named authenticate:
@classmethod
def authenticate(cls, email, password, request):
    user = cls.objects.get(email__exact=email)
    if bcrypt.hashpw(password, user.password) == user.password:
        request.session['user_id'] = user.id
        user.save()  # this is to get last_login updated
        return user
    else:
        return None
(In order for this to work you have to enable the session middleware and the session app in settings.py.)
So, a quick rundown.
Since I use email as a unique identifier for the login, the function expects an email address which is used to find the person to authenticate, the plaintext password (e.g. as given from an input field) and the request object to make use of a session. (I use database session handling for development but there are alternatives described in the django docs.)
The bcrypt check returns True if the given plaintext password, hashed, matches the stored hash, and False if not.
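The check-by-rehash idea behind that comparison can be illustrated with a plain hashlib sketch. This is NOT a substitute for bcrypt (no work factor, different salt handling); it only mimics the invariant: the stored value carries the salt, and hashing the candidate password with that salt must reproduce the stored value:

```python
import hashlib
import os

def make_secret(plain):
    # Store salt and hash together, like bcrypt does inside its hash string.
    salt = os.urandom(8).hex()
    return salt + '$' + hashlib.sha256((salt + plain).encode()).hexdigest()

def check(plain, stored):
    # Re-hash the candidate with the stored salt and compare.
    salt = stored.split('$')[0]
    return stored == salt + '$' + hashlib.sha256((salt + plain).encode()).hexdigest()

secret = make_secret('hunter2')
print(check('hunter2', secret))  # True
print(check('wrong', secret))    # False
```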
After having checked that the user has given the right credentials, I store the user_id in the session, which allows me to get the full set of user information should I need it.
I save the user to trigger the auto_now option of the user model, which updates the last_login field to the current time.
Now with
User.authenticate(email, password, request)
the user is logged in.
kodekitchen-blog 12 years ago
Lots of code on here was written with the music of this artist :-) Listen/purchase: Into The Trees by Zoe Keating
kodekitchen-blog 12 years ago
Setting up my own flavour of Django
Okay, so I started doing stuff in Python and of course started playing around with Django. And being used to Padrino's convenient generators, I had to figure out how to get to my preferred setup. This is what I do:
Run django-admin.py startproject projectname
In settings.py add import os.path and add os.path.join(os.path.dirname(__file__), 'templates') to TEMPLATE_DIRS
Make dir templates/ in the project folder
Make dir views/ in the project folder
Add an __init__.py file
Import your views in __init__.py (e.g. from index import hello if you have a view file called index.py containing a function hello())
In templates I put subdirs for all sites and a base.html which holds the frame for all sites.
Now in urls.py import all views via from views import *
So, this gives me a view and a template dir as well as a frame for the sites. Now that I got the views and templates going, I would like to have a separate dir for static content. Django's static URL is simply /static, which is fine by me, but making a directory named static and putting stuff in won't do. You have to put
STATICFILES_DIRS = (os.path.join(os.path.dirname(__file__), 'static/'),)
Then, after putting

{% load staticfiles %}

into base.html, you can include static files (CSS, images and so on) by putting

{% static "foo/bar.ext" %}

into the template.
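Put together, the relevant settings.py additions look roughly like this (a sketch with the directory names assumed above):

```python
import os.path

# Resolve dirs relative to this settings.py file
PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))

TEMPLATE_DIRS = (
    os.path.join(PROJECT_DIR, 'templates'),
)

STATICFILES_DIRS = (
    os.path.join(PROJECT_DIR, 'static/'),
)
```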
kodekitchen-blog 12 years ago
Legacy code and the "SuperProgrammer"
I started an online Python course some days ago and part of the assignment is to peer-evaluate other people's code. The task was to print a message on the screen. Yes, I know, a boring task. There I came upon something like this:

string = "xdxlxrxoxW xoxlxlxexH"
string = string[::-2]
print string

And this, in three lines, is the essence of problems I've encountered over the years with big, complex projects and legacy code. Remarkably it seems to be a trap each project's "Super-Programmer" falls into...

1. Show-off programming. It's okay to be proud of one's knowledge but, come on, this is about the job, not your ego.

2. The code is the documentation. NO, definitely not. Code is just a small part of any bigger or more complex project. There usually are configuration, directory structures, external dependencies (libraries) etc. Put it somewhere to be seen (the init file, a readme, a getting-started txt file) but don't assume.

3. Don't oversmart. You found this very cool, super cryptic looking function that does unexpected things... Yeah, probably use something that can be understood right away, or at least leave a comment about what it does.

4. Modularize to death. Especially in Ruby (but in any other language as well) I found many people building modules around simple functions, metaprogramming things to bits and doing stuff they found in years-old posts somewhere. Those techniques are all good and useful at times, but not every function is predestined to be reused in another project, so why not declare it a helper function?

In short:

1. Write code that can be read by an average coder, not just by the "Super-Programmer". Project or company dev teams seldom have an even knowledge distribution. (And in most cases you don't even want that.)
2. Documentation!
3. Comments!
4. Put your ego aside. I rarely think stuff like "Oh my, he/she came up with a fancy solution"; mostly it is along the line of "WTF! Why didn't he use the obvious solution?"

So if there is a reason for doing it differently, go back to point 2 or 3.
kodekitchen-blog 12 years ago
Ruby Shortcuts
I sometimes stumble upon little snippets of Ruby code that can replace longer loops. Like this one:

array.reject!(&:empty?)

which eliminates all empty strings from an array of strings (does not work for Fixnum), which is quite handy following a "split" operation. Another one is this:

array.inject(:+)

which sums up all elements in an array and also works for strings.
kodekitchen-blog 12 years ago
Automating virtualization with veewee and vagrant
In order to consolidate my development environment a friend (thank you @bascht) mentioned veewee to me. Veewee automates and simplifies vagrant basebox creation. It comes with a plethora of templates for operating systems and versions. I use it to quickly build an Ubuntu server on VirtualBox.

I started with a standard template

vagrant basebox define 'appbox' 'ubuntu-12.10-server-amd64'

and customized the postinstall.sh, including replacing the ruby install with rvm. This leads to problems when using chef, which I don't necessarily use, so I can live with the inconvenience rather than with having two ruby setups.

I keep my own definitions for veewee in a separate directory to be able to version control it. Basebox creation itself is done by a little script. This setup requires git and VirtualBox to be installed. I clone veewee parallel to the directory holding my definitions.

cd ..
BASE=$(pwd)
echo "\n\nBuilding dbesbox and exporting\n\n"
if [ ! -d $BASE/veewee/definitions ]; then
  mkdir $BASE/veewee/definitions/
fi
cp -r $BASE/my-devenv/appbox/ $BASE/veewee/definitions/appbox/
cp -r $BASE/my-devenv/databasebox/ $BASE/veewee/definitions/databasebox/
cd $BASE/veewee/
veewee vbox build 'appbox' --force
veewee vbox export 'appbox' --force
echo "\n\nDone building and exporting appbox\n\n"
mv $BASE/veewee/appbox.box $BASE/my-devenv/
veewee vbox build 'databasebox' --force
veewee vbox export 'databasebox' --force
echo "\n\nDone building and exporting databasebox\n\n"
mv $BASE/veewee/databasebox.box $BASE/my-devenv/
cd $BASE/my-devenv/
veewee alias delete vagrant # Eradicate the "Gemfile could not be found" error
source ~/.zshrc

After building the two boxes I add them with

vagrant box add 'appbox' 'appbox.box'

and databasebox accordingly.
My Vagrantfile looks like this:

Vagrant::Config.run do |config|
  config.vm.define :app do |app_config|
    app_config.vm.box = "appbox"
    app_config.vm.network :hostonly, "10.10.1.2" # 10.10.1.1 is the host
    app_config.vm.provision :shell, :path => "provisions/common_base.sh"
    app_config.vm.provision :shell, :path => "provisions/app_base.sh"
  end
  config.vm.define :db do |db_config|
    db_config.vm.box = "databasebox"
    db_config.vm.network :hostonly, "10.10.1.3"
    db_config.vm.provision :shell, :path => "provisions/common_base.sh"
    db_config.vm.provision :shell, :path => "provisions/db_base.sh"
  end
end

My provisioner is shell, but a »gem install chef« would enable you to use chef subsequently to provision. (Since rvm installation requires closing and restarting the shell or login/logout I found this to be more reliable.) Now a simple 'vagrant up' starts both boxes and pointing the browser at 10.10.1.2 is answered by the nginx in appbox. My application is configured to use the db server at 10.10.1.3.
kodekitchen-blog 12 years ago
Update on the .zshrc
I needed a little more information from my prompt so I extended it a bit:

HISTFILE=~/.histfile
HISTSIZE=1000
SAVEHIST=1000
setopt autocd
setopt promptsubst
autoload -U colors && colors
autoload -Uz vcs_info && vcs_info
precmd() { vcs_info }
zstyle ':vcs_info:*' enable git hg bzr
zstyle ':vcs_info:*' check-for-changes true
zstyle ':vcs_info:*' get-unapplied true
zstyle ':vcs_info:*' unstagedstr "!"
zstyle ':vcs_info:*' formats "%F{5}[%s:%r|%b]%u"
zstyle ':vcs_info:*' actionformats "%F{5}[%s:%r|%b-%a]"
PROMPT="%F{2}%n@%M:%F{6}%d%F{11}>> "
RPROMPT='${vcs_info_msg_0_}'

The result is a shell that lets me switch into a directory without typing cd, and if the dir is version controlled it shows me the versioning system, the repo name, the branch I'm on and whether there are unstaged changes (indicated by !).
[image: the resulting zsh prompt]
kodekitchen-blog 12 years ago
Datamapper - Padrino - warden
I took a break from coding, but was still looking for a useful set of tools for developing web applications. And I think I found a solution that fits my needs (small core but extensible, modular, reasonable features, usable documentation or active user groups at least). The goal was to create a backend that would output JSON objects that could be processed in an independent frontend.

First step was to generate a project following the guide:

padrino g project -d datamapper -a mysql -e none

I set the renderer (-e option) to none because I am using rabl for templating the JSON output. For authentication I chose warden. So I added these to the Gemfile:

gem 'warden'
gem 'rabl'

Then I turned to app/app.rb and added:

use Warden::Manager do |manager|
  manager.default_strategies :password
  manager.failure_app = myApp
end

Warden::Manager.serialize_into_session do |user|
  user.id
end

Warden::Manager.serialize_from_session do |id|
  User.get(id)
end

For creating the model I used the padrino generator again, since the user model is pretty straightforward (extend as needed):

padrino g model User username:string password:string email:string

After setting up config/database.rb you can create the database by using

padrino rake dm:create

To have some entries in the database to work with I customize db/seeds.rb, which is mentioned in the padrino blog tutorial.

Having done this, warden should be in the system but is not working yet, since we have to define at least one strategy. For now I like to use a common username/password login, which is already defined as default in manager.default_strategies. (You could add others if you wanted to; look at the warden wiki for details.)

Warden::Strategies.add(:password) do
  def valid?
    ... code goes here ...
  end

  def authenticate!
    ... code goes here ...
    ? success!(user) : fail!("Invalid")
  end
end

So in valid? you would define the requirements that have to be met to go on with the authentication process.
In this case checking params["username"] && params["password"] would make sense. After creating a usable authenticate! function, requests to a controller can be authenticated by adding

env['warden'].authenticate!

before the login controller code. If authentication was successful you can add

env['warden'].authenticated?

to following controllers and get the user (or what you decided to return for success) by calling env['warden'].user.

I tested this with curl, since the frontend is intended to be independent. I put the login process in a post route, so after starting padrino,

curl -d "username=...&password=..." localhost:3000/login

gave me the defined output of a successful login. One pitfall when testing a subsequent controller with curl is that, in contrast to a browser, you have to add the cookie information. In order to get it you could call

curl -vvv -d "username=...&password=..." localhost:3000/login

and extract the rack.session=... ...; and call the controller with

curl --cookie "rack.session=... ...;" localhost:3000/subsequent_controller
kodekitchen-blog 13 years ago
Sinatra, Datamapper and CouchDB - a trainwreck
Just for purely educational reasons (why not) I decided to combine these three on heroku. The spoiler is already in the title: it didn't work at all. Datamapper is quite cool and works great with the shared Postgres database, but don't even bother to get the mongo or couch adapter working. Datamapper seems to be getting massively reworked; maybe wait for this to be done and then try again. As for Postgres or MySQL, it looks nice.
kodekitchen-blog 13 years ago
Sinatra, Mustache and Heroku - change of plans
Okay, I really like Mustache: it's clean, it has a reasonably small set of options (conditionals, lists etc.) and has many implementations, one of them being JavaScript (mustache.js). So, I thought: what about using Sinatra to generate JSON responses and serve templates, while the client is responsible for the presentation? The benefit being a clean RESTful application and a deferred rendering process, allowing to play with different options for caching JSON objects and exploring AJAX features. And above all a fun project. So, the basic idea:
the root path "/" reads an index html file and hands it out
navigation elements trigger two asynchronous AJAX requests (one for the template, one for the view)
when both are finished mustache.js kicks in and renders the page (or parts of it)
app.rb
get "/" do
  response = File.open("index.html").read
  "#{response}"
end
index.html
...
var template = null;
var view = null;
var templateFinished = false;
var viewFinished = false;

function getTemplate(template){
  [...] xhRequest [...]
  if(request.readyState == 4){
    template = req.responseText;
    templateFinished = true;
    process();
  }
}

function getView(view){
  [...] xhRequest [...]
  if(request.readyState == 4){
    view = req.responseText;
    viewFinished = true;
    process();
  }
}

function process(){
  if(templateFinished == true && viewFinished == true){
    document.getElementById("content").innerHTML = Mustache.to_html(template, view);
  }
  else{
    return;
  }
}
So, in my case the rendered stuff replaces the content in the div with the id "content". To elaborate on that one could pass the id of the element to be replaced as an option to the function and really play with this.