Archive for the ‘.NET’ Category
Compile and run FreeSWITCH in Raspberry pi
Lately I have been spending my free time learning the SIP and RTP protocols. To make progress with my learning, I decided to set up FreeSWITCH. As usual, I decided to use one of my RPIs and compile the system from source. Compiling from source gives me a basic understanding of the binaries and their dependencies.
The first task was to install all the dependencies. I followed this link to set up the deb repository, but I always got the error below. I had no idea how to fix it, so I skipped this step and decided to install the dependencies manually.
Hit:1 http://deb.debian.org/debian bullseye InRelease
Get:2 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Hit:3 http://security.debian.org/debian-security bullseye-security InRelease
Hit:4 http://archive.raspberrypi.org/debian bullseye InRelease
Ign:5 https://freeswitch.signalwire.com/repo/deb/rpi/debian-release `lsb_release InRelease
Err:6 https://freeswitch.signalwire.com/repo/deb/rpi/debian-release `lsb_release Release
404 Not Found [IP: 190.102.98.174 443]
Reading package lists... Done
E: The repository 'https://freeswitch.signalwire.com/repo/deb/rpi/debian-release `lsb_release Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Install dependencies
Some Google searching turned up the basic dependencies needed to compile the code, but I also had to install more while running the configure and make steps. Below is the full set of dependencies I installed to compile the FreeSWITCH source and its dependent projects. If I missed any, please mention it in the comments.
sudo apt-get install build-essential
sudo apt-get install git-core build-essential autoconf automake libtool libncurses5
sudo apt-get install libncurses5-dev make libjpeg-dev pkg-config unixodbc
sudo apt-get install unixodbc-dev zlib1g-dev libtool-bin
sudo apt-get install libcurl4-openssl-dev libexpat1-dev libssl-dev sqlite3
sudo apt-get install libsqlite3-dev libpcre3 libpcre3-dev libspeexdsp1
sudo apt-get install libspeexdsp-dev libldns-dev libavformat-dev ffmpeg
sudo apt-get install libedit-dev python3.9-distutils cmake libswscale-dev
sudo apt-get install liblua5.1-0-dev libopus-dev libpq-dev libsndfile-dev
sudo apt-get install uuid uuid-dev
Compile source code dependencies
In order to compile FreeSWITCH, first we need to compile the below projects.
sofia-sip
cd /usr/src
sudo git clone https://github.com/freeswitch/sofia-sip
cd sofia-sip
sudo ./bootstrap.sh
sudo ./configure
sudo make
sudo make install
spandsp
cd /usr/src
sudo git clone https://github.com/freeswitch/spandsp
sudo apt-get install libtiff-dev
cd spandsp
sudo ./bootstrap.sh
sudo ./configure
sudo make
sudo make install
libks
cd /usr/src
sudo git clone https://github.com/signalwire/libks
cd libks
sudo cmake .
sudo make
sudo make install
signalwire-c
cd /usr/src
sudo git clone https://github.com/signalwire/signalwire-c
cd signalwire-c
sudo cmake .
sudo make
sudo make install
Compile FreeSWITCH
Source code compilation will take some time, be patient.
cd /usr/src
sudo git clone git://git.freeswitch.org/freeswitch.git -b v1.10 freeswitch
cd freeswitch
sudo ./bootstrap.sh
sudo ./configure
sudo make
sudo make install
sudo make cd-sounds-install cd-moh-install
Once all the above steps are completed, you can go through the post-install steps. I only did the owner-permission step, as I intend to run FreeSWITCH from the command line and not as a daemon.
Switching from Windows to MacBook
I was an avid Windows user for over 20 years. My Windows machine is old and may give up on me any time soon, so I started looking for a powerful machine for my development needs. I had almost settled on a Dell laptop, but I started looking at MacBooks as well. In the end I decided to switch to a Mac after reading reviews of the M1 chip's performance. Another reason is that I can run Visual Studio on a Mac, which is an important requirement for me; VS 2022 for Mac is still a preview release, but it will be stable soon. I have also used an iPhone for many years, so I know the basics of how Apple's ecosystem works.
I ordered a MacBook Pro M1 Max with 64 GB. As it was custom built, I waited almost a month, and then the seller told me the delivery would take longer than expected and offered me a 32 GB version instead. I did some research to see whether there would be a performance impact if I downgraded to 32 GB. I came across some YouTube videos where people stress-tested both the 32 GB and 64 GB versions, and I could not see much difference between the two configurations, so I decided to go ahead with 32 GB.
Getting started with mac
Initially it was difficult. I didn't have any clue how to access the Windows Explorer equivalent on the Mac. The next hurdle was the keyboard shortcuts. I started googling to understand macOS and its shortcuts, and in the end I got a fair idea of how the system works.
I got to know how Finder, Launchpad, Spotlight, and so on work; these are some of the basic macOS applications I experimented with first. The next thing to learn was how to access running applications, which meant learning more about the trackpad and its three-finger and two-finger gestures. I was amazed at how well the trackpad works; it is very convenient and very responsive. I never experienced such a trackpad on any Windows laptop.
Setting up the system
I was able to set up my development environment without much of a hurdle. I mostly use VS Code, Visual Studio, Docker, Node.js, Golang, and so on, and I didn't have any trouble setting up any of the applications I use on a daily basis.
Performance of the system
There are so many benchmarks available on the internet and I am not going to repeat them, but the MacBook Pro is blazingly fast. I haven't experienced any kind of slowdown, even when memory utilisation reached close to 28 GB. It is one of the fastest laptops I have ever used. System startup is quick, just like a smartphone. Battery life is very impressive; I haven't tested the full duration, but I could easily work for 6-7 hours without plugging in. I have been using this Mac for a couple of days and have never heard the cooling fan.
The Retina display is one of the best; it is so crisp and clear that I can read it even from a little distance away.
Conclusion
I thoroughly enjoy the experience of using the MacBook. Its build quality is superior. I initially started with my Logitech wireless mouse but soon realised it is easier to use the built-in trackpad than the mouse. With the mouse I don't yet know how to quickly access running programs and the other options the trackpad provides, so I switched off the mouse and started using the trackpad. So far I am really satisfied with my MacBook.
That said, I prefer the keyboard shortcuts in Windows to those on the Mac, for example pressing End or Home to jump to the end or start of a line. I hope that with practice I will become more comfortable with the Mac keyboard.
I have been using the Mac for less than a week; I will update this post once I have used it for some more time.
Go plugin file size optimization
Nowadays most of my pet projects are developed in Golang. Recently I started working on a project that reads data from sensors to automate things at my farm. The program runs on a Raspberry Pi. I designed the system around a plugin architecture; this approach lets me expand the functionality with ease. When I started adding plugins, I realized that the plugin binaries were larger than I expected. With fast internet connectivity the size doesn't cause much harm, but when the system is deployed somewhere with a really slow internet connection, file size really matters. This post is about how I managed to reduce the plugin file size.
If you are designing a pluggable system, real care should be given to the plugin code base. Design the plugins to minimize package imports: Golang plugins are self-sufficient, like any other Go binary, so every package you import adds size to the plugin. To make this clearer, I wrote a small application that loads several plugins, each of which prints hello world.
Lets write some code
Here is what the sample plugin looks like:
package main

import "fmt"

type Plugin struct {
}

func (d Plugin) Write() {
	fmt.Println("Hello world")
}

// P is the symbol the host application looks up.
var P Plugin
I created another plugin, just like the one above, to write hello world in German.
Build the plugin
go build -buildmode=plugin -o writers/plugins/en.so writers/plugins/en/en.go
go build -buildmode=plugin -o writers/plugins/de.so writers/plugins/de/de.go
Now let's examine the size of the plugins:
ls -lh ./writers/plugins/*.so
-rw-r--r-- 1 pi pi 3.3M Apr 25 13:57 ./writers/plugins/de.so
-rw-r--r-- 1 pi pi 3.3M Apr 25 13:57 ./writers/plugins/en.so
As you can see, each plugin is 3.3 MB.
Build again, this time with ldflags: -s strips the symbol table and -w drops the DWARF debug information.
go build -ldflags="-s -w" -buildmode=plugin -o writers/plugins/en.so writers/plugins/en/en.go
Check the size again
ls -lh ./writers/plugins/*.so
-rw-r--r-- 1 pi pi 3.3M Apr 25 14:28 ./writers/plugins/de.so
-rw-r--r-- 1 pi pi 2.4M Apr 25 14:28 ./writers/plugins/en.so
Building with these ldflags reduces the size of the en.so plugin by about 1 MB.
Now let's run upx on the binary:
sudo apt-get install upx
chmod +x ./writers/plugins/en.so
upx -9 -k ./writers/plugins/en.so
Check the file size again
ls -lh ./writers/plugins/*.so
-rw-r--r-- 1 pi pi 3.3M Apr 25 14:28 ./writers/plugins/de.so
-rwxr-xr-x 1 pi pi 2.1M Apr 25 14:28 ./writers/plugins/en.so
Running upx reduces the size by a further 0.3 MB.
That is the maximum reduction I could get from build-time optimization alone.
Refactor the code
This is where we need to redesign the plugins and keep refactoring the code to reduce package imports.
Where does the plugin's size come from? It is the import of the fmt package. If I comment out the fmt.Println call, build with ldflags, and run upx, the plugin size drops to 893K:
ls -lh ./writers/plugins/*.so
-rw-r--r-- 1 pi pi 3.3M Apr 25 14:49 ./writers/plugins/de.so
-rwxr-xr-x 1 pi pi 893K Apr 25 14:49 ./writers/plugins/en.so
So how do we keep the file size small and still achieve the result we need? Interfaces come to our rescue.
Let's create an interface. This is just sample code, so I am not following any naming conventions here:
type Plugger interface {
Print(a ...interface{})
}
Every plugin should now rely on this interface to print hello world. See the refactored en plugin:
package main

// writers is the host package that defines the Plugger interface;
// adjust the import path to match your module.
import "github.com/sonyarouje/goplugin/writers"

type Plugin struct {
}

func (d Plugin) Write(plugger writers.Plugger) {
	plugger.Print("Hello world")
}

var P Plugin
Here is the type that satisfies the Plugger interface. It should live outside the plugins package:
package writers

import (
	"fmt"
)

type PluginUtil struct {
}

func NewPluginUtils() PluginUtil {
	return PluginUtil{}
}

// Print satisfies the Plugger interface; only this host-side
// package pays the size cost of importing fmt.
func (p PluginUtil) Print(a ...interface{}) {
	fmt.Println(a...)
}
Check the size of plugin again
ls -lh ./writers/plugins/*.so
-rw-r--r-- 1 pi pi 1.5M Apr 25 15:11 ./writers/plugins/de.so
-rwxr-xr-x 1 pi pi 897K Apr 25 15:11 ./writers/plugins/en.so
Source code: https://github.com/sonyarouje/goplugin
Expo react-native development in Docker
I spend most of my free time learning to develop applications on different platforms. Recently I have been spending time with Expo, a platform for building React Native apps. Expo is a pretty good platform to kick-start your React Native development. One difficulty I always face is upgrading versions; for example, Expo sometimes releases multiple updates in a month. When upgrading on my Windows machine there are often issues, either a file lock or something else. These installation issues lead to frustration and firefighting to get back to a working state. Recently my friend Sendhil told me how he uses VS Code to develop remotely using containers, and I decided to take a look at it.
I had kept myself away from Docker for some time, but decided to try it again. It took me a few minutes to get a Docker image maintained by Node up and running. The next step was to install expo-cli and the other dependencies needed to run my Expo test application. I had to overcome several errors that popped up when running Expo code in a container, and spent hours reading forums and posts to resolve them one by one. Here is the Dockerfile I came up with, which can be used to develop any Expo-based application.
The workflow below holds good for any kind of Node, React, or React Native development.
Dockerfile
FROM node:10.16-buster-slim
LABEL version=1.0.0
ENV USERNAME dev
RUN useradd -rm -d /home/dev -s /bin/bash -g root -G sudo -u 1005 ${USERNAME}
EXPOSE 19000
EXPOSE 19001
EXPOSE 19002
RUN apt update && apt install -y \
    git \
    procps
#used by the react native builder to set the ip address; otherwise it
#will use the ip address of the docker container.
ENV REACT_NATIVE_PACKAGER_HOSTNAME="10.0.0.2"
COPY *.sh /
RUN chmod +x /entrypoint.sh \
    && chmod +x /get-source.sh
#https://github.com/nodejs/docker-node/issues/479#issuecomment-319446283
#should not install any global npm packages as root; a new user
#is created and used here
USER $USERNAME
#set the npm global location for the dev user
ENV NPM_CONFIG_PREFIX="/home/$USERNAME/.npm-global"
RUN mkdir -p ~/src \
    && mkdir ~/.npm-global \
    && npm install expo-cli --global
#append .npm-global to PATH; otherwise globally installed packages
#will not be available in bash
ENV PATH="/home/$USERNAME/.npm-global:/home/$USERNAME/.npm-global/bin:${PATH}"
ENTRYPOINT ["/entrypoint.sh"]
CMD ["--gitRepo","NOTSET","--pat","NOTSET"]
VS Code to develop inside a container
To enable VS Code to develop inside a container, we need to install the Remote Development extension pack. Here is a more detailed write-up from Microsoft.
To enable remote development we need two more files in our source folder.
- docker-compose.yml
- devcontainer.json
docker-compose.yml
version: '3.7'
services:
  testexpo:
    environment:
      - REACT_NATIVE_PACKAGER_HOSTNAME=10.0.0.2
    image: sonyarouje/expo-buster:latest
    extra_hosts:
      - "devserver:10.0.0.2"
    command: "--gitRepo sarouje.visualstudio.com/_git/expotest --pat z66cu5tlfasa7mbiqwrjpskia"
    expose:
      - "19000"
      - "19001"
      - "19002"
    ports:
      - "19000:19000"
      - "19001:19001"
      - "19002:19002"
    volumes:
      - myexpo:/home/node/src
volumes:
  myexpo:
- REACT_NATIVE_PACKAGER_HOSTNAME: tells the React Native builder which IP to use when exposing the bundler; otherwise it uses the Docker container's IP and the bundler will not be reachable from your phone.
- command: specifies the git repo to get the source code from and the PAT. When you run docker-compose up, the container uses these details to clone your repo into the /home/dev/src directory of the container.
- volumes: containers are short-lived, and stopping a container loses your data. For example, once the container is up we might install npm packages; if the packages don't persist, we have to reinstall them every time we start the container. To persist the packages and changes, docker-compose creates a named volume and keeps the files of /home/dev/src in it, so they remain accessible even after a Docker restart.
Keep in mind that 'docker-compose down -v' removes the named volume, after which all the packages need to be reinstalled.
devcontainer.json
Create a new folder named .devcontainer and inside the new folder create a file named devcontainer.json. Below is the structure of the file.
{
  "name": "expo-test",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "testexpo",
  "workspaceFolder": "/home/dev/src/expotest",
  "extensions": [
    "esbenp.prettier-vscode"
  ],
  "shutdownAction": "stopCompose"
}
- dockerComposeFile: will tell where to find the docker-compose.yml file
- service: Service configured in docker-compose.yml file
- workspaceFolder: Once VS Code attached to the container, will open this workspace folder.
- extensions: lists the extensions that need to be installed in the VS Code instance running in the container.
Work flow
- Download the latest version of docker
- Open powershell/command prompt and run ‘docker pull sonyarouje/expo-buster’
- Open your source folder and create docker-compose.yml and .devcontainer/devcontainer.json file
- Modify docker-compose.yml and give the git repo and pat, etc
- Open VS Code in source folder. VS Code will prompt to Reopen in Container, click Reopen in Container button. Wait for some time, and VS Code will launch from the container.
- Once launched in container, all your code changes will be available only in the container. Make sure to push your changes to git before exiting the container.
Advantages of containerized approach
We can spawn a new container with ease and test our code against any new version of the libraries we are using, without putting our dev machine at risk. If anything breaks or fails to compile, we can destroy that container, go back to the dev container, and proceed with development; there is no need to restore our dev machine to a working state. If the upgrade succeeds, we can destroy the current dev container and use the new container for development. No more hacking at our current working container.
Where is the source?
All the Dockerfiles and scripts are pushed to git. Feel free to fork them or send me a pull request with any changes. I created two versions of the Dockerfile, one for Alpine and one for Buster. As of now the stable VS Code release won't support Alpine, but you can always switch to the VS Code Insiders build to use Alpine.
The Docker image is published to Docker Hub and can be pulled as sonyarouje/expo-buster or sonyarouje/expo-buster:3.0.6, where 3.0.6 is the version of expo-cli.
Aeroponic V3 – controlled by Arduino an overview
For the last couple of months I have been building a new version of my aeroponic controlling system. This time I dropped the Raspberry Pi and moved to Arduino. One reason I moved to Arduino is that it is a microcontroller with no OS, so the system will not crash in case of power failures. The Raspberry Pi, on the other hand, runs Linux, and frequent power failures might corrupt the OS. The new system has all the features of my old version, plus some additional ones.
Overview
I decided to use an Arduino Nano for my development. The Nano has a small footprint and can be plugged into a PCB. I also designed a PCB to hold all the pieces together; we will see the PCB shortly.
I went through several iterations of PCB design. Initially I started with onboard relay modules; later I decided to remove them and plug in external relay modules instead. The reason for external relays is that I can change the relays depending on the water pump's amperage, and I can easily replace a relay if it gets fried.
Mobile application: just like the last version, I created an Android app to control the system, but this time I wrote a native app; previously I used Cordova to build it.
Communication: the mobile app and the Arduino communicate via Bluetooth; I used an HC-06 Bluetooth module. To keep the system simple, I skipped a WiFi module. Maybe in a later version I can include WiFi, or use an Arduino MKR1000, which has built-in WiFi.
Power: the system runs on 12V DC. The board can be powered in two ways: either connect a 12V power adapter with a standard 2.1mm barrel jack, or use a DC converter and supply power via a normal screw terminal.
Features of the Controller system
Controlling the water pump: one of the crucial parts of a hydroponic/aeroponic system is cycling the water at periodic intervals. A water pump is used to cycle the water. The controller should be able to switch the motor on at a particular interval and keep it on for a configured time, say run the motor every 30 minutes and keep it on for 3 minutes. These settings can be configured from the mobile application.
Nutrient feeder: in aeroponics/hydroponics, the fertilizers (called nutrients) are mixed into the water. Normally we need to add them manually; this system uses two dosage pumps to add nutrients. We can add nutrients in two ways, either via the mobile app or by manually pressing a button. Through the mobile app, we can specify how many ml of nutrients need to be mixed into the water.
Nutrient mixer: a small wave maker mixes the nutrients while they are being added.
Maintain reservoir water level: one important thing to consider is that the water pump should not run dry; if it does, get ready to buy a new one. In this version, water level sensors report the water level. The system uses a solenoid valve connected to a water source: when the water level drops to a set level, the system activates the valve and starts filling the reservoir; once the water reaches a set level, the system switches the valve off.
PCB
I spent a lot of time designing the board and came up with a very simple design with pluggable external relay modules. I am a beginner in the PCB and electronics world, and had to spend my nights assembling the system on a breadboard to see how each component behaves. For me programming is easy, but not playing with electronic components. At last I came up with a board design. The next big task was to find a shop to manufacture the prototype board. I was in touch with many vendors, and some never responded. I chose Protocircuits to do the PCB manufacturing.
Protocircuits manufactured a beautiful board for me. I have etched several boards at home, but this one was awesome. I spent another night soldering the components to the board; see the assembled board below.
Here the Arduino and Bluetooth modules are not soldered; instead they plug into female headers. External relay modules can be connected via screw terminals.
About Protocircuits
I had a very good experience with Protocircuits. They were very professional in dealing with me and answering all my queries. I should thank Jeffrey Gladstone, Director of Business Development, for his prompt replies to all my questions. If anyone wants to prototype a board, I highly recommend Protocircuits. You can reach them at info@protocircuits.in
Buying components: I highly recommend buying electronic components directly from the market rather than from e-commerce providers. I compared prices in the market with some online electronics shops, and the market prices were much lower. Take, for example, the 24LC256 chip: on ebay.in it costs Rs 100, while I bought the same chip in the market for Rs 40. If you are in Bangalore, take a ride to SP Road and I am sure you will get all the components you want.
Node js error handling using modules inherited from EventEmitter
In Node.js most operations are asynchronous, and a normal try-catch will not work. If an error occurs and we don't handle it properly, the Node process will crash. In this post I will explain how to handle errors properly using EventEmitter.
In this example we are going to read some data from a SQLite db; I use dblite as my Node module for dealing with SQLite. For this example I created a UserRepository module that inherits from EventEmitter.
UserRepository.js
var dblite = require('dblite');
var util = require('util');
var EventEmitter = require('events').EventEmitter;

var UserRepository = function () {
    var self = this;
    // open the db file (the path here is just an example)
    var db = dblite('users.db');

    self.getUser = function (userId, callback) {
        db.query('select * from USER where USER_ID=:id',
            { id: userId },
            function (err, rows) {
                if (err) publishErr(err);
                else callback(rows);
            });
    };

    var publishErr = function (err) {
        self.emit('error', err);
    };
};

util.inherits(UserRepository, EventEmitter);
module.exports = UserRepository;
Using util.inherits, we make the UserRepository module inherit from EventEmitter, then export it for use in other Node modules. The publishErr() function emits an 'error' event when an error occurs, and the calling module can subscribe to that event and handle the error.
Let's use the above module in another Node module, say an Express REST API.
restApi.js
var express = require('express');
var bodyParser = require('body-parser');

var UserRepository = require('./UserRepository');
var userRepo = new UserRepository();

var app = express();
app.use(bodyParser.json());
app.listen(8080);

userRepo.on('error', function (err) {
    console.log(err);
});

app.get('/users/:id', function (req, res) {
    userRepo.getUser(req.params.id, function (record) {
        res.send(record);
    });
});
Let's go through the important lines.
Here we are creating the instance of UserRepository module.
var UserRepository = require('./UserRepository'); var userRepo = new UserRepository();
The calling module should subscribe for the error event, that’s what we are doing here. I am just writing the error to the console. You can do whatever you want with that err object.
userRepo.on('error', function (err) { console.log(err); });
This way we can ensure that our errors are handled properly in an elegant way.
Node.js also has an uncaughtException event; use it as a last resort.
process.on('uncaughtException', function (err) { console.log(err); })
Chat application using SignalR 2.1
Around two years ago I published a post about SignalR. Recently some readers requested an updated post using SignalR 2.1, so here it is.
For this post I created a very simple chat application, hosted in IIS and developed in .NET 4.5. Below is the project structure.
- SignalrTracer.ChatServer: an empty ASP.NET Web Application that hosts the SignalR hub.
- SignalrTracer.Publisher: a class library project that contains the SignalR hubs. I created this project just to isolate the SignalR hubs from the ChatServer.
- SignalrTracer.ChatClient: another empty ASP.NET Web Application that acts as the client.
SignalrTracer.Publisher
As I mentioned above, this project contains the SignalR hubs. We can add SignalR framework using Nuget package manager.
Open the Package Manager Console from Tools -> NuGet Package Manager, choose SignalrTracer.Publisher as the Default Project in the console window, then enter
PM> Install-Package Microsoft.AspNet.SignalR.Core
The command will add the SignalR and dependent frameworks. It’s time to create our chat hub.
ChatHub.cs
using Microsoft.AspNet.SignalR;

namespace SignalrTracer.Publisher
{
    public class ChatHub : Hub
    {
        public void Subscribe(string chatId)
        {
            Groups.Add(Context.ConnectionId, chatId);
        }

        public void Publish(string toChatId, string message)
        {
            Clients.Group(toChatId).flush(message);
        }
    }
}
SignalR 2.1 relies on OWIN for hosting. For that we need to create a Startup class, as shown below.
using Microsoft.Owin;
using Owin;
using Microsoft.AspNet.SignalR;

[assembly: OwinStartup(typeof(SignalrTracer.Publisher.Startup))]
namespace SignalrTracer.Publisher
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=316888
            var hubConfig = new HubConfiguration();
            hubConfig.EnableDetailedErrors = true;
            hubConfig.EnableJSONP = true;
            app.MapSignalR("/chatserver", hubConfig);
            app.Run(async context =>
            {
                await context.Response.WriteAsync("Chat server started");
            });
        }
    }
}
That’s it we created our SignalR hub. Let’s host it in our ChatServer.
SignalrTracer.ChatServer
This project is an OWIN host; for that we need to reference another NuGet package called Microsoft.Owin.Host.SystemWeb.
Open the Nuget Package Manager Console and set Default project as SignalrTracer.ChatServer, then enter
PM> Install-Package Microsoft.Owin.Host.SystemWeb
Once all the packages are installed, just reference the SignalrTracer.Publisher project and run the project. If everything is fine, Internet Explorer opens with the string "Chat server started". This means the SignalR hub is up and running and clients can now connect.
SignalrTracer.ChatClient
I used Javascript to connect to SignalR server. Open Nuget Package Manager Console and enter
PM> Install-Package Microsoft.AspNet.SignalR.JS
It installs jQuery and the jQuery extension of the SignalR client. I created an index.html and added the code below.
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <script src="Scripts/jquery-1.6.4.min.js"></script>
    <script src="Scripts/jquery.signalR-2.1.2.min.js"></script>
    <script type="text/javascript">
        $(function () {
            var connection = $.hubConnection('http://localhost:35144/chatserver');
            var proxy = connection.createHubProxy('ChatHub');
            connection.start()
                .done(function () { $('#messages').append('<li>Connected to chat</li>'); })
                .fail(function () { alert("Could not Connect!"); });
            proxy.on('flush', function (msg) {
                $('#messages').append('<li>' + msg + '</li>');
            });
            $("#send").click(function () {
                proxy.invoke('Publish', $("#sendTochatId").val(), $("#message").val());
            });
            $("#connect").click(function () {
                proxy.invoke('subscribe', $("#chatId").val());
                $('#messages').append('<li>subscribed to chat</li>');
            });
        });
    </script>
    <title></title>
</head>
<body>
    <label>Chat id</label>
    <input type="text" id="chatId" /><input type="button" id="connect" value="Connect" /><br />
    <label>Send To</label>
    <input type="text" id="sendTochatId" /><br />
    <label>Message</label>
    <input type="text" id="message" />
    <input type="button" id="send" value="Send" />
    <div>
        <ul id="messages"></ul>
    </div>
</body>
</html>
You might need to change the URL passed to $.hubConnection to match your server.
Once the connection is established, the messages list is appended with a 'Connected to chat' entry.
Enter a unique id in the Chat id box and click Connect; open multiple windows and connect with different chat ids.
As you can see it’s a very basic and simple chat system based on SignalR.
Happy coding…
Compile SQLite for WinRT with FTS4 unicode61 support
I was experimenting with the FTS3/FTS4 feature of SQLite in a WinRT app. The default 'simple' tokenizer won't tokenize special characters like $, @, etc. The solution is to use the 'unicode61' tokenizer. Unfortunately, the SQLite installer for WinRT 8.1 ships without the unicode61 tokenizer. I searched a lot for a SQLite WinRT build that supports unicode61, but I wasn't lucky enough to find one, so I decided to build one myself.
Tim Heuer has a great post explaining how to create a build of SQLite for WinRT. I went through it and acquired all the tools needed to build, including the SQLite source code. I did it exactly the way Tim explained in the video and finally got my build, but it had the same issue: no unicode61 tokenizer support. I tried several builds, every time building with -DSQLITE_ENABLE_FTS4_UNICODE61=1 and the other flags mentioned below.
After several attempts with many permutations and combinations, I got it working. Tim's video is a very good reference for building SQLite for WinRT, but if you want unicode61 support, follow the steps below; they are the same as Tim's with some exclusions.
- mkdir c:\sqlite
- cd sqlite
- fossil clone http://www.sqlite.org/cgi/src sqlite3.fossil
- fossil open sqlite3.fossil
- Do NOT issue 'fossil checkout winrt'; that checkout will never include unicode61.
- Add this step to enable the unicode61 tokenizer: append the configuration below to Makefile.msc. You will see some OPT_FEATURE_FLAGS entries already there; append these lines to them.
OPT_FEATURE_FLAGS = $(OPT_FEATURE_FLAGS) -DSQLITE_ENABLE_FTS4=1
OPT_FEATURE_FLAGS = $(OPT_FEATURE_FLAGS) -DSQLITE_ENABLE_FTS3_PARENTHESIS=1
OPT_FEATURE_FLAGS = $(OPT_FEATURE_FLAGS) -DSQLITE_ENABLE_FTS4_UNICODE61=1
- Compile the code by issuing nmake -f Makefile.msc sqlite3.dll FOR_WINRT=1
I could build x86 and x64 using the VS2012 command prompts for x86 and x64. But when I tried to compile for ARM I got some errors; I was fortunate enough to find a solution on StackOverflow. I followed that solution and got SQLite builds for x86, x64, and ARM.
I didn't want to spend time creating a vsix package and installing it on my machine. Instead I took a backup of SQLite for WinRT version 3.8.5 (on my machine the install path is C:\Program Files (x86)\Microsoft SDKs\Windows\v8.1\ExtensionSDKs\SQLite.WinRT81), then went into each folder and replaced the lib and SQLite dll with the respective builds I created.
Leave your comments if you have any questions.
Happy coding…
WinRT TextBlock with HyperLinks
I was doing some experiments with Windows 8 app development. I wanted to show text in a TextBlock with any URL as a clickable hyperlink, so that the user can click it and navigate to that webpage. We can do this easily via XAML, as shown below. I googled a lot for a way to do the same with an MVVM approach; with no solution in hand, I decided to come up with one using attached properties.
<TextBlock x:Name="textValue" TextWrapping="Wrap">
    <TextBlock.Inlines>
        <Run Text="This is an example of how Hyperlink can be used in a paragraph of text. It might be helpful for you look to"></Run>
        <Hyperlink NavigateUri="www.bing.com">bing</Hyperlink>
        <Run Text="for more answers in the future."></Run>
    </TextBlock.Inlines>
</TextBlock>
I am using Caliburn.Micro to bind the text, so the above approach will not suit my requirement. The solution I came up with uses a custom attached property, in which I apply the same logic, but in code. Have a look:
using System;
using System.Text;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Documents;

namespace Test.App.AttachedProps
{
    public class HyperLinkedTextBlock
    {
        public static string GetText(TextBlock element)
        {
            if (element != null)
                return element.GetValue(ArticleContentProperty) as string;
            return string.Empty;
        }

        public static void SetText(TextBlock element, string value)
        {
            if (element != null)
                element.SetValue(ArticleContentProperty, value);
        }

        public static readonly DependencyProperty ArticleContentProperty =
            DependencyProperty.RegisterAttached(
                "Text",
                typeof(string),
                typeof(HyperLinkedTextBlock),
                new PropertyMetadata(null, OnInlineListPropertyChanged));

        private static void OnInlineListPropertyChanged(DependencyObject obj,
            DependencyPropertyChangedEventArgs e)
        {
            var tb = obj as TextBlock;
            if (tb == null) return;

            string text = e.NewValue as string;
            tb.Inlines.Clear();
            if (text.ToLower().Contains("http:") || text.ToLower().Contains("www."))
                AddInlineControls(tb, SplitSpace(text));
            else
                tb.Inlines.Add(GetRunControl(text));
        }

        private static void AddInlineControls(TextBlock textBlock, string[] splittedString)
        {
            for (int i = 0; i < splittedString.Length; i++)
            {
                string tmp = splittedString[i];
                if (tmp.ToLower().StartsWith("http:") || tmp.ToLower().StartsWith("www."))
                    textBlock.Inlines.Add(GetHyperLink(tmp));
                else
                    textBlock.Inlines.Add(GetRunControl(tmp));
            }
        }

        private static Hyperlink GetHyperLink(string uri)
        {
            if (uri.ToLower().StartsWith("www."))
                uri = "http://" + uri;
            Hyperlink hyper = new Hyperlink();
            hyper.NavigateUri = new Uri(uri);
            hyper.Inlines.Add(GetRunControl(uri));
            return hyper;
        }

        private static Run GetRunControl(string text)
        {
            Run run = new Run();
            run.Text = text + " ";
            return run;
        }

        private static string[] SplitSpace(string val)
        {
            string[] splittedVal = val.Split(new string[] { " " }, StringSplitOptions.None);
            return splittedVal;
        }
    }
}
In the code above I analyze the text for URLs after splitting it; you could also do this with a regex. As this code is just for prototyping, I didn't pay much attention to its quality.
In the XAML page I refer to this attached property as shown below.
xmlns:conv="using:Test.App.AttachedProps"
In the Textblock, bind the text via the attached property as shown below.
<TextBlock conv:HyperLinkedTextBlock.Text="{Binding TextVal}"/>
If TextVal has any words that start with 'http:' or 'www.', the custom attached property creates a Hyperlink; otherwise it creates a Run control. Either way, the result is added to the Inlines collection of the TextBlock.
Happy coding…