Welcome to Oberon.
A virtual company on the internet.

Oberon is an old name: it is the name of one of the moons of the planet Uranus.

Oberon is also the name of the king in Shakespeare's play "A Midsummer Night's Dream".

And it is also the name of a virtual company, registered in Sweden, for development and implementation of advanced embedded software: for example digital filters and real-time kernels used in wireless base stations, mobile telephones, measurement systems, and other cool stuff like games.

Oberon's games are Adventures in Imagination: using your brain for a change, for true creativity in your imagination. "The empires of the future are the empires of the mind" (Winston Churchill, 1943).

Are you the emperor in your mind?

Currently we are selling some of our best double complementary Lattice Wave filter implementations.

We use that type of filter in our own measurement systems for road survey technology.

Some of our customers

  • Nokia Mobile Phones Denmark
  • Ericsson Cables
  • Ericsson Radio systems
  • Cisco Systems
  • Qeyton Systems
  • Zarlink Semiconductor
  • Institute of Optical Research
  • Vägverket

Speciality

We have specialized in DSPs from Texas Instruments and use them exclusively.

Why? Because of their built-in debugging tools (JTAG).

Occasionally I still do software work, as I have more of a leadership role these days. But once someone popped the question "are you any good at communications?" and I assumed it was about human communications.

In Person

I started with electronics at the age of 14 and went to a four-year technical gymnasium in telecommunications. After that I studied further and took a master's degree in Computer Science and Engineering. After the M.Sc. I started at Philips, where I worked for a decade with real-time operating systems, computer language implementations, software tools, and the like.

I left Philips and started the company Oberon, where I worked as a software consultant specialising in digital signal processors (DSPs), embedded systems, and real-time operating systems.

I worked as a consultant for about ten years, and during that time I became curious why some people are an order of magnitude or more better and more efficient at writing code than others. Around the year 2000 I started to read about Neuro-Linguistic Programming (NLP) and Ericksonian hypnosis, and got hooked.

So I found yet another interest
- Using YOUR Brain for a Change.

Around 2003 I took an Executive Master of Business Administration degree,
and a Master's degree in Management and Organisation.

I like exercising my brain to keep busy. Some collect stamps; I collect academic titles.

Learn The Fundamentals

Web designers and web developers like Bootstrap because it is flexible and easy to work with. Its main advantages are that it is responsive by design, it maintains wide browser compatibility, it offers consistent design by using re-usable components, and it is very easy to use and quick to learn.


Learn Vanilla JavaScript

Vanilla JavaScript is not really a framework at all: it is plain JavaScript, one of the lightest-weight choices there is. It is basic and straightforward to learn as well as to use, and you can create significant and powerful applications and websites with vanilla JavaScript alone.



Adventure number One

1. Oberon's Highway and Airfield Pavement Technology
2. Project level Pavement Measurements of Highways and Airfields
3. Airfield Measurements giving data for the TAKEOFF application

Stockholm, Sweden.

We stopped doing measurements.

Why?
The Swedish Road Administration (Vägverket) requested a bid for measuring more than 20 objects and got a fair price for that quantity. Then, when the time came to measure, they only booked 14 measurements. The calculation priced for 20 objects was thus flawed when only 14 were done. They thought they were smart and saved money because they got a lower price per object. True - but I stopped measuring for them (sneaky devils).

Axon1 is a one-track road surface profilometer. It uses a laser, an accelerometer, and a distance transducer to measure road irregularities. It was made for profiling short lengths of road, 10 - 50 kilometers; longer stretches get boring to measure. This rapid-deployment profilometer also works on airfields, and I was out measuring an airstrip at Arlanda, Stockholm. One of its hardware objectives was a total weight of less than ten kilos (about 22 lb.), so that I could carry it easily to and from the measurement car. The profilometer measured, repeatably, road surface profiles with wavelengths up to 200 meters.

There were innovations in AXON1 too. The competitor used an optical trigger directly at the starting point of the object to start the measurement. Sometimes it did not trigger (hi hi).
Axon1 used another method: pick a fixed point maybe 500 meters before the object start (two measurement wavelengths) and measure the number of wheel pulses from the fixed point to the object start. That number was saved. The car was then started from standstill at the fixed location and began saving data immediately. At the place where the wheel-pulse count reached the saved value, the true measurement started. Of course, the data before the object start was also saved. It was much easier to start from a stationary car and prepare for a measurement at standstill than to start measuring at 80-90 km/h by triggering from an optical gadget. Axon1 was also cheaper.
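The start logic is simple enough to sketch in C. This is a hypothetical fragment of the idea only - the names and structure are mine, not taken from the AXON1 source:

#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t pulses_to_object;  /* pulses from fix point to object start, saved during setup */
    uint32_t pulse_count;       /* pulses counted since standstill at the fix point */
    bool     in_object;         /* true once the object start has been passed */
} trigger_state;

/* Called once per wheel pulse (the text gives better than one pulse
   per millimeter of travel), e.g. from an interrupt handler. */
static void on_wheel_pulse(trigger_state *t)
{
    t->pulse_count++;
    if (!t->in_object && t->pulse_count >= t->pulses_to_object)
        t->in_object = true;    /* the true measurement starts here */
}

Data is recorded from the moment the car leaves the fixed point; the flag only marks where, in the recorded stream, the object begins.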

When developing it, I used an old record player with some known notches simulating road cracks et cetera; it went round and round, repeatedly producing the same input data for the laser. The laser was mounted to measure the height differences generated by the notches rotating under it (the rotating disc of the record player). It was also very easy to change the input signal for the laser: put another record on the record player.

The system was put on the car. The laser was calibrated with a metallic cube of known height. Accelerometer calibration was done by bouncing the car up and down while also measuring the up-and-down movement with the now freshly calibrated distance given by the laser. The math for turning distance into velocity and into acceleration is well known, so the up-and-down acceleration is known from the laser measurement; it is correlated with the accelerometer output to negate the car's vertical acceleration, and from this the calibration value for the accelerometer is known. The competitor instead used a method of turning the accelerometer 90 degrees, cancelling out Earth's gravity.

AXON1 calibration
Start the accelerometer calibration, push the car so that it swings up and down a couple of times, and the software calculates the calibration value automatically and saves it. It only takes a minute to do. Axon1 thus does not need any way to get the accelerometer in and out of the box. Road measurement is a dirty business, and sometimes wet;
better to have a box containing the transducers that can be handled indoors where it is nice and cosy.
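The calibration idea can be sketched in a few lines of C. This is my own reconstruction under assumptions about units and sampling, not the AXON1 code: the laser distance is differentiated twice to give a reference vertical acceleration, and a least-squares gain is fitted so that the scaled accelerometer output matches it.

#include <stddef.h>

/* laser_m[]:   laser distance samples in meters, taken while the
                car is bounced up and down
   accel_raw[]: raw accelerometer samples (ADC units), same instants
   dt:          sample interval in seconds
   returns:     calibration value k, in (m/s^2) per raw unit, such that
                k * accel_raw best matches the laser-derived acceleration
                in the least-squares sense                               */
double calibrate_accelerometer(const double *laser_m,
                               const double *accel_raw,
                               size_t n, double dt)
{
    double num = 0.0, den = 0.0;
    for (size_t i = 1; i + 1 < n; i++) {
        /* second difference: vertical acceleration seen by the laser */
        double a_ref = (laser_m[i+1] - 2.0*laser_m[i] + laser_m[i-1]) / (dt*dt);
        num += a_ref * accel_raw[i];
        den += accel_raw[i] * accel_raw[i];
    }
    return num / den;   /* sign depends on how the transducers are mounted */
}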

Thanks to the fast development of DSP technology and the accuracy of new lasers and accelerometers, it was possible for Oberon to put forward a system that measures longitudinal road surface properties with high accuracy, and this at normal road traffic speeds. One of the quality requirements for Oberon's self-developed measurement system, AXON1, was to have as few self-developed parts as possible and to buy parts off the shelf. This requirement made it possible to keep the development cost low while using already proven, high-quality parts. Oberon integrates these hardware and software components into a system and writes the necessary software for it. The effort needed for developing our own hardware is thus less than in usual product development, but it still exists, especially for the analogue parts. It put us in a position where we could respond quickly to customer needs. We made this product fast, since we saw that AXON1 was a product that fitted road "object" measurement requirements.

Project level measurements
The essential difference between a project-level road measurement and a road-network measurement is that for a project-level measurement the correctness and quality of all road data is extremely important. The road data has to be compared with forthcoming measurements of the same project. Project-level measurement is thus extremely dependent on repeatability, since measurements are made over a number of years; the equipment has to give the same response from one year to the next. It is also of great importance that the transducers' resolution is extremely good, and the system's calculations must be executed in floating point so that software algorithms do not distort the measurement data in any way whatsoever. Road-network measurements are aimed more at giving a statistical view of the measured network, for the purpose of determining where the bad road sections are located. A project-level measurement is for determining a specific object's deterioration over time. Project-level data is scrutinized down to its smallest parts, where road-network measurements give a country-wide overall view.


Figure: measurement car, circa year 2000.

AXON1 was mounted in minutes on a lowered Audi with firmer suspension. The sensor box was not mounted with a special gadget onto the tow ball, but on the car's rear transport loop. AXON1 was qualified by the Swedish National Road Administration (SNRA) for object measurements (project-level survey); in December 1997 there were only two companies approved. During two weeks at the end of May 1996, AXON1 measured fifteen road strips more than 150 times, repeatedly, at different velocities. The evaluation was done by VTI, the Swedish Road and Transport Research Institute, and resulted in a report. The qualification was divided into two parts: one part deals with measuring road networks and the other with road objects. The concept of road objects is new. An object is a stretch of road, for example one that is newly built. The idea is to measure an object repeatedly over time with high-accuracy equipment. It is then possible to determine whether the object does or does not comply with quality requirements set by SNRA. Three road properties may be qualified for object measurements: crossfall, IRI, and rut depth. Other road properties are voluntary for object measurements. A very good description of IRI and longitudinal profile is available on the internet at the University of Michigan.

AXON1, in its current version, measures and saves the longitudinal profile every five or ten centimeters, twenty-meter RMS values for the profile in different wavelength bands, and the international roughness value IRI (calculation distance twenty meters). It also computes the new texture value (MPD) according to the ISO standard. The longitudinal road profile can show whether road irregularities with long wavelengths occur; these could be due to movement of the "soil" beneath the asphalt. There are also some new ideas for measuring unevenness by investigating the power spectral density of the road's measured longitudinal profile. Its measurement variables are:
· IRI (International Roughness Index)
· RMS (four values for four wavelength bands)
· Texture (MPD)
· Longitudinal profile (5 cm or 10 cm)
· Distance (in millimeters)
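As an illustration of the RMS values in the list above: at a 5 cm sample spacing, a twenty-meter section is 400 profile samples, and each band's RMS is computed from the profile after band filtering. A minimal sketch (the names are mine):

#include <math.h>
#include <stddef.h>

/* RMS of one profile segment; for the band values, run the profile
   through a bandpass filter for that wavelength band first. */
double segment_rms(const double *profile, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += profile[i] * profile[i];
    return sqrt(sum / (double)n);
}

/* 20 m at 5 cm spacing: segment_rms(&profile[seg * 400], 400) */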

The main measurement device is a single laser, a SELCOM SLS5000 (it measures distance by triangulation, extremely accurately, with 50 micrometer resolution), that measures the road surface. With the aid of another transducer, a superb accelerometer (1 micro-g resolution), it is possible to get rid of the car's up-and-down movement: the movement is negated and added to the distance that the laser measures, resulting in a signal where the car's vertical movement is cancelled. The data is saved and later post-processed; data memory is cheap. The measurement data is processed by powerful floating-point digital signal processors and plotted live, graphically, on a display. There is also a wheel-pulse transducer with a resolution better than one pulse per millimeter.
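The combination of the two transducers can be sketched as follows. This is a conceptual formulation of the cancellation described above, not the AXON1 algorithm: the calibrated acceleration is integrated twice to get the car body's vertical movement, which is then combined with the laser distance so the car's own motion drops out of the profile. The leaky integrators are my own crude stand-in for proper drift control, and the signs depend on how the transducers are mounted.

/* One profile sample from one laser sample and one calibrated
   accelerometer sample (m/s^2), at sample interval dt seconds. */
static double profile_sample(double laser_m, double accel_ms2, double dt)
{
    static double vel = 0.0, disp = 0.0;   /* integrator states */
    const double leak = 0.999;             /* keeps the double integration from drifting */

    vel  = leak * vel  + accel_ms2 * dt;   /* acceleration -> velocity (m/s) */
    disp = leak * disp + vel * dt;         /* velocity -> displacement (m)   */
    return disp + laser_m;                 /* car movement negated and added
                                              to the laser distance          */
}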

Figure: measurement of IRI.


Figure: measurement of profile.

Adventure number two

Implementations of double complementary lattice wave filters

I am selling some of the implementations of double complementary lattice wave filters that I use. Believe me when I say: when you get the hang of this type of filter, you will never want to use anything else.

But as it is very difficult to find nice, easy, and good lattice wave C code implementations, I have put my implementations up for sale.

To make it easier for you, you get real C code that actually does the work, not just pie-in-the-sky abstract descriptions. I work with the nitty-gritty details, and you may buy them. The abstract, high-level documents complement this C code, but the code also gives you a low-level perspective and something to "play" with.

You are most likely a smart guy and can do this stuff yourself but think of the time you save.

You get C code for filter coefficient generation, filter implementation, FFT, magnitude computation, and a method for phase linearity.

Now, if you want to know about FFTs, get the original papers on the FFT algorithm written by Cooley and Tukey around 1965. I bought them in an IEEE book sale a number of years ago just for fun, and it was well worth it.

This C code implementation is a gold mine, as I have worked more than 8 years with these types of filters, in many possible - and, as I was told, impossible - configurations.

I have done multirate filtering, filter banks, decimation, interpolation, and lots of other stuff. The implementation has never failed me.
But I still keep some secrets, even though this C code implementation shows what other people get doctorate degrees for.

The output of a lattice wave double complementary filter is simultaneously:
1. a highpass signal
2. a lowpass signal
This is the double complementary property: if you add the two signals together, you have the complete magnitude again. These filters are therefore extremely good and efficient in filter banks, multirate or not. The power in the signal is kept intact even when it is split into different partitions, and I think that is unique among filters.
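The structure behind this property can be sketched with a small third-order example. The sketch below is mine, not the code for sale, and it uses ordinary allpass difference equations instead of the wave adaptor arithmetic of a true lattice wave implementation; the branch structure and the complementary outputs are the same. One branch is a first-order allpass, the other a second-order allpass, and their sum and difference give the lowpass and highpass outputs at the same time.

typedef struct { double x1, y1; } ap1_state;           /* 1st-order allpass state */
typedef struct { double x1, x2, y1, y2; } ap2_state;   /* 2nd-order allpass state */

/* A(z) = (g + z^-1) / (1 + g z^-1) */
static double allpass1(ap1_state *s, double g, double x)
{
    double y = g * (x - s->y1) + s->x1;
    s->x1 = x;
    s->y1 = y;
    return y;
}

/* A(z) = (g2 + g1 z^-1 + z^-2) / (1 + g1 z^-1 + g2 z^-2) */
static double allpass2(ap2_state *s, double g1, double g2, double x)
{
    double y = g2 * x + g1 * s->x1 + s->x2 - g1 * s->y1 - g2 * s->y2;
    s->x2 = s->x1; s->x1 = x;
    s->y2 = s->y1; s->y1 = y;
    return y;
}

/* One input sample in, lowpass and highpass out simultaneously. */
static void lwdf3(ap1_state *b0, ap2_state *b1,
                  double g0, double g1, double g2,
                  double x, double *lp, double *hp)
{
    double a0 = allpass1(b0, g0, x);
    double a1 = allpass2(b1, g1, g2, x);
    *lp = 0.5 * (a0 + a1);   /* the two outputs are complementary:    */
    *hp = 0.5 * (a0 - a1);   /* lp + hp is an allpassed copy of x, so
                                no signal power is lost in the split  */
}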

Lattice wave double complementary filters do not "explode"; they are extremely stable. It is thus possible to change the filter coefficients in real time, on the fly, without the filters behaving strangely.

This is very good if you have adaptive filtering processes.

The Speed of an IIR filter and the stability of an FIR filter.

If you search the internet for "lattice wave" you will find a lot of additional information.
The implementations build on Gazsi, L., "Explicit formulas for lattice wave digital filters", IEEE Trans. Circuits Syst., 1985, Vol. CAS-32, pp. 68-88.
You should really get this article; it is a beauty.
However, as I implemented the filters on floating-point DSPs, I simplified the filter implementation algorithms even further.
The C code that you buy is not complicated; it is ANSI C and should compile easily on any system. It contains C code that generates filter coefficients for:
1. Butterworth
2. Chebyshev
3. Cauer

Look at the function headers for calling C-code functions in the following lines:

You have a sampling rate, a cutoff frequency, and a filter order.
Observe that the filter order is always odd: 1, 3, 5, 7, and so on.

/* arguments: coefficient buffer, sampling rate (Hz), cutoff frequency (Hz), odd filter order */
lw_butterworth_coefficients(&butterworth[0], 20000.0, 5000.0, order);
lw_chebyshev_coefficients(&chebyshev[0], 20000.0, 5000.0, order);
lw_cauer_coefficients(&cauer[0], 20000.0, 5000.0, order);

In addition, you get one of my methods implemented for producing linear-phase filtered data, which some measurement systems require (and even telephone systems need).
I have a still better one, a linear-phase filter without impulse response, but that is my BIG SECRET, which you can buy for more $$$ if you contact me.

This linear-phase filter method works faster than what FIR filters can accomplish given the same data. I presume continuous input data (for a number of milliseconds, anyway).
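For reference only - this is a well-known textbook technique and not necessarily the method sold here - one standard way to get linear (in fact zero) phase from an IIR filter on a block of data is to filter it forward, reverse it, filter it again, and reverse it back; the phase responses cancel and the magnitude response is squared. A sketch, where iir_filter stands in for any causal IIR routine such as one lattice wave filter:

#include <stddef.h>

typedef double (*iir_fn)(void *state, double x);  /* one sample in, one out */

static void reverse(double *buf, size_t n)
{
    for (size_t i = 0, j = n - 1; i < j; i++, j--) {
        double t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }
}

/* In-place zero-phase filtering of a block of n samples.
   fwd_state and rev_state are two independent filter states. */
void zero_phase_filter(double *buf, size_t n,
                       iir_fn iir_filter, void *fwd_state, void *rev_state)
{
    for (size_t i = 0; i < n; i++) buf[i] = iir_filter(fwd_state, buf[i]);
    reverse(buf, n);
    for (size_t i = 0; i < n; i++) buf[i] = iir_filter(rev_state, buf[i]);
    reverse(buf, n);
}

This matches the requirement of continuous input for a stretch of time: the data must be available as a block before the backward pass can run.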

The code filters an impulse response for the three types of filters and generates text files that can be read by Excel.
The code also contains FFT and magnitude calculation, so that you get text file outputs for the filter magnitudes.
For the FFT I use the Brigham method in the C code, where a 512-point complex FFT computes 1024 points of real values. It goes faster.
The C code takes some time to execute, but that is because it creates text files on your system and writes into them, and because Excel wants "," instead of "." as the decimal separator (0,00345 rather than 0.00345) it converts all "." to ",", which also takes some time.
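The real-input trick works as follows: pack the 1024 real samples into 512 complex samples (even samples in the real part, odd samples in the imaginary part), run one 512-point complex FFT, and untangle the result into the spectrum of the real signal. A minimal sketch of the idea, with a plain recursive FFT for illustration (this is not the code being sold):

#include <complex.h>
#include <math.h>

static void fft(double complex *a, int n)   /* in place, n a power of 2 */
{
    if (n < 2) return;
    double complex even[n/2], odd[n/2];
    for (int i = 0; i < n/2; i++) { even[i] = a[2*i]; odd[i] = a[2*i+1]; }
    fft(even, n/2);
    fft(odd,  n/2);
    for (int k = 0; k < n/2; k++) {
        double complex t = cexp(-2.0*I*M_PI*k/n) * odd[k];
        a[k]       = even[k] + t;
        a[k + n/2] = even[k] - t;
    }
}

/* FFT of N real samples via one complex FFT of size N/2.
   X must hold N/2 + 1 bins (bin 0 up to the Nyquist bin). */
static void real_fft(const double *x, double complex *X, int N)
{
    int M = N / 2;
    double complex z[M];
    for (int n = 0; n < M; n++)             /* pack: even->re, odd->im */
        z[n] = x[2*n] + I * x[2*n+1];
    fft(z, M);
    for (int k = 0; k <= M; k++) {
        double complex Zk  = z[k % M];
        double complex Zmk = conj(z[(M - k) % M]);
        double complex Fe  = 0.5 * (Zk + Zmk);        /* spectrum of even samples */
        double complex Fo  = -0.5 * I * (Zk - Zmk);   /* spectrum of odd samples  */
        X[k] = Fe + cexp(-2.0*I*M_PI*k/N) * Fo;       /* recombine the halves     */
    }
}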

The output is thus:
1. six non-linear-phase filtered impulse responses
2. six linear-phase filtered impulse responses
3. six magnitudes from FFTs of the non-linear-phase filtered impulse responses
4. six magnitudes from FFTs of the linear-phase filtered impulse responses

You can see the filter magnitude for:
Butterworth both lowpass and highpass
Chebyshev both lowpass and highpass
Cauer both lowpass and highpass



Here are the magnitudes of the three filters' FFT'ed linear-phase filter responses, giving six outputs.
And here is a link to the Excel file.


The LINEAR PHASE impulse responses of both the lowpass- and highpass-filtered impulse, which are later fed into the FFT.
Excel file


The next image contains the impulse responses of a Butterworth filter with linear phase and with non-linear phase.
The first impulse is the linear-phase filter, and you can see how nice and symmetric it is.
The non-linear-phase filter tilts: you can see that at point 14 the pink curve is higher than at point 16. This is due to the non-stationary part of the difference equation that makes up the filter. It is not seen for the linear-phase filter, which runs only in the stationary part of the difference equation.
That is what I believe, anyway. You can see the difference, and this difference is vital when building measurement systems: if you are measuring the filter implementation's response rather than the system, it is not the same thing.

I learned that the hard way when I built road measurement systems and was actually out on the road measuring the bumps, by hand, decimeter by decimeter.

It was much nicer when a measured bump was exactly at its physical location (with a phase-linear filter) than being shifted in the measurement by two meters due to non-linear phase errors in the filter. It is rather difficult to explain THAT to road builders, and I was very glad when I stumbled upon this solution.
Maybe it is also zero-phase, not only phase-linear; I don't know.

A real thing like a small stone with very spiky edges in the asphalt is seen by the filter as an impulse, and it has to look the same in the output data too, not shifted by phase distortions in the filters.



Impulse responses of a Butterworth Linear phase, and a NON linear phase.


Summary

Thank you very much for staying with me so far and reading about the C code module.
I have put a lot of effort into the implementation to make it as small and usable as possible.
Indulge yourself and have fun!

OBSERVE: the next line is where you "click" to get the code.
click and download C CODE

Adventure number three

Problems in Fileserver - Philips Netherlands

During my time at Philips, Stockholm, I was involved in implementing C in a 16-bit environment: the C library of functions and the application interface between the real-time operating system and the compiled output of the C compiler, the linker, and other parts whose names I no longer remember.

Philips Telecommunicatie in The Hague, Netherlands, developed a fileserver so that many connected systems could share a big disk.

The fileserver "exe" file that was loaded into computer memory was segmented into several runtime code segments that were swapped into memory when needed; the fileserver "exe" was too big to keep in program memory in full. The fileserver had executable modules, and it was the linker software's task to create the executable modules and insert code to switch the different executable modules in and out. The people in The Hague had a problem with the fileserver that they had been trying to solve for several weeks. Philips, Sweden, was responsible for the operating system, and I got the task of going to The Hague to help solve the problem.
I arrived on a Wednesday and was shown what they had done and how much work they had put into finding the problem.
In the evening I went to the hotel and thought things over. The next day, Thursday, I collected more information about how the error occurred and showed itself. Back to the hotel again on Thursday evening.
During the night I got the idea that it had to be a memory problem.

They had read their application with a magnifying glass, inside out, in minute detail, so the problem could not be there.

The next morning, Friday, I found the problem. It was an executable module that needed more memory than the linker was made to handle. I changed the linking process and the software parts in the executable modules so that all code modules were smaller than the linker's maximum allowed memory for a code module. The fileserver worked.

I asked the Philips secretary to book my travel home for later the same Friday. It was the weekend and I could go home.

At lunch the Philips boss came over, very angry that I had booked travel home to Sweden instead of staying and working on the problem.

Boy, was he in for a surprise when I told him the problem was solved. I can still remember his reaction: he did not believe it, but it was easy to show.

I went home, and a short time later I got a nice long assignment, several months long, with Philips Telecommunicatie in The Hague (in 1987), writing software with the people there.

Adventure number four

Optical fiber splicer - largest Telecom company, Sweden

The system was a fiber optic splicer for optical fiber ribbon. Fusion splicing is the process of applying heat to fuse optical fibers together, which minimizes insertion loss and back reflections in the fused component. Twelve (12) fibers in a flat ribbon were spliced simultaneously in one go. One problem is to calculate how good a splice is with regard to the attenuation of a faulty splice. An electric arc at a very high temperature heats the fiber ribbon. A glass fiber has a core and a surrounding protective glass, and the fiber core has the excellent property of showing itself: it glows more intensely than the surrounding glass in the electric arc.

Two cameras are mounted at a 45-degree angle and take video and photos of the splicing process. To calculate the attenuation of each splice, a mathematical calculation is executed using the video and photos as input.

The input is translated to:
Computer says: YES
or
Computer says: NO


If NO, then the fiber ribbon has to be spliced again.

The operator cannot see the splicing process directly; the operator sees an image of the fiber ribbon on a display before and during splicing.

In the project start phase (before I got involved) it was decided that the image processing needed a powerful floating-point DSP doing real-time image processing of the input, and the choice of DSP was the TMS320C40.

I had worked with Texas Instruments DSPs for several years, including the TMS320C40 floating-point signal processor, and got involved in writing software for it.

The first task was to find an excellent real-time system for the TMS320C40 DSP.

Another task was to process, in real time, incoming video from two cameras "looking" at the same object but at a 45-degree angle: real-time image processing to recalculate the incoming image, making the closest fiber look smaller and the farthest fiber look larger. The result should be that the object looks the same on the display regardless of which camera the operator chooses to view the fiber with.

The optical perspective of the incoming camera image had to be changed so that both videos displayed the ribbon in a similar way. It should look the same even though the cameras had a perspective difference of 45 degrees.

The fibers in the ribbon look different in size depending on how close each one is to the camera: the closer a fiber is, the larger it looks compared with the one farthest away. Every fiber on the display should appear the same size, regardless of the optically distorted view coming from the camera. The closer fibers are recalculated to look smaller and the farthest fibers to look larger, so that all fibers in the ribbon end up with a similar size.

Here is the innovation I made.

With the TMS320C40, Texas Instruments had made a "workhorse" for parallel processing: it had six communication ports and a six-channel direct memory access (DMA) coprocessor. It was easy for the hardware people to select one communication port per camera, and I used the DMA coprocessor to get the video data into memory, line by line, from each camera. Easily done. So images from both cameras sat concurrently in memory, line by line. The DMA is easily reprogrammable, since it was designed for parallel processing, and the display wants its data as a number of lines sent sequentially.

The implementation plan was to process the image in memory and create an image to be displayed to the eye. It does not have to be exact, as the image providing the data for judging a good or bad splice is already in memory. What I did was to calculate the average number of lines a splice occupies over the 12 fibers. Say the average is 15 lines, the closest fiber has 20 lines, and the farthest has 8 lines. OK: take away 5 selected lines from the closest fiber (20) to get 15, and duplicate selected lines from the farthest (8) to get 15 lines.

As the C40 has programmable DMA that auto-initializes without CPU intervention, the on-chip hardware could automatically select which lines were sent to the display, and in which sequence. The solution needed only one calculation, to create one table that stays constant during the whole splicing process. The result was that the CPU did no real-time image processing for the display at all.
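The constant table can be sketched as follows; the names and the nearest-neighbor line selection are my own illustration of the drop/duplicate scheme described above, not the production code.

/* Fill 'table' with, for each of the avg_lines output lines of one
   fiber, the index of the camera line to send to the display. The
   fiber occupies src_lines lines starting at first_line. */
static void build_fiber_line_map(int *table, int first_line,
                                 int src_lines, int avg_lines)
{
    for (int out = 0; out < avg_lines; out++)
        table[out] = first_line + (out * src_lines) / avg_lines;
}

/* src_lines = 20, avg_lines = 15: five lines are skipped;
   src_lines =  8, avg_lines = 15: lines are duplicated.    */

Once this table exists, the auto-initializing DMA can walk it for the rest of the splice, which is what removes the CPU from the display path.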

That was easy.

Adventure number five

TDMA power saver - largest Telecom company, Sweden

At the largest telecom company in Stockholm I got an assignment between two projects.

The task was to implement an energy-saving solution for TDMA-standard base stations by powering down idle time slots (slots with no active telephone call).

I got new workmates who in reality worked on other projects.
A TDMA signal has a time slot for every telephone conversation.

The situation at the time was that power was on continuously for all slots, regardless of whether there was an active telephone connection or not.
Simply turning the power in a time slot off and on would generate a "square" signal in the output power, which would disturb antennas and other equipment.

I had several years' experience with digital signal processing and implemented a gradual power-down and power-up by applying a Kaiser window to the time slot. That made the energy spike go away.
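The shape of such a ramp can be sketched in C. This is my reconstruction of the idea, in floating point for clarity (the original work was on an integer DSP): instead of switching the slot power on and off, the edges of the slot are multiplied by the rising and falling halves of a Kaiser window, so the power envelope has no sharp edges.

#include <math.h>

/* Zeroth-order modified Bessel function I0, by its power series. */
static double bessel_i0(double x)
{
    double sum = 1.0, term = 1.0;
    for (int k = 1; k < 25; k++) {
        term *= (x / (2.0 * k)) * (x / (2.0 * k));
        sum  += term;
    }
    return sum;
}

/* Rising half of a Kaiser window: n samples going from near 0 to 1.
   beta sets the trade-off between ramp sharpness and spectral spread. */
static void kaiser_ramp(double *ramp, int n, double beta)
{
    for (int i = 0; i < n; i++) {
        double r   = (double)i / (double)(n - 1);              /* 0 .. 1 */
        double arg = beta * sqrt(1.0 - (1.0 - r) * (1.0 - r));
        ramp[i] = bessel_i0(arg) / bessel_i0(beta);
    }
}

/* power-up:   sample[i]     *= ramp[i]
   power-down: sample[m-1-i] *= ramp[i]   for i = 0 .. n-1 */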

It worked. It took me one to two months to implement (nothing is easy on embedded integer DSPs).

I went to my ordinary project leader and told him it was ready.
He looked at me in big surprise and said he had been told it was impossible. He had parked me on that task between two projects.

Well, it got implemented, and it was later used in TDMA base stations in the USA (on a Texas Instruments DSP).

What can I say ?

Adventure number six

The flaw of modular programming - largest Telecom company, Sweden

At a big telecom company in Sweden, I wrote software for Texas Instruments DSPs. I had written software for similar DSPs in my own projects and for a small company doing road survey measurements.

The big company wrote C code modularly, in files containing very little code: one file, one C function - a module. Every module (file) had two include files, for globals and locals. After compiling, every module generated an object file, and the object files were later used by a linker to generate a load module. Three source files for each object file.
The customer had a lot of people using expensive SUN workstations for compiling and developing software.

After a while I ran into a situation I could not understand. At other customers, and in my own work, I developed load modules of the same size as the big customer's. Thus the big company's load module did not contain more code than what I was used to handling myself. At the big customer I could only see a subset of the many small modules, and I did not understand immediately that the massive number of modules created a problem.

In my own development, and for another customer, I could compile C code (with the Texas DSP C compiler) and link the object files into a load module in a few minutes.

At the big company it took an extremely long time. It could be hours.

I got a big "AHA" moment. It taught me to always have many C functions in one source file, often everything in one file (which could be 6,000 - 10,000 lines).

The big company had maybe 2,000 software modules (files), each including 2 include files and each generating one object file. Crazy: 2000 + (2 × 2000) + 2000 = 8,000 file openings, writings, and savings. All these file opens, closes, and writes become extremely time consuming on the expensive SUN stations, and this was at a time when hard disks were very slow. Then the linker starts to read the 2,000 object modules, which is much slower than reading only a few.

The AHA moment: having a lot of code in one file can make the time for generating a load module 100 - 1000 times faster. The compiler reads the source file in one read (and chews on it with heavy CPU load, without time-consuming file openings, writings, and closings), compiles it, and generates one object module.

"Everything should be made as simple as possible, but no simpler." This is one of the great quotes in science, attributed to Einstein.

Adventure number seven

NLP - Neuro-Linguistic Programming

According to the book:

Nikola Tesla developed his ability to visualize to the degree that the only laboratory he needed in order to develop his incredible inventions was the one in his imagination. Tesla was said to imagine the different parts of his machines so well that he knew, in his imagination, which parts would get more wear and tear than others, and changed them in his imagination to be more robust.

He knew which nuts and bolts needed to be stronger than others as electrical power rotated in the machines.

So when you have written software, imagine the software's functions, procedures, et cetera in your mind in 3D, and imagine that you as an observer (a little bit of Einstein here) are walking inside the 3D model of your software. Look at the structure of your software: which parts connect the incoming data, how it is structured, how it is built.

All in your mind.

Change the system's structure in your mind and see how it looks afterwards. Look at it from the inside of the software structure: walk into it, see it from the inside, from above, from below; turn it inside out and look at it again. Start by inspecting small parts of the software system.

If you have problems understanding the observer's model, begin by thinking of yourself entering a house or building, and imagine being in it. People are "data" walking in and out on the different floors, and you can look into the building's rooms. Change the flow of people by altering the location of the rooms, so that people do not have to move so far in the building before they are ready to leave. As you are already thinking of people moving, you have changed the 3D model to a 4D model through the movement of data; you have started learning to hold a 4D model of your structure in your mind.

And you can stop the flow of people (data), go inside the house, and inspect what it looks like when everything is on hold - like setting a software "trap" and looking at variables and the call structure. Then you can change the structure in your mind, let the people move again, and see how the new system works. Does it work on other stuff? Yes - if you have already thought about it by creating the question and asking yourself that question, then the answer is of course yes.



About

Kenneth Blake

Kenneth has three academic merits:

  • Master of Science in Computer Science and Engineering (Linköping Univ.)
  • Master's degree in Management and Organisation (Stockholm Univ.)
  • Executive Master of Business Administration (Stockholm Univ.)


Kenneth is also a qualified NLP Trainer and Master NLP Practitioner, trained to the highest standard by Dr. Richard Bandler (co-creator of NLP), John La Valle (President of the Society of NLP), Paul McKenna (Europe's leading hypnotherapist), and Michael Breen, and has therefore learnt from, and modelled, the most respected people in the business.


Kenneth has also been in Brussels several times, evaluating research proposals under the fifth, sixth, and seventh framework programmes of the European Community for research, technological development, and demonstration activities.

Kenneth's CV

Contact Info

  • Company: Oberon Data och Elektronik AB
  • Main Location: Virtual Organisation in the cloud
    (on internet)
  • Email: ken @ oberon.se
  • Secure Email: ken.blake @ protonmail.com
    You need to create, and send from, your own Protonmail account for the email conversation to be secure. It is very easy to make a free account.

The information you provide will be treated confidentially and will not be passed on or sold to any third parties. We may in the future wish to contact you to let you know about news, upcoming events, and special offers.

No cookies on this site.