Robert Polding

Exploring Innovation at the Tech Venture Bootcamp

IE University's School of Science and Technology launched an exciting initiative known as the Tech Venture Bootcamp in January, and the second edition took place over a weekend at the start of April.

This initiative stands out as an example of how breaking down traditional silos in academia and industry can lead to significant breakthroughs in technology and entrepreneurship. Built on the Berkeley Method of Entrepreneurship, co-developed by Ikhlaq Sidhu, the Dean of IE Sci-Tech, the bootcamp is designed to cultivate a unique ecosystem of innovation and collaboration.

Synergies

The core philosophy of the Tech Venture Bootcamp is to foster interdisciplinary collaboration. The program aims to generate new ideas and transformative solutions by bringing together diverse minds. A typical team within the bootcamp consists of technology students, business students, and one executive, researcher, or entrepreneur. This blend of expertise and perspective ensures a unique approach to problem-solving and venture creation. Applicants are encouraged to join either with a pre-formed team or as individuals open to forming new alliances.

Opportunities

The bootcamp offers opportunities for all students at IE University and beyond. Participants are given a fast track to the Venture Lab, facilitating the transition from idea generation to venture creation. Additionally, the program serves as a pathway for students to engage in new capstone or research projects, further enriching their academic and professional journeys.

Engagement

The initiative extends its reach beyond the student body, inviting executives, entrepreneurs, software developers, faculty, and mentors to contribute their ideas and expertise. This collaborative environment enriches student-led ventures and provides a platform for professionals to meet and recruit like-minded innovators. The Tech Venture Bootcamp acts as a melting pot of ideas where industry and academia converge to shape the future of technology and entrepreneurship.

A Successful Second Edition!

The latest edition of the Tech Venture Bootcamp was a success. Numerous teams developed innovative working prototypes and refined business plans. The success stories emerging from the bootcamp demonstrate the value of interdisciplinary collaboration and hands-on learning.

Looking ahead, the next edition of the bootcamp, scheduled for September, promises to be even more exciting. IE University School of Science and Technology will partner with Ripple, a leading player in fintech and digital currency. This collaboration will create projects centered on financial technology and digital currencies, offering students and participants a chance to delve into one of the most dynamic sectors of the tech industry.

The Tech Venture Bootcamp is a gateway to innovation, collaboration, and real-world problem-solving. Through its unique approach and partnerships, the bootcamp is creating the entrepreneurs and technologists of tomorrow, ready to make a meaningful impact on the world.

Robert Polding

The Hidden World of Internet Ports

Internet ports are like invisible doors that help computers talk to each other. Most people don't think about them, but they're really important for making the internet work smoothly. There's a great story on SSH.com about how a special door, called port 22, was chosen for SSH, the protocol used for secure messages between machines. It's a cool look back at the early, wild days of the internet.

Imagine the internet as a huge building with lots of doors. Each door has a number, and each number is for a different kind of message. Some doors are for email (like port 25), some are for websites (ports 80 and 443), and some are for other kinds of traffic. This system makes sure that messages go to the right place, like making sure a letter gets to the right mailbox.
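To make the door metaphor concrete, here is a minimal sketch (my own illustration, not something from the SSH.com story) of how a program picks a door when it opens a connection. The host name is just a placeholder; any machine that is actually running SSH on port 22 will answer with a short identification banner before the encrypted conversation starts.

```python
# Minimal sketch: open a TCP connection to a specific "door" (port) using
# Python's standard socket module. The host below is a placeholder.
import socket

HOST = "example.org"   # hypothetical server; substitute one you control that runs SSH
PORT = 22              # the door assigned to SSH

with socket.create_connection((HOST, PORT), timeout=5) as conn:
    banner = conn.recv(256)    # SSH servers greet new connections with a version string
    print(banner.decode(errors="replace").strip())
```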

The story about port 22 is interesting because it shows how people had to agree on which door to use for secure messages. Back then, the internet was new and people were figuring things out as they went along. They had to talk to each other, make decisions, and sometimes just go for it. Port 22 is just one example, but it shows how much teamwork and creativity went into building the internet.

For anyone curious about how technology works or how the internet was made, this story is a must-read. It's not just about the technical stuff. It's about how people worked together to make the internet a place where everyone could share information safely. It's a reminder of the early days when the internet was like a big experiment, and everyone was learning and exploring together.

In short, internet ports might sound boring, but they're actually part of a big, exciting story. The tale of port 22 is a peek into the past, showing us how the internet grew up. It's a mix of smart ideas, teamwork, and a little bit of adventure. So, next time you use the internet, remember there's a hidden world behind the screen, full of stories waiting to be told.

Read more…

Robert Polding

A Review of FOSDEM 2024

I had a great time attending FOSDEM 2024 last week. It was impressive to see presentations from big names like Red Hat, Mozilla, and Google. Each session offered real insight into where open source policy and development are headed.

It’s clear that open source has a bright future, but we also discussed the challenges, especially around EU privacy regulations and their impact on commercial software development. These conversations were crucial for understanding how we can navigate these changes while continuing to innovate.

The best part was the sense of community. It was amazing to be surrounded by so many smart people who are all passionate about open source. The event was organized flawlessly, thanks to the hard work of all the volunteers. A big thank you to everyone I met for the great conversations and for sharing your ideas.

Leaving FOSDEM 2024, I felt more inspired than ever to contribute to the open source world. A big shoutout to everyone who made it such a great experience. Let’s keep pushing the boundaries of what we can achieve together in open source!

Robert Polding

FOSDEM 2024

Join Me at FOSDEM 2024 – The Ultimate Meetup for Software Developers!

Hey there, tech enthusiasts! This weekend, I am heading to the FOSDEM conference in Brussels, and I am excited!

What is FOSDEM?
For those of you who might not be familiar, FOSDEM stands for Free and Open Source Software Developers' European Meeting. It's like a grand celebration of all things tech, where software developers from across the globe come together to connect, exchange ideas, and basically geek out over the latest innovations in the field. The best part? It's a totally free event, and you don't even need to register – just show up and dive right in!

Why FOSDEM?
Imagine this – thousands of like-minded individuals, all united by their passion for free and open source software, converging in one place. Whether you're a seasoned developer with decades of experience under your belt or a fresh-faced newbie eager to soak in all the knowledge you can, FOSDEM is the place to be. The opportunities for learning and networking are simply unparalleled.

Let's Meet Up!
Now, here's the really exciting part – if any of you wonderful folks are also going to be at FOSDEM this weekend, I'd love to meet up! Whether it's grabbing a cup of coffee between sessions, attending a talk together, or simply exchanging ideas and insights, it would be fantastic to connect in person. If you want to meet up, email me at robert@polding.eu.

Can't wait to see some familiar faces and make new friends amidst the buzzing crowds at FOSDEM. Let's make memories and geek out together!

I’ll be posting a full report next week.

Robert Polding

The End of the Nanometer Era

As the nanometer era in semiconductor manufacturing nears its end, industry leaders like Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung are progressing towards more advanced chip technologies. TSMC is reportedly planning to establish a 1nm fabrication facility in Taiwan's Chiayi Science Park, marking a significant leap from the current 3nm and upcoming 2nm technologies.

While cutting-edge chips are advancing to smaller nanometer processes, it's expected that simpler microcontrollers and legacy integrated circuits will continue using larger, more traditional process nodes for some time. Samsung is also advancing with its 3nm production and has plans for a 2nm process in 2025, while Intel's next-generation 20 angstrom technology (about two nanometers) is expected to debut this year.

The terminology used to describe these advancements, however, has evolved from a literal to a more symbolic meaning. Previously, nanometers referred to the physical gate length of planar transistors, but with the adoption of FinFET transistors around 2011, this metric became less representative of actual transistor size. Intel's recent rebranding of its process technology, changing its 10nm to “Intel 7” and 7nm to “Intel 4”, underlines this shift.

While partly a marketing strategy to align with competitors like TSMC and Samsung, it also reflects a broader industry trend where nanometer measurements are more about indicating relative improvements in transistor density rather than precise physical dimensions.

This shift complicates comparisons between different foundries' technologies, as there's no standardized way to equate one company's process tech with another's.

Read more…

Robert Polding

The Tech Bloodbath

Last year was, by all accounts, a bloodbath for the tech industry, with more than 260,000 jobs vanishing — the worst 12 months for Silicon Valley since the dot-com crash of the early 2000s.

In the first four weeks of this year, nearly 100 tech companies, including Meta, Amazon, Microsoft, Google, TikTok and Salesforce, have collectively let go of about 25,000 employees, according to layoffs.fyi, which tracks the technology sector.

So what is driving it?

"There is a herding effect in tech," said Jeff Shulman, a professor at the University of Washington's Foster School of Business, who follows the tech industry. "The layoffs seem to be helping their stock prices, so these companies see no reason to stop."

Read more…

Robert Polding

Generative AI Bootcamp

Congratulations to the winners of the Generative AI Bootcamp last weekend!

They worked hard and showed great skills in technology. Their projects were not just innovative but also very creative. It's times like these that remind us why we teach - to help create future leaders in technology. I hope their language learning app, Parrott, becomes a reality as I want to use it! Count me in as a beta tester.

A big congratulations to all the students who took part and won. You've really done us proud.

Keep up the great work!

Read more here

Robert Polding

Bing’s Bad Bet

Microsoft's Copilot (previously known as Bing Chat) made a significant bet to outshine Google in the search industry.

However, the outcome has been less than stellar, with Copilot delivering an increase of less than 1% in users compared to Google, which remains the dominant search engine.

It appears that "Copilot" might become the "Metaverse" of GenAI: destined to be remembered as a commercial disappointment.

On a brighter note, OpenAI itself continues to thrive!

Read more here

Robert Polding

Chaos at OpenAI

I am thrilled to announce the publication of my latest article, which delves into the significant changes at OpenAI.

The piece examines the events leading to Sam Altman's dismissal, the shift from open-source advocacy to a profit-driven approach, and the impact of Microsoft's investment. It discusses the ideological clash within OpenAI, highlighting how commercial interests have overshadowed the original mission of fostering beneficial AI.

This article offers a critical perspective on the evolution of AI governance and its implications for the future of the industry.

Read the full article: https://lnkd.in/d8wzkKiK

Robert Polding

Meet my GPT

As part of my research, I've been working a lot on developing custom chatbots. I've developed a new chatbot that is trained on my class textbooks and my personal information, and I'm at a stage where I'm ready to unleash it on the public!

The ability to easily create GPTs is one of the best developments in the GenAI space. Before, it was necessary to write a fairly complex Python script to achieve this. However, thanks to the latest features, it has become as simple as a few prompts and file uploads.
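For anyone curious what the "fairly complex Python script" route looked like, here is a minimal, hypothetical sketch of a retrieval-augmented chatbot using the OpenAI Python client. The file name and model names are illustrative assumptions, not the actual setup behind my GPT.

```python
# Hypothetical sketch of a "do it yourself" custom chatbot: embed the course
# material, retrieve the most relevant chunk, and let the model answer with it.
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# 1. Split the source material into chunks and embed them once.
chunks = open("textbook.txt", encoding="utf-8").read().split("\n\n")
emb = client.embeddings.create(model="text-embedding-3-small", input=chunks)
vectors = np.array([e.embedding for e in emb.data])

def answer(question: str) -> str:
    # 2. Embed the question and find the most similar chunk (cosine similarity).
    q = np.array(client.embeddings.create(
        model="text-embedding-3-small", input=[question]).data[0].embedding)
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
    context = chunks[int(np.argmax(sims))]

    # 3. Ask the chat model to answer using only the retrieved context.
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this course material:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(answer("What topics does the course cover?"))
```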

Click here to chat with the virtual me. Please be kind!

Robert Polding

Journeying into the Metaverse

Is the Metaverse just a video game? Is it all just hype?

No matter what your opinion is on this fascinating new technology, there seems to be no stopping the hype engine.

The dean of my university has made the following video, which explains the technology really well:

https://www.youtube.com/watch?v=wiGNpwGQ-zE&t=51s

The Metaverse is a topic that seems to really divide opinion. I know some colleagues and students who are 100% in on the idea, while others are really dismissive and see it as a technology that is only going to be used by the gaming community.

No matter what your opinion is, there is a clear opportunity here. There will be a captive audience using this technology, and with that come great possibilities for advertising and creating new experiences that can capture customers.

Robert Polding

Blockchain without destroying the planet

Yesterday there was some great news: Ethereum is switching from Proof of Work to Proof of Stake. This means it will use around 99% less power. My most significant criticism of blockchain has always been that it is a liability to the planet. Right now it produces as many emissions as Vietnam, and coal-fired power stations are being used to support mining.
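To see why Proof of Work burns so much energy, here is a toy sketch of the nonce search that miners race to complete (my own illustration, not Ethereum's or Bitcoin's actual consensus code); real networks run this guessing game on specialized hardware at a scale of trillions of hashes per second.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash starts with N zero hex digits.
# Purely illustrative; real difficulty levels are astronomically higher.
import hashlib

def mine(block_data: str, difficulty: int = 5) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce      # proof found; every failed guess before this burned energy
        nonce += 1

print(mine("example block"))  # takes noticeably longer each time difficulty is raised
```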

This means that most of the decentralized apps we use, and many other services, will become much better for the planet. However, there is still one elephant in the room: Bitcoin is still using Proof of Work, and its footprint is about double that of the Ethereum network. So, for now, we will only see a modest reduction in the catastrophic pollution caused by this technology.

Personally, it means that for the first time I will entertain using blockchain for more than just experimentation. A barrier to adopting Ethereum technologies has been lifted, and I will be willing to participate. As someone who has spent the best part of 20 years campaigning for the closure of filthy power stations, I certainly won't be investing in Bitcoin!

Robert Polding

Back to normal university life

This week saw the return of university life as it was in pre-pandemic times. Two thousand new students started at my institution with no restrictions in place, and it was great to see.

The whole learning experience has been poor for students since the introduction of online learning. While it sounds promising on paper, the result of "hybrid" learning has been simple: students performed worse and did not learn what they came to university to learn.

Studying from home is a bad idea. There are distractions, and it makes learning less of an experience and more akin to watching a TV show or documentary. Students rarely participate and usually seem to be in another world rather than focusing on the lesson.

It was an interesting experience and experiment, but the conclusion is simple: face-to-face is far superior.

Robert Polding

My Programming Journey

A personal insight into my journey as a coder, entrepreneur and professor.

Introduction

My life has been full of technology. From the first moment I experienced a coin-operated arcade, I was struck by wonder and amazement at the progress we were making technologically. When I was born, in the late 1970s, there was little in our home in terms of tech.

But over the following years that changed tremendously. This article will explain how my life was impacted and transformed, and how it all culminated in me studying for a Ph.D. and finally settling as a computer science director at a university.

Before even considering this though, I want to rewind to the 1980s, when my interest in all things technological began. Everyone, in my personal experience, has had a different programming journey. I hope this story will bring back happy memories for those who had similar experiences. For those who were not around in the 20th century, it will give a glimpse into this amazing time for technology.

First Experiences

Programming came into my life when I first got a computer. In the early 1980s, I was bought an Acorn Electron that unfortunately was faulty. The replacement also had a string of issues which meant we had to take it back. That was computing back then - it only kind of functioned and certainly wasn’t reliable.

After this bad experience, I chose another type of computer and got a ZX Spectrum 48k. The first thing that greeted me upon plugging it into the family TV was a prompt asking for commands. Initially, the only commands I typed were to load tapes and play games, but the strange flashing cursor begging for input caught my attention.

When browsing my local library, which was huge, with thousands of books, I was naturally attracted to the technology and IT section. To my absolute delight, I found books relating to my new hobby. When I got home, I had no idea what the cryptic codes in the books were. After deliberating what to do, I decided to try typing the codes into my new computer. When I had finished typing one program, I tried running it and, to my complete surprise, a simple graphical demo sprang to life, drawing an endless tunnel of colored squares on the TV. This was my first taste of coding, and I can still remember being in a state of elation, feeling like I'd truly achieved something amazing.

Being of limited means was a blessing, because buying games regularly was an impossibility. Instead, I relied on books from the library and started typing in longer and more complex programs and games. I quickly learned that when they didn't run, I had to find the errors and retype the lines that had not been copied well. I must have gotten through two hundred books in the four years that I had the Spectrum, and I saved all the programs to audio tapes so I could re-load them whenever I wanted. I was the envy of the neighborhood; everyone my age wanted to come round and experience the collection I had amassed.

In these early years of my programming experience, I had by no means become a master of the trade. I barely understood most of the code I was typing. However, the increasing familiarity I was gaining with the code was the key to learning for me. I began to understand that “for loops” and “if statements” were things that allowed a coder to control flow and logic. While I was not able at this stage to write them myself, because I had no idea about how to create an algorithm, I did learn the mechanics of these commands while correcting and copying code from books.

In 1988, I got the Christmas present of my dreams, an Amiga A500. This was a huge expense for my parents at the time; I remember my father saving all year for it. This machine not only had floppy disks that could load programs in a matter of seconds, but also introduced me to multitasking, 32-color graphics (plus a HAM mode that allowed 4,096 colors), and stereo sound. This was my programming battle station for the next two years and proved to be a truly amazing tool that enabled me to learn music and video production, 3D modeling and a range of different programming languages.

Teenage Coding

The first programming language that I encountered on the Amiga was the version of BASIC that came with the operating system. This was instantly familiar because I had been programming BASIC on the Spectrum, and it allowed me access to the much more powerful Motorola 68000 - albeit with a big performance hit compared to other languages on the platform. The development environment was awful (Amiga BASIC was designed by Microsoft) but it allowed me to convince myself that I would be able to learn to code on the Amiga.

I spent a lot of time learning the wonders of a true graphical user interface. I also ventured into the realm of scripting. In 1992, I traded in my trusty A500 for an A600HD, my first machine with a hard disk (a whopping 40MB). This helped because it improved loading times and allowed me to store all my code and applications. It also introduced me to Workbench 2.0, which had ARexx, a scripting language that gave me a lot of control over the operating system. In no time, I was creating bootable disks with custom scripts and menus, and this gave me a basic understanding of operating systems, one of the skills that proved most useful in my career as a developer.

Other programming languages were also appearing on the platform. The two that caught my attention at the time were still based on BASIC but had much more powerful features than the older Microsoft BASIC: AMOS and Blitz Basic.

AMOS was a language that allowed advanced graphics, sprites, and games to be created without having to understand C or assembler. It also allowed me to integrate simple 3D wireframe graphics into my code and it was the first language that gave me creative freedom. Initially, I used it to develop text-based adventure games and educational programs. At the time, I was also studying French and I created a system for practicing and learning verbs. As an experiment, I sent a demonstration version of my software to a public domain (PD) software publisher and after a few modifications, they agreed to market it. To my surprise, people called and wanted to purchase the full version. This was the start of my commercial career in the software industry.

The other language that I learned at the time was Blitz Basic. This integrated much more tightly with the operating system and allowed me to build applications that looked more professional. It was my tool of choice after I picked up an even more powerful computer at a trade fair, the Amiga 3000. For the first time, I had a computer that was upgradable and, using the money earned from odd jobs at restaurants and supermarkets, I managed to buy a 24-bit graphics card (the Picasso IV) and started learning much more intensive coding techniques.

I was pushing the limits of BASIC at this point, and decided to take the bold step of learning 68000 assembly language. This was a huge challenge, but one I definitely do not regret undertaking. After months of study and much frustration, I managed to start coding games that could take advantage of the chipsets I was using. I got as far as developing side-scrolling platform games for myself, but the inevitable shift in the computer industry towards internet-enabled architectures and cloud-based systems meant I soon realized that this type of assembly language would not have a bright future.

Nevertheless, learning a low-level language allowed me to develop an understanding of how software and hardware really interact and this opened many doors for me in my future career.

Doing it my way

In 1995, I had the amazing luck to get a week of work experience at my favorite computer magazine, Amiga Format. While there, I did not just have an amazing time drinking beers and playing Sensible World of Soccer with the staff; I was also given a trial writing for them.

I managed to prove to them that I could write and started a career as a freelance journalist. This meant my time was absorbed in other types of work for the next few years. I was sent hundreds of floppy disks to test each week and had to write up a monthly section reviewing and promoting the best public domain software.

While I still had an interest in programming, my days were taken up writing articles in the last days of the magazine. However, the days of the Amiga were numbered (thanks to Commodore completely mismanaging the company), and the Internet was emerging as the platform that was beginning to dominate. I couldn't help but notice this trend, and while many people moved on to Windows as their platform of choice, I saw the potential of the fledgling Internet.

University Years

While I was still writing for Amiga Format, and in my initial days at university, my focus changed and I was experimenting with other platforms. At university, we were taught the office administrator's side of computing - Windows, Office, VB script and a plethora of proprietary software. We were also introduced to servers and UNIX and the basics of web design. I studied chip fabrication, hardware and the theory of computer science. Programming once again became the focus of my study, as it was the area that interested me most. I was more interested in becoming fluent in C and developing my skills in operating systems than learning how to create Excel automations, and this moved me away from the admin-focused world of the Windows platform and drove me towards open source, and in particular, Linux.

Being able to modify the core of the operating system and access its inner workings made learning C worthwhile and interesting. Contributing to open source also gave me a way to improve my coding skills and achieve something at the same time. It was an exciting time for open source at the end of the 20th century: the Internet was open for the taking, and large corporations like Microsoft were desperate to win dominance. They lost, of course, and open standards won.

Another thing emerging at the time was web development, and because I had an Apache server sitting at home, JavaScript became the focus of my development life. I learned everything I could about front end development and then started to learn back-end development with Java. My first major project was an open-source CRM and this became the focus for the next two years of my life.

I had a brief brush with hardware and engineering in my first job at British Telecom, where I worked for two years in the network design department. But my passion was the open-source software I was working on. I soon had some good fortune, though: I was offered a grant to study for a Master's degree in Information Systems. I wrote to the Arts and Humanities Research Board in the UK and explained my involvement in open source and how it was my dream to go into research.

I spent the next year putting what I had taught myself (Java and C) into a range of practical projects that I found through the university. This caught the attention of one of the professors there, who asked me if I would be interested in doing a Ph.D. that would involve development and prototyping. I jumped at the opportunity and spent the next five years developing a system in PHP/JavaScript and researching e-commerce and SEO.

Coding for a Living

While completing my Ph.D., I also made my first foray into the commercial world of development. I was offered the opportunity to form a company that would work on prestigious museum projects. I was the CTO and had the chance to design and develop location-aware mobile devices (these were based on PDAs).

This project meant I could truly use my skill set, particularly C, to develop something completely new. Since no one had done it before, I had quite a challenge. I managed to find a company in China that could provide active Radio Frequency Identification (RFID) tags and went about designing software that could make use of these. When I received the tags, however, there was minimal documentation, and most of it was in Chinese. After finding a translator, it became clear that this would be a huge task.

Mainly thanks to my experience with C and assembler in the 1990s though, it was not an impossibility. Despite the steep learning curve, I managed to work out how to connect to the RFID tags and create an application that could be used in a museum by a visitor. The key selling point of the system was that, as a visitor walked around, the device would show content and tell a story based on location (this was years before GPS and mobile phones made this type of system commonplace).

This work was something that allowed me to develop both business skills and my coding abilities, and I felt confident that I would be able to move on and achieve a lot in the coming years. After completing the projects that I was involved in, I decided to cut my ties with the company I was working with and focus on completing my Ph.D. I achieved this eventually and reached a crossroads in my career.

Teaching the New Generation

Finding work in a foreign country is never easy, but having a Ph.D., work experience and practical skills meant I did find a job after settling down.

Teaching programming has proven to be my vocation, as I enjoy seeing people go through what I went through in the 1990s. The elation when they complete their first application, the frustration and then joy of debugging and fixing errors and the satisfaction of making something that does what they intended. Research is something that I love doing, and this job means I get the opportunity to try new approaches and learn new skills while helping people discover coding for the first time.

The Future

I am not finished with learning, and I do not intend to ever stop.

I feel blessed to have had such a rich experience, and coding has truly transformed my life in the most positive way possible. It has allowed me to find my vocation and, in the process, to experience a huge range of occupations and experiences. I hope this story inspires others to take up the challenge of learning and becoming a proficient programmer. It is a skill that gives the ability to take control of technology and one that can open doors and create unimaginable opportunities.

Robert Polding

How worried do we need to be about encryption?

There is a lot of debate right now about the importance of encryption. I've seen the dark side of not using encryption on a network: in the early days of WiFi, a hacker hijacked my email credentials and my PayPal account was compromised (and yes, I did have some money taken from my account, but the bank covered it, thankfully!).

Since my unfortunate experience, I have always been an advocate of using encryption where necessary, whether on public WiFi (via a VPN) or when backing up my data onto encrypted hard disks. However, there is an essential point to take into consideration: encryption also hampers law enforcement efforts, and having encryption everywhere could make the police's job challenging.

In recent news articles, it has been reported (for example, https://www.reuters.com/article/us-apple-fbi-icloud-exclusive-idUSKBN1ZK1CT) that Apple has cancelled its effort to protect users' backups on its servers with end-to-end encryption. Many journalists have rushed to report that this is an example of Apple not taking user privacy seriously and giving in to the FBI's demands. But much more needs to be considered before accepting that as the only reason for the decision.

Firstly, Apple must receive a considerable number of requests from people who legitimately lose their devices and want to recover data. If it used end-to-end encryption and did not know the key, it would be impossible for Apple to recover that data. It's not as though the data is stored on the servers with no encryption at all; everything in iCloud is encrypted - just not end-to-end. This means that Apple holds a key and can unlock your data if you lose your password or device, or if the authorities request access. The only issue would come if the data were released and the authority in question then leaked some personal information or got hacked. Then there is the potential for misuse of data.
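To make the distinction concrete, here is a minimal sketch (my own illustration, not how iCloud actually works) contrasting a provider-held key with a key derived from a password only the user knows, using the Python cryptography package.

```python
# Illustrative only: two ways of encrypting a backup.
import os, base64, hashlib
from cryptography.fernet import Fernet

backup = b"contacts, photos, messages..."

# Model 1: provider-held key. The service generates and stores the key,
# so it can decrypt for account recovery or in response to a lawful request.
provider_key = Fernet.generate_key()                     # kept on the provider's servers
stored = Fernet(provider_key).encrypt(backup)
print(Fernet(provider_key).decrypt(stored))              # provider can always recover

# Model 2: end-to-end. The key is derived from a password only the user knows;
# lose the password and neither the user nor the provider can decrypt.
password, salt = b"correct horse battery staple", os.urandom(16)
user_key = base64.urlsafe_b64encode(
    hashlib.pbkdf2_hmac("sha256", password, salt, 200_000))
sealed = Fernet(user_key).encrypt(backup)
print(Fernet(user_key).decrypt(sealed))                  # only the password holder can
```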

Secondly, if the FBI, or any other official investigative agency, needs to find information, it is because someone has done something terrible. They are not going to demand that Apple hand over users' data for no reason. By using end-to-end encryption everywhere, we are letting people get away with crimes. Finding criminals needs to be possible for law enforcement; otherwise, murders, rapes and other horrendous crimes will go unsolved.

Thirdly, if people are so worried about dodgy information in their backups, putting it in a public cloud is never a good idea. You can back up securely yourself and ensure that your sensitive data is safe.

It is unquestionably a controversial topic, and everyone will have a different opinion. I do not mind if the police or any other authority has access to my data, if it will help in an investigation. If I were up to no good though, I’m sure my opinion would be different. While half of the Internet seems to be bothered by all this, I do not see it as a huge issue. Yes, encryption is good, but iCloud is encrypted. So, only people with good reason are going to be able to access the data. Keep using that VPN and stop worrying about your backups, unless the FBI are after you of course!

Robert Polding

Cloud Computing: The Invisible Revolution


For most people, the move to cloud computing has been an almost invisible transition from local storage and processing to network-based services. For many, it is akin to magic that makes everything available, all the time, no matter where you are. For businesses and network architects, it is the biggest game changer since the advent of networked computers, and it has allowed companies of any scale to gain access to secure, affordable and incredibly powerful infrastructure. It has fostered the "everything as a service" business model that many organisations and individuals rely on for income, and it has created an astounding amount of wealth for the infrastructure owners.

What is Cloud Computing?

Most users do not understand the concept of cloud computing. When undergraduate students are asked about what they consider cloud computing to be, at least in my experience, they describe the propagation of files across devices, and streaming movies from services such as Netflix and Amazon Prime.

There is far more to cloud computing than just streaming and file storage. The cloud is a group of services offered to large and small organisations and to individual consumers. Consumers do not understand it because it is anything but simple to define and it is continuously changing. One of the reasons it is difficult to understand is that it has been evolving at an incredibly rapid pace over the last decade; this is reflected in almost all areas of technology, where the pace of change is exponential. Every time a definition of the cloud has been generally agreed upon, the technology has outgrown it and changed in a way that means a new definition is needed.

The definition that has stuck is the National Institute of Standards and Technology's (NIST), which describes five key characteristics, four deployment models and three service models.

The first characteristic is that cloud computing is on-demand self-service. The service is available whenever a user needs it, without an administrator (i.e. a person) having to grant access. It is something we have started to take for granted with the services offered over the cloud. For example, Netflix can be subscribed to instantly and cancelled at any time. For organisations, access to cloud services is offered in a similar manner.

The second characteristic is broad network access. It means that cloud services have to be easily accessible wherever a connection to the internet is available. For most users this may seem blindingly obvious, but achieving it on an international scale has been hugely challenging, especially outside the United States.

The third characteristic is resource pooling. It relates to the assumption that not all clients will need to be fully utilising their cloud computing resources at the same time. Therefore, cloud providers can allocate the resources of one client to another when that client is idle. It is usually done through the use of virtualisation, which means the data centres can set up shared servers that increase the efficient use of the computing power available. Experts claim that this can increase the level of utilisation of the servers from as low as 10% to as high as 80-90%.

The fourth characteristic is rapid elasticity. It is the ability to meet the needs of the users of a cloud service and increase the capacity available quickly and automatically. In many modern data centres, artificial intelligence handles the management of cloud systems, with areas such as power management and resource allocation increasingly relying on complex and ever-evolving algorithms.

The fifth and final characteristic is that cloud providers offer a measured service. The service is monitored, and the exact usage of each client is reflected in their costs. If a client does not require a massive amount of storage and computing power, they do not have to pay for it; if they do need it, they can reach a certain threshold and automatically receive a higher level of service (whether that is more storage, RAM or processing power). It is essentially a pay-as-you-go model, much like a mobile phone that needs topping up when you run out of credit (but thankfully without the need to buy a top-up card to continue using the service).
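As a toy illustration of rapid elasticity and measured service (my own sketch, not any provider's actual control loop), the logic below scales simulated capacity as load changes and bills only for the instance-hours actually consumed.

```python
# Toy autoscaler: capacity follows demand (rapid elasticity) and the bill
# reflects only what was used (measured service). Numbers are hypothetical.
import math

PRICE_PER_INSTANCE_HOUR = 0.10   # hypothetical rate

def instances_needed(load_rps: float, capacity_per_instance: float = 100.0) -> int:
    """Instances required to keep utilisation around 70% of capacity."""
    return max(1, math.ceil(load_rps / (capacity_per_instance * 0.7)))

hourly_load = [50, 120, 400, 900, 300, 80]   # requests per second over six hours
bill = 0.0
for load in hourly_load:
    n = instances_needed(load)               # scale up or down automatically
    bill += n * PRICE_PER_INSTANCE_HOUR      # pay only for what actually ran
    print(f"load={load:>4} rps -> {n} instance(s)")
print(f"total bill: ${bill:.2f}")
```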

There are four primary deployment models for cloud computing. The first is the public cloud, which is the model that most people know. It involves a system that is managed by an external organisation (for example, Amazon Web Services (AWS), iCloud, Google, Microsoft, SAP and Oracle all offer this type of cloud computing solution). The customer is only responsible for the software that is installed on the cloud system, and the providers handle the day-to-day maintenance and security.

The second deployment model is private: in contrast to the public model, the service is managed by the organisation itself. The organisation manages and administers the system, which is often accessed through a corporate Local Area Network (LAN) or Wide Area Network (WAN), with remote access often through a Virtual Private Network (VPN). Private clouds often provide exclusive services that give an organisation a competitive advantage, or a higher level of security for data that cannot be put on a public cloud (for example, confidential and legally sensitive data).

The third deployment model is the community model. These are cloud systems that are available to several organisations. It is often used for systems that require an extra level of privacy and which cannot be used on a public cloud for a variety of reasons. It is also a model that is suitable for organisations that want to share a service and the responsibility regarding maintenance and administration.

The final deployment model is one that is popular with larger organisations: the hybrid model. It is more expensive because it involves using a combination of both a public and a private cloud. It allows organisations to maintain a competitive advantage by having their own, in-house managed solution performing their critical business processes. It also allows them to benefit from the services offered by a public cloud that do not require the in-house management and administration (for example for collaboration and non-critical services in the organisation). Hybrid is by far the most flexible and forward-thinking model for cloud deployment and is becoming the norm for larger companies.

The NIST definition of cloud computing has three service models. Firstly, Infrastructure as a Service (IaaS) is the provision of computer systems, i.e. virtual machines, storage, networking and underlying computing power. These systems replace the IT infrastructure that organisations traditionally have in their server rooms or private data centres. Secondly, Platform as a Service (PaaS) is the provision of tools that allow the development of custom cloud applications on a cloud platform. Thirdly, Software as a Service (SaaS) is pre-written software that is normally available through a web browser, typically involving a subscription to gain access (for example, Salesforce CRM).
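As one concrete example of the IaaS model, here is a hypothetical sketch using boto3, the AWS SDK for Python; the machine image ID and instance type are placeholders, and credentials and region are assumed to be configured in the environment. What used to require buying and racking a physical server becomes a single self-service API call.

```python
# Hypothetical IaaS provisioning sketch with boto3; the AMI ID and instance
# type below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",     # placeholder machine image
    InstanceType="t3.micro",    # placeholder size
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])   # a virtual server, provisioned on demand
```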

Cloud computing is, of course, not invisible. Visiting a data centre today is much like entering a prison. Armed guards and razor wire protect the vast arrays of servers, and they are generally in locations that we as humans find inhospitable. The colder and more remote the location, the more suitable for cloud data centres. These factors have allowed systems to run with less air conditioning and to tap into energy sources such as geothermal and renewable energy. Microsoft has even constructed a data centre under the ocean, where air conditioning is not needed, and wave power can provide energy. Computing has become more friendly to our environment thanks to the cloud, and the added physical security means the days of people physically breaking into server rooms in offices and installing rootkits are numbered. There will always be cybercriminals who are smart enough to take advantage of software vulnerabilities, and no system is ever completely safe. Nevertheless, confidence in technology is at an all-time high and businesses, in particular, are beginning to trust in the safety and reliability of cloud computers. 

The main advantages of cloud computing are that it is massively cheaper than acquiring hardware and building a custom solution. The costs are spread between all the users, so it is a win-win situation for clients and cloud providers. As previously mentioned, the cloud services are much more secure than a traditional server or data centre. The best security experts in the world are employed to secure the data centres, and they are protected like any valuable or sensitive resource. They offer scalability, elasticity, a measurable service model and give access to tools and platforms that were impossible before the advent of cloud computing.

There are some disadvantages though. In situations where the Internet is unreliable or inaccessible, cloud computing is not feasible. Offline storage, like that offered by Google, Apple and Dropbox, partly solves this but in the case of Software as a Service, lack of a decent connection makes the entire system a step back from traditional IT infrastructure. There have also been cases where cloud computers have proven unreliable. Recently, the Google cloud went offline for an entire business day, and it cost companies that rely on the service millions in lost revenue. There are also situations where having data ownership transferred to an external organisation can be illegal (for example, in the case of legal firms) or open to security threats due to flaws in the software used by the cloud providers.

History of the cloud

Now that we have an idea of what the cloud is, it is essential to understand its origins. The term cloud computing can be traced back to Amazon and its Elastic Compute Cloud (EC2) in 2006. Amazon invested billions in the 1990s in creating a worldwide network to support their e-commerce business. However, they never used the vast majority of the capacity, so they decided to offer this extra capacity to other organisations. Before doing this, though, they wanted to create a system that everyone would want to use, and they developed a toolset that would allow the deployment of advanced cloud applications. They first moved their entire e-commerce system onto the new cloud platform and ensured that it was not just good for their clients but also something they wanted to use themselves. Once they were satisfied with the system, they started offering the service to others.

Amazon Web Services (AWS) was born. To understand how successful this has been for them, we can look at how this system is performing today compared to their e-commerce system. Amazon is now the largest retailer in the world, but AWS represented 56% of their operating profit in 2016 (according to the New York Times). The reason this has become their primary source of profit is that, back in 2006, they offered something unique. AWS was affordable and offered more coverage of geographical locations than any other provider. This global coverage enabled services like Netflix to expand and become truly global. The tools and services Amazon offer are also incredibly flexible. Unlike other cloud providers (i.e. Microsoft, Oracle), they allow any software to be installed and run on the service. Much like the e-commerce side of the business, they have monopolised the cloud sector, and they have proven incredibly popular with businesses ranging from global enterprises to small startups.

AWS is not the only driver of cloud adoption. The network infrastructure of the internet and the advent of fibre optic connections replacing copper lines have meant the cloud has flourished; users began to experience no difference between local and cloud-based applications. The advent of mobile connections has meant that cloud applications are available anytime and anywhere, without the restriction of being on a particular physical network. Technological advancements mean consumers and businesses rely more on technology to make important decisions, and the reliability of the information from modern enterprise systems has driven improvements across all areas of business on an unprecedented scale. All this has resulted in trust in, and dependence on, technology to the point where the entire landscape of business IT has changed.

Another aspect that has affected software development is the speed and flexibility of the services offered by cloud providers. Changes to services and products traditionally required the purchase of new hardware, plus deployment and testing that could take days or weeks. Spinning up a new instance on a cloud system takes minutes or hours, and users can be testing the software immediately. This new development and deployment model has led to the DevOps movement, and users of services such as Google have become unwitting testers, with changes to their service being deployed on a daily basis to select groups of users.

Almost all the services we use today, whether in our work or personal lives, rely on cloud computing, and in the next two sections we are going to see how these different areas of our lives have been impacted.

Business use

The area affected most profoundly by cloud computing is business. It has enabled the convergence of different industries into single services. For example, Google is able to offer movies, music and YouTube videos through its cloud servers. It has also meant that many traditional industries have had to evolve and begin to offer their products as a service. Those that have not evolved have either seen a dramatic fall in their profitability or even gone bankrupt.

For most businesses, as mentioned previously, the main benefit has been a reduction in operational costs. The ability to have high-quality network-delivered services has meant they can have access to the most powerful infrastructure available without having to buy the physical IT infrastructure.

Upgrades and maintenance have become considerably easier for organisations. In the past, when computer systems were managed in-house, it was a considerable undertaking to ensure that software was updated and security systems maintained. While there is still a need to manage these on private and hybrid cloud systems, the use of public cloud systems means updates and upgrades can be left to the experts who manage the cloud data centres. New features can be added to platforms and rolled out to either all or select users depending on the maturity of the software (i.e. whether it is in beta or a final release state). Software as a Service (SaaS) provides users with upgrades seamlessly and means software management is a thing of the past. Users do not have to worry about which operating system they use, and administrators do not have to tackle complex software dependencies if they are using SaaS.

Some industries cannot use the public cloud. Restrictions and regulations in areas such as government and law mean cloud computing is not feasible. These areas of industry have to rely on private cloud systems or traditional Local Area Networks depending on how strict the regulations are.

In 2007, Dropbox was founded by two MIT students and this became the first successful provider of cloud storage. It provided a service that was embraced by consumers and businesses alike and provided a glimpse of the future. Google and Apple later developed systems with very similar features and sharing large files between devices and people became a reality.

Consumer use

Apple was one of the first companies to invest heavily in cloud technologies, mainly because the release of the iPod and later the iPhone created a need to share data between devices. The initial solution, which involved plugging all the devices into a Mac, was counter-intuitive and inconvenient for users. Apple changed its strategy in 2011, from having a home "hub" for entertainment (i.e. the Mac) to providing a way to sync all the devices in its ecosystem through cloud computing. The key strategy was for the consumer to have no idea that the cloud even existed. Consumers were only conscious of the fact that when a photo was taken on one device, or a file saved, or music bought, it would be propagated to all the other devices. This approach has since been adopted by most major technology providers, especially those involved in mobile technologies.

Music and movies were among the primary reasons for the adoption of cloud technologies. In 1997, Netflix started its DVD rental service, and ten years later, in 2007, it took the service online. In 2008, the Swedish streaming service Spotify was founded and took the model to the music industry. It took a while for other companies to follow suit. In 2011, Apple launched the iTunes Match service, which allowed customers to put their entire MP3 collection online and, if a track was part of iTunes, access it directly from the high-quality iTunes library which was, naturally, stored in the cloud (even if they only had poor-quality MP3s). In 2015, Apple Music was launched as a direct competitor to Spotify.

By the mid-2010s, the impact of Spotify, Netflix and Apple was so profound that even traditional cable companies had to change the way they delivered their services. For example, Telefonica in Spain created its own cloud streaming service so that it could remain competitive, transforming its offering to look and behave much like Netflix: consumers could watch any program on demand (for 7 days). Internationally, cable providers began offering on-demand services. Despite these efforts, cable companies have continued to lose customers as people "cut the cord" and move to services such as Netflix; it has become a trend that young people are not even signing up for the traditional services.

Recent years have seen a dramatic reduction in illegal copyright infringement thanks to the new digital services. The main reason for this is the affordability of digital products compared to the cost of the physical products they replaced. Even video games and software have adopted the model, and this has brought massive revenue to an industry that was almost destroyed by digital theft in the 1990s. By 2018, game developers were offering games on a subscription basis, much like Netflix, where customers can access hundreds of games for a small monthly subscription.

The impact of the cloud on everyday life has been profound. We no longer have to go to a computer to access all our files and services, and working between devices has become seamless. For example, if we need to access contacts, they are always available, and even areas like video games have been affected. Having a powerful computer is no longer necessary to play the most complex and processor-intensive games. Nvidia, one of the biggest graphics card manufacturers, has created a system that allows us to stream games from cloud data centres. All the graphics and processing are done on their servers and, with a fast internet connection, it is possible to play games on any computer with the same resolution and speed as a machine costing thousands of dollars.

The future, according to many technologists, is Web 3.0. It fundamentally relies on cloud technologies to provide us with an "intelligent" internet that is accessible through virtual assistants that utilise artificial intelligence. Cloud computing means the complex and computationally intense tasks of understanding natural speech and working out what a person needs (not just what they said) can be offloaded from the relatively underpowered mobile devices we use to interface with the web to the scalable and ultra-fast systems in cloud data centres. The web will be able to predict and figure out what we want. In the future, the combination of mobile devices and the intelligent web will mean we no longer need to search for things and interact with the Internet manually as much.

Privacy is the biggest concern when using the cloud, and technology companies have taken dramatically different approaches, each with benefits and drawbacks. Google presumes that its consumers do not mind their personal data being processed in the cloud; for example, it analyses the photos and information that users upload to provide smart image tagging and recognition. Apple, on the other hand, believes its consumers would not want it processing their data in the cloud and restricts processing to their mobile devices. Google's method is faster, but with the compromise that users do not maintain control and complete privacy; Apple's users have a less speedy service but maintain control and privacy.

Conclusions

Cloud computing has impacted almost all aspects of our digital lives and will continue to do so as technologies become even more pervasive and entrenched in our routines. Most people are entirely unaware of the cloud and how they use it, mainly because it is complicated and challenging to define. Businesses have seen massive cost savings and improvements in availability and security, and they are more confident that their systems are scalable and can meet all their needs to support decision-making processes. Consumers have seen the way they buy and view media change, to the point where the world of physical media that existed 20 years ago seems nothing but a distant memory. Mobile and cloud have allowed devices to stay instantly in sync and work together seamlessly. The future is going to see us speaking to and interacting with artificial intelligence systems that can tap into the vast data stored in cloud data centres. This will bring a new kind of world wide web that will be able to intelligently help us in almost every aspect of our lives, whether business or personal. The cloud has enabled this generation of applications and systems and is going to bring us far more life-changing advances in the coming years.

Robert Polding

Databases: Evolution and Change


Databases are a part of everybody's daily routine; even people who do not own a computer or mobile phone interact with them regularly. When we take out money from an ATM, check our bank balance, shop online, view social media or perform almost any digital interaction, we are accessing a database.

“Probably the most misunderstood term in all of business computing is database, followed closely by the word relational” (Harrington, 2016). Thanks to a mass of misinformation, many businesspeople and technology workers are under the false impression that designing and implementing databases is a simple task that administrative staff can easily do. In reality, designing and implementing a database well is a huge challenge that requires analysis of an organisation’s needs and careful design and implementation. 

Some people claim that traditional structured databases are a thing of the past. While this may be true from some perspectives (for example, for developers with websites that have millions of users in areas such as social media), for the rest of us structured databases are still very much a part of our lives. Changing requirements and the evolution of the Internet have meant that new types of databases have emerged but they have specific uses.

Databases are essentially software applications. A database management system (DBMS) is the software that provides data to other applications, enabling all the digital information systems that we interact with today. Often, a DBMS is simply referred to as a database. There are many vendors and solutions with differing standards and uses. Data is shared using a variety of standards, but they all serve the same primary purpose: to provide applications with data. The applications then process the data and turn it into something useful for the users: information.

The primary objective of this article is to define and explain databases in a way that anyone can understand. A one-size-fits-all database is impossible, and this article argues that there are different types of databases for different types of technology projects. It explores the history of databases, looks at the differences between traditional and modern models for data storage and retrieval, and finally examines the new data challenges that we are facing in business intelligence and big data.

Early history of databases

Before databases existed, everything had to be recorded on paper. We had lists, journals, ledgers and endless archives containing hundreds of thousands or even millions of records contained in filing cabinets. When it was necessary to access one of these records, finding and physically obtaining the record was a slow and laborious task. There were often problems ranging from misplaced records to fires that wiped out entire archives and destroyed the history of societies, organizations and governments. There were also security problems because physical access was often easy to gain.

The database was created to try and solve these limitations of traditional paper-based information storage. In databases, the files are called records and the individual data elements in a record (for example, name, phone number, date of birth) are called fields. The way these elements are stored has evolved since the early days of databases.

The earliest systems used the hierarchical and network models. The hierarchical model, developed by IBM in the 1960s, organised data in a tree-like structure, as shown in fig. 1.

Fig. 1 The hierarchical database model

The hierarchical model represents data as records which are connected with links. Each record has a parent record, starting with the root record. This is possibly the most straightforward model to understand because we have many hierarchies in the real world - in organisations, the military, governments and even places like schools. Records in the hierarchical model contained one field. To access data using this model, the whole tree had to be traversed. These types of database still exist today and do have a place in development, despite the significant advances in the technology. They are, for example, used by Microsoft in the Windows Registry and in file systems, and they can have advantages over more modern database models (speed and simplicity). However, there are also many disadvantages, the primary one being that they cannot easily represent relationships between types of data. This can be achieved through complex methods (using “phantom” records), but to accomplish this, the database designer has to be an expert who understands the fundamental workings of these systems.
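To make the traversal point concrete, here is a minimal sketch of a hierarchical store in Python. The record names and structure are illustrative only (they are not taken from IMS, the Windows Registry or any real product): every record has exactly one parent, and locating a single record can mean walking the whole tree from the root.

```python
# Illustrative hierarchical records: each node has exactly one parent.

class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

root = Node("Company", [
    Node("Sales", [Node("Order 17"), Node("Order 18")]),
    Node("HR", [Node("Employee 4")]),
])

def find(node, value):
    """Depth-first traversal: in the worst case the whole tree is visited to locate one record."""
    if node.value == value:
        return node
    for child in node.children:
        hit = find(child, value)
        if hit is not None:
            return hit
    return None

print(find(root, "Order 18").value)   # -> Order 18
```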

The hierarchical database did solve many of the problems of paper-based systems mentioned above. Records could be accessed almost instantaneously. It also had a full backup and recovery mechanism, which meant the problem of files lost to damage was a thing of the past.

In 1969, scientists at the Conference on Data Systems Languages (CODASYL) released a publication that described the network model, the next significant innovation in databases, which overcame the restrictions of the hierarchical model. As shown in fig. 2, this model allows relationships, and it has a “schema” (a diagrammatic representation of those relationships).



Fig. 2 The network database model

The main difference between the hierarchical model and the network model is that the network model allows each record to have more than one parent record (as well as multiple children). In fig. 2, the “Client”, “Supervisor” and other boxes represent what in database terminology are called entities. The network model allows entities to have relationships, just like in real life. In the example, an order involves a customer, supervisor and worker - as it would if a client walked into a store and bought a product.
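As a rough illustration of that difference, the sketch below (plain Python, not CODASYL syntax; the entity names loosely follow fig. 2 but the code itself is hypothetical) shows a single order record owned by several parent records at once - something a strict tree cannot express.

```python
# Illustrative network-model records: the same record can have several owners.

class Record:
    def __init__(self, name):
        self.name = name
        self.children = []   # records this record owns

    def connect(self, child):
        self.children.append(child)

client = Record("Client: Acme Ltd")
supervisor = Record("Supervisor: Dana")
worker = Record("Worker: Lee")
order = Record("Order 1001")

for owner in (client, supervisor, worker):
    owner.connect(order)   # one order record, three parent records

# Each parent reaches the shared order directly; no single-tree traversal is required.
print([owner.name for owner in (client, supervisor, worker) if order in owner.children])
```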

The network model did improve on the hierarchical model, but it did not become dominant. The main reason is that IBM continued to use the hierarchical model in its more established products (IMS and DL/I), and researchers soon came up with the relational model, which was much easier for designers to understand and had a better programming interface. The network and hierarchical models were still used throughout the 1960s and 70s because they offered better performance; the mainframe systems of that era needed the fastest possible solutions because the hardware was extremely limited. However, the 1980s saw tremendous advances in computing technology, and the relational model started to become the most popular.

The relational model was, like the network model, described in a publication in 1969. The relational model describes the data in a database as being stored in tables, each containing records with fields. An example could be a customer table, which could include the following fields:

Customer: 

  • customer id

  • first name

  • last name

  • street address

  • city

The type of data for each field is predetermined (for example, text, number, date), and this helps ensure there are no inconsistencies and that the output is what the applications need (it helps, for example, determine how to sort data). In a relational database, these tables can have relationships, and different types of relationship exist. Common types include:

  • One-to-One

  • One-to-Many

  • Many-to-Many

These allow the designer to show how one table relates to another. For example, a customer will probably buy many products; therefore one customer can be associated with many products (this is a one-to-many relationship). These relationships also allow the database designer to ensure the database will work well when applications access it, and they help with troubleshooting problems.

Relationships can be mandatory (or not), and this helps to maintain the integrity of a database. For example, if a product has to be associated with a manufacturer to exist in a database, then a rule can exist that only allows the addition of products if they have an associated manufacturer. It means that there is less scope for error when the database is deployed. 
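As a minimal sketch of these ideas, the snippet below uses Python’s built-in sqlite3 module to create two typed tables, link them with a one-to-many relationship, and make that relationship mandatory with a NOT NULL foreign key. The table and column names are illustrative, not taken from any particular system.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")   # have SQLite enforce referential integrity

con.execute("""
    CREATE TABLE manufacturer (
        manufacturer_id INTEGER PRIMARY KEY,
        name            TEXT NOT NULL
    )""")
con.execute("""
    CREATE TABLE product (
        product_id      INTEGER PRIMARY KEY,
        name            TEXT NOT NULL,
        price           REAL,
        manufacturer_id INTEGER NOT NULL,   -- mandatory relationship
        FOREIGN KEY (manufacturer_id) REFERENCES manufacturer (manufacturer_id)
    )""")

con.execute("INSERT INTO manufacturer VALUES (1, 'Acme')")
con.execute("INSERT INTO product VALUES (1, 'Widget', 9.99, 1)")   # accepted

try:
    # Rejected: a product may not exist without an associated manufacturer.
    con.execute("INSERT INTO product VALUES (2, 'Orphan', 5.00, 99)")
except sqlite3.IntegrityError as err:
    print("Rejected:", err)
```

The constraint does the work at the database level: the application does not have to remember the rule, because the DBMS refuses any insert that would break it.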

Most relational databases use a standard method for accessing the data: the Structured Query Language (SQL). SQL allows an application to gain access to the data needed by a user. It can retrieve everything in a table (or even a database) or just one individual field, determined by a set of criteria. For example, an application may only require the name of the professor associated with a particular course and nothing more from the tables.
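A query for that professor-and-course example might look like the following sketch (again Python’s sqlite3, with illustrative table names and sample data); only the single field the application asked for comes back.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE professor (professor_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE course    (course_id INTEGER PRIMARY KEY, title TEXT NOT NULL,
                            professor_id INTEGER NOT NULL REFERENCES professor);
    INSERT INTO professor VALUES (1, 'Dr. Garcia'), (2, 'Dr. Chen');
    INSERT INTO course    VALUES (10, 'Databases', 1), (11, 'Networks', 2);
""")

# Only the professor's name is retrieved -- no other fields, no other rows.
row = con.execute("""
    SELECT p.name
    FROM professor AS p
    JOIN course AS c ON c.professor_id = p.professor_id
    WHERE c.title = ?
""", ("Databases",)).fetchone()

print(row[0])   # -> Dr. Garcia
```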

The main advantage of the relational model is that it provides consistency in the data. The model implements a set of constraints and these ensure that the database functions as intended. The relationships and resulting constraints are developed through studying the environment in which the database operates. It is one of the key reasons that database design is not as simple as most people think. The real-world relationships between the entities have to be determined so that the database functions correctly. This analysis involves studying the previous paper-based record systems and interviewing employees and suppliers in an organisation. Project managers or analysts have to do a strict and thorough requirement analysis before a database can be populated and used. It ensures that a system will not be able to do anything that would cause errors or incorrectly represent the real-world situation of the data.

1980-1990

Since the relational model was created in the late 1960s, it has changed little. Modern businesses still use these systems to record their day-to-day activities and to help them make critical strategic decisions. Database companies are among the largest and most profitable organisations in the world, and companies founded in the 1960s and 70s are still thriving today.

The key identifier for a traditional database is the type of data that it handles. It contains data that are consistent and whose fundamental nature does not change significantly over time. For decades, this was more than adequate for all but the most complex types of data storage.

In 1977, Larry Ellison, Bob Miner and Ed Oates formed a company in California called Software Development Laboratories (SDL) after reading about IBM’s System R database (the first implementation of SQL). They aimed to create a database compatible with System R. In 1979 the company was renamed Relational Software, Inc. (RSI) and then, in 1982, Oracle Systems Corporation. Oracle would go on to be the biggest and most profitable database vendor in the world. They developed their software in the C programming language, which meant it was high-performance and could be ported to any platform that supported C.

By the 1980s, there was more competition in the market, but Oracle continued to dominate in the enterprise. Towards the end of the 80s, Microsoft developed a database for the OS/2 platform called SQL Server 1.0. In 1993, they ported this to the Windows NT platform and, thanks to the adoption of Windows technology at the time, it became the standard for small to medium-sized businesses. The development environments that Microsoft created in the mid-to-late 90s (Visual Basic and then .NET) meant that anyone, not just long-term experienced developers, could harness the power of databases in their applications. By 1998, they had released SQL Server 7.0, and the product was mature enough to compete with the more established players in the market.

In the early 90s, another database was created that would have a more significant effect than any other, at least for the online market. The mid-1990s brought a revolution in software development: the open-source movement, which arose partly to counter Microsoft’s dominance and tight control of the code used on most PC systems of the decade. Its supporters did not believe in proprietary, commercial software and instead developed software that was free to use and distribute (with the code publicly available). In 1995, the first version of MySQL was released by MySQL AB, the Swedish company that funded the open-source project. This software was the first significant database of the Internet and continues to be used by companies like Google (although not for search), Facebook, Twitter, Flickr and YouTube. The open-source licence gave freedom to website developers and meant they did not have to rely on companies like Oracle and Microsoft. It also worked well with the other open-source software that created the foundation of the Internet we use today (Linux, Apache, MySQL and PHP - the LAMP stack - became the most common setup for websites). MySQL AB was eventually acquired by Sun Microsystems, which was subsequently acquired by Oracle.

In the following years, many other open source databases were created. When Oracle acquired MySQL, a founder of the MySQL project made a fork of the project (i.e. he took the code and started a new project with a different name). This new project was called MariaDB. There are now numerous open source databases that have different licenses and ideologies.

Post-2000 and NoSQL

So far in this article, all the databases mentioned have used the Structured Query Language (SQL) as the main way to store and retrieve data. In 1998, a new term was coined: NoSQL. It refers to “non-SQL” databases that use other query languages to store and retrieve data. These types of databases have existed since the 1960s, but it was the Web 2.0 revolution that brought them to the attention of the technology world.

Web 1.0 was the first iteration of the Internet, when users consumed content created by webmasters and their teams. Web 2.0 was the shift to user-generated content and a more user-friendly internet for everyone. Sites like YouTube and social media epitomise this phase of the Internet. For databases, it meant the needs of developers and administrators had changed. A vast amount of data was being added to the Internet by users every second, cloud computing unlocked massive storage and processing capabilities, and the way we use databases changed.

In this age of technology, requirements shifted towards simplicity of design and scalability because of how quickly the new Internet was growing. It was also essential to have 24/7 availability, and speed became of utmost importance. Traditional relational databases struggled particularly with the scalability and speed required, and because NoSQL databases use different data structures (e.g. key-value, graph, document), they were generally faster. They were also viewed as more flexible because they did not have the same constraints as traditional relational databases.
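As a rough illustration of the document approach, the sketch below stores an entire order as one nested document (here just a Python dict serialised as JSON; the field names are hypothetical, not from any specific NoSQL product). Everything the application needs is read back in one piece, and later documents can carry new fields without a schema change.

```python
import json

order_document = {
    "_id": "order-1001",
    "customer": {"name": "Maria Lopez", "city": "Madrid"},
    "items": [
        {"product": "Widget", "qty": 2, "price": 9.99},
        {"product": "Gadget", "qty": 1, "price": 24.50},
    ],
    "status": "shipped",
}

# One read returns the whole order -- no joins across tables are needed.
print(json.dumps(order_document, indent=2))
print(sum(item["qty"] * item["price"] for item in order_document["items"]))
```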

There were some disadvantages to NoSQL: in particular, it could not perform joins across tables, and there was a lack of standardisation. For the new generation of web developers, though, NoSQL was better. It was one of the main reasons for the massive innovations of the first two decades of the 21st century, because website (and later app) development was made much easier and these databases could cope with the ever-growing World Wide Web. Relational databases continued to have their place, despite the shift away from them in the online world; businesses still needed the reliability, consistency and ease of programming for their business systems.

Business intelligence

Computers have transformed the way businesses operate. In the past, decisions were made based on the experience of the most highly paid managers and executives. However, trust in computers and information systems is now at a new high. This is due to the reliability of systems that work in the cloud, advances in technology, and the fact that decisions based on data are proving more reliable than those based on the intuition of experienced managers and executives (i.e. guesswork).

Business intelligence is the analysis of data and information in an organisation to find insights and trends that can help make decisions. These decisions are not just those taken by executives, but ones taken throughout an organisation, from the smallest, most mundane choices that secretaries and administrators make to decisions that put millions of dollars at stake.

Databases have allowed companies to develop incredibly sophisticated enterprise resource planning (ERP) systems that gather data from every part of an organisation and store it all in a central database. Data is collected from factories, offices, remote workers, sensors and anywhere else that useful and quantifiable data exists. Companies like Oracle and SAP provide solutions that can cost up to $15m for global organisations but which can save them up to 50% in operating costs (taken from the case study: Orange/France Telecom) thanks to improved efficiency and better forecasting.

Business intelligence (BI) systems are not suitable for all types of organisations. The data has to be accurate for the system to give information that can be used in decision making. If an organisation cannot gather the data in real time (for example, due to a poor connection to the Internet), then a BI system will harm the organisation because decisions will be based on out-of-date information. The insights that BI systems give have to be carefully chosen and relevant; if not, they will merely reinforce information the company already knows. The information has to be timely, so it is available when it is needed. It also has to provide conclusions that are realistic: if a BI system concludes that the competition needs to be eliminated, then in most cases it is a useless conclusion, because eliminating the competition is not possible.

Executives and managers can now see real-time information on their organisations and use it to understand more about the decisions they need to make. Systems have to be designed to provide the right information to the right person at the right time. This has led to a trend of firing more experienced managers and replacing them with younger, digitally native employees. One manager can be replaced with three young people for the same cost, a disturbing trend (at least for the older population) that is currently being seen across the developed world.

Other business databases

Databases also allow organisations to work more effectively with their customers and suppliers. They augment workers, allowing them to do their jobs better and faster. They have also created the digital businesses we use every day, like Amazon and eBay.

Customer Relationship Management (CRM) systems allow organisations to build strong customer profiles from the moment they become a lead (i.e. when a customer first contacts an organisation). They allow for targeted marketing, better communication and are also becoming more connected with social media and other platforms that are commonly used for customer service and marketing. 

Supplier management has become much easier thanks to Supply Chain Management (SCM) systems. These allow organisations to do the (previously) impossible. For example, they can fulfil orders made at the last minute and automatically coordinate thousands of suppliers and logistics companies to ensure products reach customers on time. SCM systems can be used to look at the feasibility of a customer request and to ensure that enough of a product will exist at times of peak demand. Large-scale SCM is always a challenge, and even companies like Nintendo and Apple cannot cope with the level of demand their products attract, despite having state-of-the-art systems.

Traditionally, ERP, CRM and SCM systems have been the domain of multinationals with multi-million dollar budgets. The startup culture of the last 20 years has spawned alternatives to the SAPs and Oracles of the world. One of the best examples is Salesforce, a CRM provider that takes advantage of recent mobile and cloud services and offers its system using the Software as a Service model (so the software is cloud-based and delivered through apps and web browsers). This type of service is much cheaper than traditional providers, which means even the smallest startup can afford to use and benefit from a customer database. Open-source systems (for example, SugarCRM) are also freely available and can be deployed with no upfront cost; however, support contracts and the hiring of programmers and administrators will never be free.

Big data

Before the cloud made storage affordable, the only people who could analyse unstructured bulk data were scientists. The European Organisation for Nuclear Research (CERN) has been analysing unstructured data since the 1960s. In the Large Hadron Collider, they have to analyse particle collisions that occur up to 600 million times per second. These analyses are done using countless rapidly taken photos, which involves a massive amount of storage and sophisticated algorithms. It was these scientists who first started analysing what we now call big data. Traditionally, big data has three primary attributes: volume, variety and velocity. Volume refers to the amount of data (i.e. a high volume), variety refers to the fact that it is unstructured, and velocity refers to the rapid rate at which it is created.

Big data is not just voluminous data (i.e. a lot of it); it is also data that is unstructured. Data is no longer only produced by employees, sales systems and factories; we now have to deal with data from social media, sensors, video, audio, scanned documents and many other sources. Analysing this data is almost as important as analysing the traditional data we get from our business intelligence systems (indeed, many modern BI systems are beginning to analyse unstructured data sources too).

There is a lot of data to analyse today. Typically, when the term big data is used, petabytes or even exabytes of data are being analysed. Storing this amount of data brings many difficulties: storage media fail, individual computers cannot cope with the volume, and writing algorithms to handle all the different types of data is a considerable challenge.

Google was one of the first companies to be confronted with the problem of dealing with a vast amount of data. They wanted a way to improve their batch processing of the World Wide Web, and by running lots of tasks in parallel across many individual computers, they achieved much better results. They published a framework that they named MapReduce. Google has since moved on to other frameworks for its search, but MapReduce was significant because it led to the formation of the open-source Hadoop project, whose software allows anyone to set up large-scale data analytics, either on dedicated hardware or in the cloud.
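To give a flavour of the idea, here is a minimal word-count sketch of the two MapReduce phases in plain Python (standard library only; this illustrates the pattern, not Hadoop’s actual API): documents are mapped to (word, 1) pairs in parallel, then the pairs are grouped and summed in a reduce step. Real MapReduce jobs run the same two phases across many machines.

```python
from collections import defaultdict
from multiprocessing.dummy import Pool   # a thread pool is enough to illustrate the parallel map

documents = [
    "the cloud stores data",
    "the web runs on data",
    "data drives decisions",
]

def map_phase(doc):
    """Emit one (word, 1) pair for every word in the document."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    """Group the pairs by word and sum the counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

with Pool() as pool:
    mapped = pool.map(map_phase, documents)   # map step, run in parallel

word_counts = reduce_phase(pair for doc_pairs in mapped for pair in doc_pairs)
print(word_counts)   # e.g. {'the': 2, 'data': 3, ...}
```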

By analysing this massive amount of data, organisations can examine product launches, consumer reactions, marketing campaigns, and customer support (and much more, of course). In the future, big data will have even more of an impact. The Internet of Things and the new sensors we are going to have in smart factories, connected cars, smart cities and smart homes will mean that there will be much more data generated in every part of society.

Big data affects much more than just business and nuclear research. Police forces are using big data to analyse trends in crime across all the historical data they hold, combining it with information from social media to predict when and where crimes and public disturbances will take place. Google is using searches to predict many things in society: Google Flu Trends used search analysis to predict where outbreaks of the flu virus would occur, and it managed to predict outbreaks accurately two weeks before medical experts and traditional warning systems could. Big data is also being used by meteorologists, seismologists and scientists of all kinds to analyse the past and see what the future is likely to hold.

Conclusions

Databases have come a long way since their creation in the 1960s. Initially, they were a solution to the problem of storing and protecting the things we wrote down and making them accessible more quickly. Over time, they have become integral to our society, and we rely on them for banking, security, policing and the services of our digital lives. For companies, business intelligence systems are helping to make more accurate decisions based on real facts, rather than guesswork based on experience. Big data is helping us find new insights in the data we have generated in the past and will be vital in understanding the society of the future. Without databases, we would still be losing valuable information, and the digital revolution would not have happened. The coming industrial revolution, also called Industry 4.0, will be driven by data, and it will transform the lives of every consumer and business in the world.

Bibliography

Harrington, J. L. (2016). Relational database design and implementation: Clearly explained (4th ed.). Amsterdam: Morgan Kaufmann/Elsevier.

Video Case Studies

Orange and Oracle ERP - https://www.youtube.com/watch?v=jsqFQiCmaFs

CERN and Big Data - https://www.youtube.com

Read More
Robert Polding Robert Polding

How Artificial Intelligence will drive change


One topic that my students are interested in and terrified of at the same time is the effect that Artificial Intelligence will have on their future, both in their personal and professional lives. They are concerned that the degrees they are studying will not lead to jobs as AI begins to master new abilities. Systems like IBM's Watson and Google's DeepMind are already capable of performing many complex tasks and each year there is progress towards AI that can be genuinely useful in everyday life.

Watson can produce business analytics reports in a fraction of the time it takes a consultant, and DeepMind can solve problems without being given specific instructions. However, we are a long way from super-intelligence, and most AI we see today is weak. Strong AI is on the horizon, and when the intelligence of AI surpasses that of humans, we will be relying on its abilities to get through the day.

There has been much speculation about how AI will impact work. To put this in perspective, think about the current workplace. Many people forget the effect computers and technology have already had: software has almost replaced finance departments, and the stock markets are governed by algorithms.

We now get our work done faster and more efficiently; we have already begun to benefit from this. We can communicate more easily and without needing to travel (as much) to do our jobs. AI will further improve this augmentation and will mean we can improve our work-life balance like never before. We are not going to become obsolete, but enhanced.

Every industrial revolution has made people afraid that new technologies will replace their skills. In fact, technology does not just do the same work; it usually does it better, faster and more efficiently. When factories came into existence, people feared they would be made obsolete, but we found new ways to employ people. The same will occur in the fourth industrial revolution. Machines will be able to work harder and longer than factory workers and will not get tired or ill like humans. Factory work is a sad prospect for a person when it involves primarily physical and repetitive tasks, and automating this sector of work is not just going to increase profit but also reduce degrading labour.

As sites like Amazon's Mechanical Turk prove, there is a considerable need for a human element in AI. Mechanical Turk is an online marketplace for workers who can help AI, performing tasks such as tagging and classifying content. In the coming years, there will be much more demand for these kinds of workers, and it will likely become more common for people to work from home and spend more time with the ones they love, rather than performing tedious, monotonous tasks in a factory. Recent trends suggest that more people are needed to moderate video and social media sites, and there will undoubtedly be more need for workers in this area in the future.

New kinds of work are also emerging. The gig economy is already proving popular, with companies like Uber and Deliveroo providing services that are disrupting traditional industries. Being able to choose when they work and using technology to augment their skills (for example, Uber drivers using mapping apps instead of learning the streets) means people can have more flexible workdays and adapt to their families' needs. Upwork and Indeed are providing places for people with technical skills to be paired with clients, and this is opening up new opportunities and markets for people all over the world.

The future may also see a basic minimum income if there is a real shortage of jobs, allowing the economy to continue even though many people may not be working. This will transform our lives and could result in us spending more of our time helping friends and family rather than helping large corporations to make more money.

Algorithms have already become part of our daily routine. The nature and purpose of these algorithms will have a profound effect on people in the future. They will choose what we see (for example on social media), how much we pay for services, the opportunities we have based on our profiles, and they will become more integral to decision making in organisations. Bad players will always exist, and choices we make about our digital life will affect the physical world in a way that has never been seen before.

As always, we will find a way for our society to continue. While change is coming, and at a faster rate than before, it will mean a future where machines and people work together in ways previously imagined only in science fiction.

Read More
Robert Polding Robert Polding

How Biotechnology will change our future


Biotechnology is not a new form of technology. We started manipulating nature when we first created alcohol and vinegar. Modern biotechnology comes from research into recombinant DNA in the early 1970s. This involved manipulating and changing DNA to have new and enhanced properties. We are already seeing more people gaining access to food and new drugs that are improving people's lives. This article is going to explore several areas of Biotechnology. The topics will include genetically modified (GM) food, drug research, 3D printing, DNA sequencing and longevity. The aim is to make the subject understandable and accessible to everyone.

The first topic is both controversial and life-saving at the same time. GM food has been an area of research that has divided opinion. Many consider it in a negative light because it goes against the natural world's rhythm; they fear that playing with nature will cause untold harm to us and the world around us. There is no concrete answer to their concerns: biotechnology is such a new field of research that its full effects, whether positive or negative, have not yet manifested. The use of GM food has had two primary purposes: feeding those without access to food and maximising profits. GM has already allowed crop development in areas where plants do not flourish - for example, in dry, hot, dark and other environments hostile to life. GM crops have enabled seeds to grow in the most extreme situations. The other use for GM has been in aesthetically improving commercial vegetables. This type of research is not as ethically sound. There is always a risk with GM products; they could result in unknown changes to the natural world. If the only result is a physical change to a plant, then the research will only benefit the profits of food retailers. Playing with nature for profit has become the norm in some countries. There have also been advances in growing organisms in other ways, and there is less risk to the natural environment when GM organisms are kept in the lab rather than the farmer's field.

We have seen that seeds have improved, but this is only a small part of biotechnology. New ways of creating biological organisms have come to light recently, going beyond traditional production techniques. Some essential drugs come from natural sources, and this limits the available supply. New technologies now exist that allow us to create unlimited quantities of life-saving medicine. Vast amounts of synthetic artemisinin, a malaria treatment, are being made using yeast; in the past it was impossible to grow enough of the natural precursor to provide for all the sufferers. Artemisinin is not the only drug or substance that can be manufactured using a technique like this. Food manufacturers have also moved into this area. For example, the "impossible burger" contains a substance that smells and looks like the natural juices in meat, resulting in a vegetable-based burger that is almost indistinguishable from real meat. This could have profound effects on our future: mass production of organic compounds in labs could mean we rely less on agriculture, and new meat alternatives could mean that the pollution and ethical dilemmas of animal farming become a thing of the past. It could also provide the raw materials for 3D printers.

Additive manufacturing, or 3D printing, is being combined with biotechnology. Many academic papers already explain how scientists have started developing 'bio-printing': the construction, cell by cell, of real biological organs using advanced 3D printers. It will mean we can grow new livers, lungs, hearts or any organ an animal or person needs. Because the 3D-printed implants are made from the cells of the patient, there will be no rejection as with traditional transplants. This will mean we can help terminal patients, and they will not have to wait for operations. This type of technology is not yet with us today, though: scientists are only starting to move from concept to animal testing, and experts predict at least another decade before it becomes a reality for humans.

Biotechnology is benefiting us today, even though some of the technologies are but blips on the horizon. It took 13 years and $3 billion to sequence the human genome for the first time; now it can be done in a matter of hours for hundreds of dollars (and this is improving every year). This means we can analyse DNA and find out why a patient has been suffering, or predict and prevent future illnesses and disease. While there will undoubtedly be moral panic at these possibilities, it presents an opportunity for us to suffer less in the future. The manufacture of new organs and the ability to analyse a person's genetic makeup have many potential benefits. One of the most puzzling questions for scientists has been whether our brain could cope with a body that lasts much longer. Presently, our minds begin to encounter many problems as we grow older, the most debilitating being diseases that cause the brain to deteriorate, such as Parkinson's and Alzheimer's disease. Recently, scientists have made phenomenal progress in researching ways to reverse these conditions: rats with similar symptoms were given genetically modified cell transplants and started to show dramatic improvements. Genetic treatments could mean we live many more years than today, and they offer a glimmer of hope that growing old may not mean losing who we are.

Biotechnology is revolutionising the future of medicine and food. It is a field born from human ingenuity, and it has allowed us to change nature. Many people are afraid of this technology, but when used for the right reasons it can improve the lives of the most vulnerable in the world. It has allowed us to feed the poor, create new foods and even manufacture biological organisms using 3D printing. Medicine is being revolutionised, and we will potentially increase our lifespan and resistance to disease. While some of these things are directly benefiting us today, it will take many more years before biotechnology makes its true mark on the world. People may be able to live for hundreds of years more, diseases could be eradicated, and everyone in the world could be fed thanks to this exciting and innovative area of science.

Read More