
The KCDCC Curious Room Project

Recent Entries


27th April 2006

illykai10:45am: This is a quick summary of yesterday's meeting:

  • Mary emphasized that the purpose of our current application should be to display information; it is not intended as an art gallery. The focus of the application should be on informing. The information display needs to be much more informative, for instance through more specific images in the database (which Owen should make). People should leave knowing what it's actually about.

  • What learning is to be done by an agent that manages such an application? The learning itself is embodied in the behaviour policies that are developed in the action table of the agent's Q-learner. Our hope is that these learned actions will be related to people heading to the information display and staying there.

  • A question that follows is then: how can the elements of the learner and the world state representation be structured to favour these kinds of behaviours developing? For instance, if a state can't be detected because of how the representation is structured, then the learner can't tell whether it has brought that state about and hence can't learn it. We want to avoid the agent treating all world states as the same (clustering to the same neuron), and we want the agent to be able to tell whether people are standing around the display. Detecting changes is of little use for the pad sensors here, since someone standing still causes no change and hence goes undetected.

  • Kathryn has removed the clustering layer from the learner, leaving only the habituation layer. The result was that more things were interesting, but the agent could no longer generalize about interesting things.

  • What if intrinsic motivation were used until someone steps on the pads, and then regular reinforcement learning were used with pad activation directly providing an extrinsic reward signal to the learner? Intrinsic motivation could be used by the agent while nobody is on the pads in order to produce interesting behaviours such as cycles and patterns, as in our lighting experiment.

  • Why choose intrinsically motivated learning over random behaviours to attract people to the display? Maybe people will react better to the motivated learner.

  • The agent doesn't currently understand the significance of adjacency or of the display's tree structure. Owen needs to come up with ideas for structuring observations such that the agent has more awareness of its situation whilst at the same time keeping the state space limited in size.

  • We need to finally get that ethics form in. Owen is now in charge of this. Kathryn is going to ask Robert for a copy of his one. The report should specify that we will abstract out all personal information, particularly anything identifying from Bluetooth devices.

26th April 2006

The last two meetings about the Curious Room Project have focused on taking stock of our Curious Information Display prototype and trying to work out how to improve it.

One key problem is that clustering similar observations together tends to make the presence of users disappear. For instance, an observed state like:


is so similar to an observed state like:


(where the difference between the two states is that a user has stepped on one of the pressure pads) that all observations of this type will tend to be clustered together and end up not being distant enough from that cluster's neuron to be interesting. This is undesirable because, from an application point of view, we want the agent to find humans paying attention to it or walking near it interesting.

Because we're using a full observed state, each individual state variable contributes only slightly to the position of the observation in the state space, meaning that the contribution of a pressure pad turning on or off is drowned out. This problem is related to another phenomenon: at each time step the agent makes only one change to its state, so most actions result in a state that is too similar to the previous state to be interesting. On the other hand, the agent finds collapsing nodes in the display interesting because doing so makes fairly big state changes. This means that the agent is keen on making the whole display into one big picture. That may not necessarily be a bad thing, because dramatic changes may make people curious about what's going on with the display. People failing to interest the agent is still a problem, though.

Considering events (state changes) rather than observations (full states) could help with this problem because it reduces the size of the vector being clustered and therefore gives more weight to the pad data. So too could removing the clustering layer in the motivation component and leaving just the habituation layer. The problem with the former, though, is that the agent loses the ability to judge whether people are standing around looking at the display (which we presumably want it to respond positively to), and the problem with the latter is that the agent loses the ability to generalize, which may result in too many things being interesting because no two states will ever be considered similar.

19th April 2006

I've installed a new wireless network card in SentientPC and have managed to get the pressure pads in the floor working (again). The code for inserting the pad data into the context database no longer hogs a connection but instead closes it after inserting an entry, which is good because I think the previous behaviour was causing crashes; it also brings it in line with how everything else works.

So now we have a nice clean(ish) deployment machine with:
  • Nice clean java code.

  • Messy and gross C code but less so than before.

  • A wireless network card connected to the airport and hence the projectors and wireless camera.

  • A VNC server.

  • An FTP server.

  • Three IDEs: Visual Studio for C, Eclipse for Java, and MaxMSP.

The monitor from Yohaan's old prototype desk is our monitor, but it has no stand. I've stolen the video splitter and attached it so that SentientPC now also displays using the rear projector. If people steal it back we might have to buy one exclusively for the project.

I imagine presentations in the Sentient running like so:
If Joe's laptop has access to Andy's airport then he connects wirelessly to the airport network and gets the VNC client either from a CD or from the VNC website (small download).
If Joe's laptop doesn't have access to Andy's airport he connects to the wired network through one of the spare network cables lying around, like Kathryn usually does.
If Joe's laptop doesn't have a network card he puts his presentation on a USB key and transfers it to ARCHIAG and uses VNC from there.
Using VNC and/or FTP Joe transfers his presentation to SentientPC. Through VNC Joe presses the sleep button on the curious display application and then launches Powerpoint or whathaveyou. Once he's done he wakes the application up again. The key point is that although he's controlling the presentation software through his own computer, the software is actually running on SentientPC via VNC, which is good because it combines the convenience of giving a presentation from your own laptop with giving us full control over which machines are allowed to display on the projector.
Kathryn and I have been cleaning up the code for the IE's sensors / effectors and consolidating the project onto a single computer. This computer will be SentientPC, which now lives at the back of The Sentient. As far as I can tell nobody else uses it, so we should have no problems with people randomly shutting stuff down and whatnot. I'm still of two minds about how we want to handle people doing presentations with the projector. They need to go through SentientPC, but it lives at the back of the room. I'm currently thinking that people can VNC into it to give their presentations. Of course, that will mean people will need to install VNC. But once they've done that they'll be able to dump whatever files they want there and then run Powerpoint or whathaveyou.

I'm happy with the current state of the code now that it's a bit cleaner. Errors are handled a bit more gracefully, there's a standardized way of doing things that runs through each of the managers/commanders, it's all better documented, and things just make more sense.

10th April 2006

illykai10:01am: State of the Room
Currently these are the things that Kathryn and I have prototyped for the Curious Room:

  1. Teleo Device Monitor connected to 14 pressure pads. Why 14? We ran out of wire at that point.

  2. Projector Device Monitor and Device Commander connected to the WT610 projector (ie: the middle one).

  3. Hierarchical information display that can show blocks of colour, fixed images, images retrieved by keyword searches from the net, and network camera streams.

  4. Room Booking Device Monitor connected to the Sentient Booking System database.

  5. C-Bus Device Monitor and Device Commander currently not connected to anything, but ready to go for when we get any C-Bus devices.

With 1, 2, and 3 we can complete our implementation of the Curious Information Display.

Other than 1, 2, 4, and 5 the only thing missing from the sensor setup that I speculated about in my honours thesis is a sensor and effector system for sensing and effecting programs running on the computers in the room. My current thinking is that Girder could be used to do this, with the added bonus of also potentially being able to control any IR remote control devices in the room. I haven't spent enough time looking to know if there's a better solution though.

Another promising direction that's just opened up is the integration of Bluetooth-related sensors now that Yohaan's BlipNodes have arrived in the country. At the most basic level the BlipNodes could sense the identity of room users, as long as they have Bluetooth devices on them, but other interesting options include using them as effectors for the room to push information out to users or communicate with them by sending messages, and possibly receive responses from them if we made the appropriate sensors. We could also use the BlipNodes to implement a reinforcement system for users to reward or punish the room. The BlipNodes can also apparently locate people in space, acting as a secondary location sensor system, which would be handy since the pressure pads don't cover the entirety of The Sentient.

Currently we're at a point where there are several different pieces of software on several different machines. We need to bring things together somewhat so that managing it all doesn't become a big hassle. Also, having done a bunch of implementation, I need to sit down and document what I've done, as well as consolidating and cleaning up code, making things a bit more modular and consistent. It would be really nice to have a way to get all the necessary bits of the room running with a single button press.

Another thing that needs looking at: since we're jumping onto Andy's Airport network, we're using DHCP to get IPs for the various wireless devices in the room (network cameras and projectors), which means they're at risk of changing without us knowing, leading to irritating rounds of checking and changing things. We have some choices: move to a wireless network on which we can set fixed IPs; work out a way to make fixed IPs and DHCP play well together (I don't know enough about DHCP to know if this can be done); or come up with a scripting solution that lets us automatically discover the IPs of those devices and any other wireless devices we want to add.

16th March 2006


I've finished wiring up the intro module and one of the multi-IO modules. At that point I ran out of wire and had used up all of the inputs on both modules. This diagram shows how things are addressed at the moment:

Row 1 and G2 are not connected because they are behind screens. H1 and I1 are in front of the labs door, J1 and K1 are in front of the offices door. The addressing starts from 2.0 and 4.0 because those are the default addresses for an intro module and a multi-IO module. In the near future I will change this so that it makes sense and update the graphic to reflect it. This graphic also gives a better impression of the layout because it presents the pads as rectangular rather than square.

illykai10:26am: I keep wondering where I put the pressure pad diagram. The answer is here:

13th March 2006


I've just finished reading an interesting honours thesis from Stanford called "Why people hate the paperclip: Labels, appearance, behaviour, and social responses to user interface agents." The thesis examines people's reactions to the office assistant from MS office. Here are the things that I think were particularly interesting:

  • People's perceptions of the office assistant depended on whether they thought the agent was supposed to be fun or informative. Those leaning towards fun were more tolerant of the agent.

  • Explicitly labeling an agent as fun and then backing that up with either a graphical representation of the agent or with some kind of fun behaviour like telling jokes correlates strongly with positive perceptions of the agent. On the other hand, labeling an agent as fun and then offering nothing to back that up correlates with dissatisfaction.

  • The study showed no significant difference in perception between a "fun" agent with a graphical representation and a "fun" agent with no graphical representation but that told jokes.

  • There is a theory, CASA (Computers as Social Actors), which claims that since people tend to think of computers in human terms, appropriate interfaces for computers will encourage this anthropomorphisation. There's a lot of literature on this issue, and it's a theory that agent-based UI advocates appeal to.

  • Explanations for the unpopularity of the office assistant include: an anthropomorphic agent gives users the impression that its capacities are greater than they actually are; delegation to an agent reduces people's feelings of control and self-reliance; the assistant breaks social etiquette rules by repeating mistakes, monitoring people's work too closely, and butting in; and the behaviour of the assistant is sometimes seen as patronising.

  • People often didn't have a good idea of what the assistant was meant to do or what triggered its actions, which frustrated people.

  • In empirical trials websites with interface agents were rated as more fun and easier to use than websites without them.

I think some of these findings bear on our project. We have positioned ourselves as claiming that our agent technology can make the environment more fun and interesting, and fun seems to be something that agent technology is generally well suited for. On the other hand, the absence of an embodied agent (ie: no explicit representation) may work against us here: the study found that claims to fun-ness were undermined by the lack of a representation or explicitly "fun" behaviour like joke telling. If we're going to focus on fun or entertainment, then the virtual world / curious critters application, which makes room for anthropomorphic characters, will probably work better than applications without them.

10th March 2006


Catching up on notes from our last meeting:

  • We're going with the Curious Pictures idea (now Curious Digital Media Gallery). It will take people's locations as an input. The pictures that it shows will be relevant to the KCDC's interests but not representational of the sensor input. The length of time that the pads in the floor are activated may be an input. Hopefully the low complexity of the project should keep the development time down. We'll try using motivated reinforcement learning for now, but try to keep the learning plug and play. The images will be pulled from Google via keyword searches. Actions could include the specification of tiled, stretched, or moving images. The room ought not to try to get people's attention when there's nothing going on in the room, because most likely nobody is there.

  • We're also looking into virtual worlds mirroring physical worlds. Learned behaviours in the virtual world could transfer to the physical by some approval process. Both meeting room and critter scenarios could be developed in this framework.

  • The Teleo devices, network cameras, and a subset of the Clipsal devices (specifically the dimmer, WAP, switches, and network interface) should be ordered. Yohaan will order the BlipNodes.

  • A software solution to control multiple displays will be needed. The projectors will need to be woken up and put on standby. Possibly a computer-programmable remote control would be worth buying.

  • Owen needs to work out why the rear projector isn't working at the moment.

2nd March 2006


I've installed Visual Studio .NET on SentientPC in The Sentient. It's now connected to the pressure pads. It took me an irritating amount of time to transfer the Visual C project across from Tang to the new machine and get it working. Here are some tips for next time:

Visual C will tell you that the old Teleo project is out of date. You'll have to update it, which will irritatingly strip away a bunch of project setup stuff.

Get mysqlclient.lib from the MySQL website. You will need to compile the source of MySQL in order to get the right file, but this just involves running a batch file.

Get zlib.lib from somewhere, preferably the zlib website. I don't remember having this problem last time. This time round I stole a static version of the library from some other program that was already installed; probably not the safest idea since it's not the most recent version.

Get the Windows SDK from somewhere. Hopefully it was already installed along with Visual Studio. The libraries you want from it are ws2_32.lib or wsock32.lib.

Make sure all these libraries are somewhere the linker can find them. Make sure that the compiler's include path points to the Teleo header files you need, too.

Use multi-threaded debug mode, because that's what MySQL uses, and ignore libcmt.lib, because it will cause conflicts with the multithreaded includes that are already there from MySQL's source code.

I've made a picture to summarize what all this should look like so there's less hair-tearing frustration next time:

28th February 2006

illykai4:23pm: I'm looking into the possibilities for controlling the projectors and have discovered:

  • It's not straightforward. You can't turn a projector off through your monitor cable.

  • Turning projectors off at the powerpoint without turning them off at their main switch can damage them. Therefore controlling them through a C-Bus relay is a decidedly bad idea.

There are two ways to turn the projector on using a computer. They are:

  1. Using NEC's PC Control Utility to turn the projector on or off once it's connected to a network. The catch: You can't control PC Control Utility itself via software. You have to be sitting in front of the PC running it and select "Turn projector on" from a menu.

  2. Using the built in web page that our projectors have which lets you control them. The question then is how would the agent use the web page?

One way to go about things would be to sneakily use the first option by reverse engineering PC Control Utility by sniffing the packets that it sends and then mimicking them with our own program. This would be hardcore and fun IMO. There is also what looks like a list of byte codes for controlling the projector included in the manual, but I'm not 100% sure how to use them.

As far as I know it would not be possible to fake a message from the web-based controller, but I haven't seen the source of the page so I'm not sure. I'll take in my laptop tomorrow to try it. I suspect that it's not really an option though.

A third, more general approach would be to go around the problem and give the agent an IR effector, which would let it act as if it had a remote control and could therefore turn the projector on and off. This would have the side effect of letting it control any IR device that we wanted to install in the room in the future. This option also seems to be relatively cheap.

Here's the story: Promixis make a product called Girder. Girder is an automation package that allows IR devices to control your PC and also allows your PC to control IR devices. Using a device connected to the PC by either CAT-5 or USB, Girder can send IR signals to a projector. Girder is pretty special in that you can write Lua scripts that allow it to receive TCP/IP connections requesting that it activate IR devices, or alternatively you can tell it directly to send a given command via command line arguments. That would make it pretty awesome for us, since both control methods are software-only. I estimate that getting Girder set up would cost us around $500.

In short: I am going to try using my laptop to run the control software and check out the webpage to see if I spot anything that could be automated. Failing that, I will try to write a simple program that opens a TCP/IP connection to the projector and sends it what I suspect are the control codes to turn it on and off. Failing that, I suggest we fall back on using Girder to control the projector via remote.

17th February 2006

illykai10:54am: Here is a list of the C-Bus stuff I'm asking around about with the actual product codes:


Saturn 250VA Dimmer Switch 5886D1L2AA x2
Wireless Gateway 5800WCGA
12 Channel Relay with C-Bus Power Supply L5512RVF
4 Channel Dimmer with Power Supply L5504D2A
Network Interface 5500CN
Occupancy Sensor 5751L x2
General Input Unit 5504GI (light level sensors not available so we'll have to buy one and attach it)

Then there's some other stuff which might be handy but is probably non-essential:

Hand Held Remote 5888TXBA
Plug Adaptor, 3A 5812D3L1AA (Wireless Leading Edge Dimmer)
Plug Adaptor, 10A 5812R10F1AA (Wireless Relay)

16th February 2006

illykai11:15am: Assuming that we have to buy everything (all costs in $US):

Teleo Introductory Module x1 = $159.00
Teleo Analog Input Module x7 = $149.00 x7

Total cost of Teleo Modules = $1202 + P&H

Assuming we can keep the setup we have now:

Teleo Analog Input Module x5 = $149.00 x5

Total cost of Teleo Modules = $745 + P&H

Assuming that Petra is right and we have 5 Multi-IO modules spare that we can use permanently:

Teleo Analog Input Module x2 = $149.00 x2

Total cost of Teleo Modules = $298.00 + P&H

I keep missing Andy, so I don't know whether Petra was right about the number of spare modules.

We also still need to solve the cabling problem. I want to try to pin down Phil to discuss this but he's very busy atm.

15th February 2006

illykai2:46pm: I've made an equivalent diagram for the Teleo Sensors:


I have sent off emails to seven different electrical wholesalers to get prices for the Clipsal stuff that we want to purchase. It seems that wholesalers deliberately make it hard to find the product you want to buy, hiding their catalogues behind badly integrated e-commerce systems that require you to print out forms from the net and mail them in by snail mail.

I've got a diagram of how I imagine the Clipsal devices fitting together. I'll do a write up soon.

On the room booking website front, a development version can be seen here. Currently the deletion system doesn't work and will possibly require some reverse engineering to fix, but I'm now much happier with the layout and the rest of the functionality.

8th February 2006


Here is a list of what I would like to buy for the curious room project:

C-Bus Wireless Wall Plate (Saturn Model) x2
C-Bus Wireless Gateway
C-Bus DIN 4 Channel Dimmer 2A Per Channel With Power x2
C-Bus Indoor PIR Movement Sensor x2
C-Bus Network Interface
C-Bus Light Level Sensor x2

Then there's some other stuff which might be handy but is probably non-essential:

C-Bus Wireless Plug Adapters x2
C-Bus Wireless Remote Control
C-Bus DIN Rail 4 Channel Voltage Free Relay With Power

And as far as Teleo modules are concerned, if we wanted to wire up the whole floor without using any of the modules that we currently have so that they can be used for classes and whatnot it would take:

Teleo Intro Module
Teleo Analog Input Module x7

3rd February 2006

illykai1:40pm: Here's an updated version of the room layout image showing the cameras and a bluetooth beacon.


Some notes that I took on yesterday's meeting:

  • We need to make a web-based system for booking The Sentient. I put my hand up for that.

  • The various learning models for the agents need a write-up stating our understanding of the implications of choosing any one model over another. Kathryn is doing this.

  • Cameras and BlipNodes need to be added to the picture of the room. I'm fixing that.

  • Mary was concerned that our fairly cavalier "whack it in the database" approach to storing sensor data could prove to be a problem in the future. Other groups have invested a fair bit of time and thought in structuring their sensor data. I need to look into what temporal databases and OO databases could potentially offer us. Kathryn pointed out that in our system the burden for structuring the data falls on the consumer of the data.

  • Mary was uncomfortable with the database polling scheme in our system architecture. I'm going to do some quick investigation to find whether an SQL database can automatically notify clients when it's updated to avoid polling. Kathryn pointed out that Damian says polling is still an acceptable approach in this modern day and helps with the decoupling of subsystems in our supersystem.

  • A document needs to be created that records our current thinking on the various parts of the project and justifies the decisions that we have so far made. The learning models and architecture decision would be included in this. I can start this after the booking system.

  • Kathryn reported that the agent found playing with lights boring because all the actions involved were too similar, but patterns of behaviour did emerge due to the idiosyncrasies of the motivated learning algorithm.

  • I need to talk to Andy and Mohammad to find out where the spare Teleo sensors are.

  • I need to look into the possibility of wirelessly controlling Teleo devices so that we don't have to run Cat5 cables all over The Sentient.

  • I need to make a wish list of Clipsal devices to order now that we have a better idea of what we want.

  • I need to check whether I can get any of the old CRC computers to work with the Teleo modules.

  • Either Kathryn or I need to turn her diagrammatic sketch of the sensor subsystems into a proper diagram.

  • Mary says that we should aim to have all sensor data types that we have access to reporting into the database by the end of February.

  • I said yes, we should buy BlipNet products because they do pretty much everything that we could ever want to do with Bluetooth; the only worry is that each BlipNode apparently only supports 7 connections at once.

  • Mary is going to organize a meeting with the various people who have a stake in The Sentient to discuss what we're planning on doing with it, coordinate resources, and make sure we're not trampling on toes.

  • Mary would like an ongoing development of potential scenarios for what the curious room might do. Some starters: 1) it might assist in the use of devices 2) it might provide a visualisation of useful information 3) it might provide a novel HCI method 4) it might be a remote collaboration tool 5) it might control a (probably virtual) critter that "lives" within it 6) it might provide a visualisation that is more like a window onto the soul of the agent than targeted "useful" information.

  • Mary pointed out that we should not restrict ourselves to sensing only things that are directly contained within The Sentient, but should also consider things that people in the room might want to know about, like network traffic or printer queues.

2nd February 2006


I've read through several documents on Philips' ongoing projects. See ambient intelligence (plus more), HomeLab, and the PHENOM (Perceptive Home ENvirOnMents) project (plus more). In a nutshell, here's what's going on:

Ambient Intelligence is basically another term for Ubiquitous Computing. It covers augmenting regular everyday objects with embedded computing power, including intelligent rooms, and it also covers more natural HCI with speech and gesture recognition plus computer vision. It takes the hidden computing and natural interfaces ideas of UC and throws in some AI for personalization, natural language processing, computer vision, and whatnot. Pattie Maes also runs an ambient intelligence research group at MIT's Media Lab. Philips is a partner in MIT's Project Oxygen, which was formerly the Intelligent Room Project. HomeLab is an HCI laboratory and technology showcase that Philips is using to do its research.

Philips' vision for PHENOM, their IE project, is that it should be as if a butler were embedded in the house. This certainly suggests an agent metaphor. For some reason, though, their first application was more of a peripheral one for browsing user photos, but it included some core ideas from other IE groups, like display contexts that move with the user (so you can choose to show the photos on whatever screen happens to be nearby), and it also uses RFID tags for identifying people, objects, and their locations to help build a context model.

Examples of projects that Philips is undertaking include:
  • Sepia: A system using touch screens, RFID tags, and mobile devices to manage photo albums.

  • Spaces: Virtual places that allow people to stay in touch with each other and share files, photos, etc. They would be accessible through devices around the home as well as via mobile devices and over the internet.

  • Using projected / displayed silhouettes to give the impression of co-presence with remote people when watching TV.

  • Living Light: An ambient coloured lighting system that responds to pre-programmed cues in film and music.

Philips position themselves in an interesting way by referencing a controversial idea in economics: The Experience Economy. The basic idea is that if you are participating in the experience economy you are trying to sell people a way of feeling, rather than a physical product or a service; Disneyland would be a typical example. This idea could be stretched to cover our own work by saying that the value of the curious room resides in the experience of interacting with it. I think there might be something in this, but the idea isn't well developed. Detractors of The Experience Economy argue that it flies in the face of standard economic assumptions, which hold that economic success is driven by making more efficient use of scarce resources, and excitement over the idea has dulled somewhat in the wake of the tech bubble popping in the early noughties.

1st February 2006

illykai9:24am: Phil Granger sent me a picture showing the layout of the pressure pads beneath the carpet of The Sentient:

He also sent me an Excel spreadsheet giving the addresses of the various pads in the layout. This explains why the A row of pads does not start from 1. I have no idea where rows H through K are. Hopefully my meeting with him today will make that clear.

31st January 2006


Okay so I've just finished coding a relatively simple program that writes out the data coming in to it from the pressure pads in the floor of The Sentient straight into a table in our MySQL database. There were some things that were tricky about this:

The program is based on the TeleoModuleManager example code that's provided with Teleo's C API, which I linked in a previous post. Kathryn and I were having a lot of trouble with it until we realised that it takes a command line argument specifying which connection it's using; in our case this is COM6. Attaching the USB connector that runs between the Teleo modules and Tang to a different USB port from its current one may cause the Teleo modules to become undetectable to Tang, though this is untested. The reason is that the USB-to-serial driver that powers the connector is buggy and sometimes only works for the USB port to which the device was attached when the driver was installed.

Once the TeleoModuleManager was running I had difficulties getting the MySQL libraries recognised properly by Visual C. There were two issues here. First, there was some conflict between libraries, possibly to do with the inclusion of multi-threaded and single-threaded runtime libraries in the same project; the result was a whole lot of complaints about redefined function names. Setting the offending library to be ignored by the project's linker made these errors disappear, but I don't know enough about C to know if doing this will shoot us in the foot in the longer term. Second, one needs to make sure that the various libraries are linked properly, specifically the winsock and mysqlclient libraries. Winsock is needed for handling the sockets that MySQL uses to connect to the database; mysqlclient handles all of the MySQL logic.
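As a rough config sketch, a command-line equivalent of those project settings might look like the following. The library names, include/lib paths, and the specific runtime library being ignored are all assumptions that vary by MySQL version and project configuration; this is not the actual build line.

```shell
REM Hypothetical cl.exe invocation mirroring the Visual C settings above:
REM link mysqlclient and winsock, and tell the linker to ignore the
REM conflicting C runtime library that caused the redefinition errors.
cl padlogger.c /I"C:\mysql\include" ^
   /link /LIBPATH:"C:\mysql\lib" mysqlclient.lib ws2_32.lib ^
   /NODEFAULTLIB:libcmtd.lib
```

The `/NODEFAULTLIB` switch is the command-line form of "setting the offending library to be ignored" in the project's linker settings.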

String manipulation is really irritating in C, but I discovered sprintf(), which is a string manipulator's best friend. There is, afaik, no simple way to convert floats to strings other than using this function. The only problem is that it's dangerous: there are no bounds checks, so the string could overflow out of the allocated memory and cause craziness. If the format of our data changes this could cause strange errors, but for now this is a prototype and it should be okay.

I anticipate that the pressure pads will generate a LOT of data very quickly just from our experiments with wandering around on the floor. Exactly how much a LOT is remains to be seen. It may become a problem when the rest of the floor sensors are hooked up.

The last thing is that it seems like I can't have the program running in the background on Tang because Andy has disabled the option for user switching, so if someone else wanted to log on they'd have to log me off, and that would stop the program from running. We could have a dedicated machine for running the program, though.

27th January 2006

illykai1:15pm: Teleo have recently released an ANSI C SDK for their modules. This is kinda neat because it means we wouldn't have to stuff around with Max/MSP. The question is whether I can actually get it to work...

25th January 2006

illykai4:21pm: Combining the layer diagram and the room diagram as in Kathryn's original, here is the result:

illykai2:05pm: Kathryn created some system architecture diagrams that will be useful for directing development and also for publications. The originals were in WordArt format, so I have turned them into JPGs using Adobe Illustrator. I'm not all that good with Illustrator, so it took quite a while and the results actually look somewhat worse than the originals did.

First we have agent models.

Reinforcement learning agents and motivated reinforcement learning agents.

Supervised learning agents and motivated supervised learning agents.

Next we have a diagram of the sensors and effectors in the curious room:

An outline of the system structure as it stands at the moment:

And finally a diagram of the layers of the system architecture.

illykai1:51pm: Friday's meeting with Mary didn't follow a strict agenda, but the essential parts were:
  • Kathryn has joined the team. Her work on motivated agents will be brought to bear on the project by developing a motivated agent that can sense and affect the world using the prototype system that we develop and through considering the role of motivated agents in intelligent rooms in a broader context.

  • Kathryn and Owen's contracts had problems that needed to be worked out with personnel.

  • A prototype system has been made which sends commands to a Clipsal dimmer and senses activity on the Clipsal system, storing the sensed information in a MySQL database.

  • We need to develop an understanding of the conceptual framework within which the curious room would rest. It is not device integration, it is not a room OS, it is not necessarily an agent that makes using a conference room easier. What is it? Owen suggested looking at environments whose novelty is their actual value, such as Videoplace. Mary had earlier suggested that the curious room could be an entirely new kind of environment.

  • Johaan and several others interested in Bluetooth technology are going to be working along with us on implementing Bluetooth sensors.

  • Putting off creating a diagram of the system is fine until we have decided more firmly on what it will be like.