Search results for: brain - computer interface (BCI)

32 Evaluating Complexity – Ethical Challenges in Computational Design Processes

Authors: J. Partanen

Abstract:

Complexity, as a theoretical background, has made it easier to understand and explain the features and dynamic behavior of various complex systems. As this common theoretical background has confirmed, borrowing terminology for design from the natural sciences has helped to control and understand urban complexity. Phenomena like self-organization, evolution and adaptation are appropriate for describing the formerly inaccessible characteristics of the complex environment in unpredictable bottom-up systems. Increased computing capacity has been a key element in capturing the chaotic nature of these systems. A paradigm shift in urban planning and architectural design has forced us to give up the illusion of total control over the urban environment and, consequently, to seek novel methods for steering its development. New methods using dynamic modeling have offered a real option for a more thorough understanding of complexity and urban processes. At best, new approaches may renew design processes so that we get a better grip on the complex world via more flexible processes, support urban environmental diversity and respond to our needs beyond basic welfare by liberating ourselves from standardized minimalism. A complex system and its features are as such beyond human ethics. Self-organization or evolution is neither good nor bad; their mechanisms are by nature devoid of reason. They are common in urban dynamics as well as in natural processes and gases. They are features of a complex system, and they cannot be prevented, yet their dynamics can be studied and supported. The paradigm of complexity and the new design approaches have been criticized for a lack of humanity and morality, but the ethical implications of scientific or computational design processes have not been much discussed. It is important to distinguish the (unexciting) ethics of the theory and tools from the ethics of computer-aided processes based on ethical decisions. Urban planning and architecture cannot be based on the survival of the fittest; however, the natural dynamics of the system cannot be impeded on the grounds of being "non-human". In this paper, the ethical challenges of using dynamic models are contemplated in light of a few examples of new architecture, dynamic urban models and the literature. It is suggested that ethical challenges in computational design processes could be reframed under the concepts of responsibility and transparency.

Keywords: urban planning, architecture, dynamic modeling, ethics, complexity theory.

31 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

The article presents the results of a project which aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a methodology of processing and interpretation that follows world trends but is adjusted to the data, local variations and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For prospective locations (e.g., the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data have been carried out with the use of integrated geophysical investigations. Construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modeling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results of the research indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: Autochthonous Miocene, Carpathian Foredeep, Poland, shale gas.

30 Body Mass Index and Dietary Habits among Nursing College Students Living in the University Residence in Kirkuk City, Iraq

Authors: Jenan Shakoor

Abstract:

Obesity prevalence is increasing worldwide. University life is a challenging period, especially for students who have to leave their familiar surroundings and settle in a new environment. The current study aimed to assess diet and exercise habits and their association with body mass index (BMI) among nursing college students living at the Kirkuk University residence. This was a descriptive study. A non-probability (purposive) sample of 101 students living in the Kirkuk University residence was recruited during the period from 15 November 2015 to 5 May 2016. A questionnaire was constructed for the purpose of the study, consisting of four parts: the demographic characteristics of the study sample, eating habits, eating at college, and healthy habits. The data were collected by interviewing the study sample, and weight and height were measured by a trained researcher at the college. Descriptive statistical analysis was undertaken. Data were prepared, organized and entered into a computer file; the Statistical Package for the Social Sciences (SPSS 20) was used for data analysis. A p value ≤ 0.05 was accepted as statistically significant. A total of 63 (62.4%) of the sample were aged 20-21, with a mean age of 22.1 (SD ± 0.653). Just over a third of the sample, 38 (37.6%), were from level four at college, 67 (66.3%) were female, and 46 (45.5%) of participants were of middle socio-economic status. Fourteen (13.9%) of the study sample were overweight (BMI = 25-29.9 kg/m2) and 6 (5.9%) were obese (BMI ≥ 30 kg/m2), while 73 (72.3%) were of normal weight (BMI = 18.5-24.9 kg/m2). With regard to eating habits and exercise, 42 (41.6%) of the students rarely ate breakfast, 79 (78.2%) ate lunch at the university residence, 77 (78.2%) of the students reported rarely doing exercise, and 62 (61.4%) of them slept for less than eight hours. No significant association was found between BMI and the variables age, sex, college level and socio-economic status, while there was a significant association between eating lunch at the university and BMI (p = 0.03). No significant association was found between eating habits, healthy habits and BMI. The prevalence of overweight and obesity among the study sample was 19.8%, with female students being more obese than males. Further studies are needed to identify BMI among residence students in other colleges and to increase the awareness of undergraduate students of healthy food habits.
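
As a minimal illustration of the BMI categorisation quoted above (BMI is weight in kilograms divided by the square of height in metres, with the cut-offs used in the abstract), the following Python sketch is a hypothetical helper and not part of the study's SPSS analysis:

```python
def bmi_category(weight_kg: float, height_m: float):
    """Classify body mass index using the cut-offs quoted in the abstract."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "underweight"
    elif bmi < 25.0:
        label = "normal weight"      # 18.5-24.9 kg/m2
    elif bmi < 30.0:
        label = "overweight"         # 25-29.9 kg/m2
    else:
        label = "obese"              # >= 30 kg/m2
    return round(bmi, 1), label

# Example: a hypothetical 70 kg student who is 1.65 m tall
print(bmi_category(70, 1.65))        # (25.7, 'overweight')
```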

Keywords: Body mass index, diet, obesity, university residence.

29 Application of Transportation Models for Analysing Future Intercity and Intracity Travel Patterns in Kuwait

Authors: Srikanth Pandurangi, Basheer Mohammed, Nezar Al Sayegh

Abstract:

In order to meet the increasing demand for housing care for Kuwaiti citizens, the government authorities in Kuwait are undertaking a series of projects in the form of new large cities outside the current urban area. Al Mutlaa City, located to the north-west of the Kuwait Metropolitan Area, is one such project out of the 15 planned new cities. The city accommodates a wide variety of residential developments, employment opportunities, and commercial, recreational, health care and institutional uses. This paper examines the application of comprehensive transportation demand modeling work undertaken in the VISUM platform to understand future intracity and intercity travel distribution patterns in Kuwait. The models developed varied in level of detail: a strategic model update, sub-area models representing the future demand of Al Mutlaa City, and sub-area models built to estimate the demand in the residential neighborhoods of the city. This paper aims at offering a model update framework that facilitates easy integration between sub-area models and strategic national models for unified traffic forecasts. It presents the transportation demand modeling results utilized in informing the planning of the multi-modal transportation system for Al Mutlaa City. It also presents the household survey data collection efforts undertaken using GPS devices (for the first time in Kuwait) and notebook-computer-based digital survey forms for interviewing a representative sample of citizens and residents. The survey results formed the basis for estimating the trip generation rates and trip distribution coefficients used in the strategic base-year model calibration and validation process.
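
The abstract does not reproduce the model equations, and VISUM has its own demand procedures; purely as an illustration of the kind of trip distribution step that such trip generation rates and distribution coefficients feed, here is a minimal doubly constrained gravity model in Python. The exponential deterrence function, the value of beta and all zone numbers are invented for the sketch.

```python
import numpy as np

def gravity_model(origins, destinations, cost, beta=0.1, iters=50):
    """Doubly constrained gravity model balanced with the Furness procedure.

    origins      trips produced by each zone (1-D array)
    destinations trips attracted by each zone (1-D array, same total as origins)
    cost         zone-to-zone travel cost or time matrix
    beta         deterrence parameter of exp(-beta * cost) (assumed functional form)
    """
    trips = np.exp(-beta * np.asarray(cost, dtype=float))      # seed matrix
    for _ in range(iters):
        trips *= (origins / trips.sum(axis=1))[:, None]        # match production totals
        trips *= (destinations / trips.sum(axis=0))[None, :]   # match attraction totals
    return trips

# Toy three-zone example; every number is hypothetical
O = np.array([1000.0, 600.0, 400.0])
D = np.array([800.0, 700.0, 500.0])
C = np.array([[5.0, 15.0, 25.0],
              [15.0, 5.0, 10.0],
              [25.0, 10.0, 5.0]])
print(gravity_model(O, D, C).round(0))
```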

Keywords: GPS based household surveys, transportation infrastructure, origin-destination trip matrices, traffic forecasts, transportation demand modeling, travel behavior patterns.

28 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in the digital investigation are not keeping up with criminal developments; therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
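
The abstract mentions an integrity check on the Lone Wolf image file; as a generic illustration of the kind of hash verification such a Hash Set Agent performs (this is not the authors' code, and the file name and digest below are placeholders), a short Python sketch:

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a (possibly large) evidence file and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path: str, expected_sha256: str) -> bool:
    """Integrity check: the acquired image must match its recorded digest."""
    return file_sha256(path) == expected_sha256

# Hypothetical usage; the path and digest are placeholders, not from the paper.
# print(verify_image("LoneWolf.E01", "<recorded acquisition digest>"))
```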

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

27 Education and Assessment of Civil Employees in e-Government: The Case of a Moodle Based Platform

Authors: Stamatios A. Theocharis, George A. Tsihrintzis

Abstract:

One of the most important factors for the success of e-government is training and preparing the workforce of the public sector. As changes and innovation in the public sector progress at a very slow pace, and more slowly than in the private sector, issues related to human resources require special care. This is because the workforce will eventually seize the opportunities offered by the technological solutions used in e-Government. Thus, the central administration should provide employees with continuous and focused training, not only on new technologies but also on a wide range of subjects, and should also improve interdepartmental interaction.

To achieve all this, new methods and training tools need to be implemented, in addition to assessment of the employees. In this spirit, we propose the development of an educational platform with user personalization features, using Moodle as the basic tool. Incorporating a personalization mechanism is very important, since different employees have different backgrounds, education levels, computer skills, or different capabilities to develop further. Key features of the proposed platform include, besides typical e-learning tools, communities organized in order to exchange experiences and knowledge, groups of users based on certain criteria, automatic evaluation of users, and potential self-education and self-assessment. In its fully developed form, this platform can be part of a more comprehensive knowledge management system for the public sector.

Keywords: e-Government, civil employees education, education technologies.

26 Nonlinear Transformation of Laser Generated Ultrasonic Pulses in Geomaterials

Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas

Abstract:

The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the apparatus "GEOSCAN-02M". Ultrasonic pulses are excited by the pulses of a Q-switched Nd:YAG laser with a time duration of 10 ns and an energy of 260 mJ. This energy can be reduced to 20 mJ by light filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a time duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses passed through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression and a rarefaction phase. The amplitude of the rarefaction phase is five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through an area of the specimen without cracks, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, some horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from the structural defects. Using computer processing of these signals, images of the cross-sections of the specimens with cracks are obtained. As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through the specimen with horizontal cracks results in a 2.5-fold decrease in the amplitude of the rarefaction phase and a 2.1-fold increase in its duration. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials.

Keywords: Cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock.

25 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines

Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder

Abstract:

One of the main aims of current social robotics research is to improve robots' abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Similarly to how humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions, including facial expression, speech, gesture or text, and of using this information for improved human-computer interaction. This can be described as Affective Computing, an interdisciplinary field that expands into otherwise unrelated fields like psychology and cognitive science and involves the research and development of systems that can recognize and interpret human affects. Leveraging these emotional capabilities by embedding them in humanoid robots is the foundation of the concept of Affective Robots, which has the objective of making robots capable of sensing the user's current mood and personality traits and of adapting their behavior in the most appropriate manner based on them. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on the facial expressions for the so-called basic emotions, and compared against other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments' results show that the detection accuracy amongst the evaluated approaches differs substantially. The introduced experiments offer a general structure and approach for conducting such experimental evaluations. The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.

Keywords: Affective computing, emotion recognition, humanoid robot, Human-Robot-Interaction (HRI), social robots.

24 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structure, bearings, and motors, called a gimbal, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted to the gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, a versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform, and the target information, such as location or velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and the alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms for flying vehicles; however, few researchers have focused on ship-borne EO stabilized platforms for surface ships. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship. In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present the problem definition showing the above-mentioned limitation. Finally, we show the effectiveness of the approach through computer simulations.
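
The paper's method fuses measurements with a UKF; purely as an illustration of the underlying geometry (platform position, gimbal azimuth and elevation, and LRF slant range combined into a target position), here is a simplified Python sketch. The East-North-Up frame, the sign conventions and the example numbers are assumptions, not values taken from the paper.

```python
import numpy as np

def localize_target(platform_enu, azimuth_deg, elevation_deg, slant_range_m):
    """Project the gimbal line of sight out to the LRF slant range.

    platform_enu  : (east, north, up) position of the EO platform [m]
    azimuth_deg   : line-of-sight azimuth, clockwise from north [deg]
    elevation_deg : line-of-sight elevation above the horizontal [deg]
                    (negative when looking down from the mast toward the sea)
    Returns the target position in the same East-North-Up frame.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    los = np.array([np.cos(el) * np.sin(az),    # east component
                    np.cos(el) * np.cos(az),    # north component
                    np.sin(el)])                # up component
    return np.asarray(platform_enu, dtype=float) + slant_range_m * los

# Hypothetical example: mast 20 m above the sea, target ~5 km away on a bearing of 45 deg
mast = (0.0, 0.0, 20.0)
print(localize_target(mast, 45.0, -0.23, 5000.0).round(1))
```

In practice, the noisy azimuth, elevation and range measurements would be fed to a filter such as the UKF described in the paper rather than used in a single deterministic projection.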

Keywords: Target localization, ship-borne electro-optical stabilized platform, unscented Kalman filter.

23 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control

Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak

Abstract:

Accounting for 40% of total world energy consumption, building systems are developing into technically complex, large energy consumers suitable for the application of sophisticated power management approaches that can greatly increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling, managed by economically optimal model predictive control, shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modeled with the corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior and comfort demands are all taken into account for deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control, where the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable more complex building subsystems to be taken into consideration in order to further increase energy efficiency.
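
The paper does not publish its building model or price data; the sketch below is only a toy version of the price-optimal climate control idea: a first-order zone temperature model, a comfort band, and heating energy scheduled against a varying price over a short horizon. The dynamics coefficients, prices, temperatures and the use of the cvxpy library are all assumptions made for the illustration.

```python
import cvxpy as cp
import numpy as np

# Hypothetical 12-step horizon: hourly electricity prices [EUR/kWh] and outdoor temps [degC]
N = 12
price = np.array([60, 55, 50, 45, 40, 42, 55, 70, 90, 95, 80, 65]) / 1000.0
t_out = np.linspace(2.0, 8.0, N)

# Assumed first-order zone model: T[k+1] = a*T[k] + (1 - a)*t_out[k] + b*u[k]
a, b = 0.9, 1.0          # thermal retention and heater gain [degC per kWh], both invented
t0 = 20.0                # current zone temperature [degC]
u_max = 3.0              # heating energy limit per step [kWh]

u = cp.Variable(N, nonneg=True)   # heating energy applied in each step
T = cp.Variable(N + 1)            # predicted zone temperature trajectory

constraints = [T[0] == t0, u <= u_max]
for k in range(N):
    constraints.append(T[k + 1] == a * T[k] + (1 - a) * t_out[k] + b * u[k])
constraints += [T[1:] >= 20.0, T[1:] <= 23.0]     # comfort band

problem = cp.Problem(cp.Minimize(price @ u), constraints)
problem.solve()
print("energy cost [EUR]:", round(problem.value, 2))
print("heating schedule [kWh]:", np.round(u.value, 2))
```

In a receding-horizon implementation, only the first step of the computed schedule would be applied before the problem is re-solved with updated measurements and forecasts.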

Keywords: Energy-efficient buildings, Hierarchical model predictive control, Microgrid power flow optimization, Price-optimal building climate control.

22 Study of Variation of Winds Behavior on Micro Urban Environment with Use of Fuzzy Logic for Wind Power Generation: Case Study in the Cities of Arraial do Cabo and São Pedro da Aldeia, State of Rio de Janeiro, Brazil

Authors: Roberto Rosenhaim, Marcos Antonio Crus Moreira, Robson da Cunha, Gerson Gomes Cunha

Abstract:

This work provides details on wind speed behavior within the cities of Arraial do Cabo and São Pedro da Aldeia, located in the Lakes Region of the State of Rio de Janeiro, Brazil. This region has one of the best potentials for wind power generation. In the interurban layer, wind conditions are very complex and depend on the physical geography, the size and orientation of surrounding buildings and constructions, population density, and land use. In the same context, the fundamental surface parameter that governs the production of flow turbulence in urban canyons is the surface roughness. Such factors can influence the potential for power generation from the wind within the cities. Moreover, wind on a small scale is not fully utilized due to the complexity of measuring wind flow inside cities, and it is difficult to accurately predict this type of resource. This study demonstrates how fuzzy logic can facilitate the assessment of the complexity of the wind potential inside the cities. It presents a decision support tool and its ability to deal with inaccurate information using linguistic variables created by a heuristic method. It relies on previously published studies about the variables that influence wind speed in the urban environment. These variables were turned into verbal expressions used in the computer system, which facilitated the establishment of rules for fuzzy inference and integration with a smartphone application used in the research. In the first part of the study, the challenges of sustainable development are described, followed by incentive policies for the use of renewable energy in Brazil. The next chapter covers the characteristics of the study area and the concepts of fuzzy logic. Data were collected in a field experiment using qualitative and quantitative assessment methods. As a result, a map of the various points within the cities studied is presented, with their wind viability evaluated by a decision support system using multivariate classification based on fuzzy logic.
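
The paper's actual linguistic variables and rule base are not reproduced in the abstract; the following Python sketch only illustrates the general Mamdani-style mechanics (triangular memberships, min-max inference, centroid defuzzification) with two invented inputs, mean wind speed and surface roughness, mapped to a hypothetical "wind viability" score.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a < b < c."""
    x = np.asarray(x, dtype=float)
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: hypothetical "wind viability" score from 0 to 10
v = np.linspace(0.0, 10.0, 201)
viab_low, viab_high = tri(v, 0.0, 2.5, 5.0), tri(v, 5.0, 7.5, 10.0)

def wind_viability(speed_ms, roughness):
    """Two illustrative rules, min-max (Mamdani) inference, centroid defuzzification."""
    speed_low  = float(tri(speed_ms, 0.0, 3.0, 6.0))
    speed_high = float(tri(speed_ms, 4.0, 8.0, 12.0))
    rough_low  = float(tri(roughness, 0.0, 0.25, 1.0))
    rough_high = float(tri(roughness, 0.5, 1.25, 2.0))

    act_high = min(speed_high, rough_low)   # IF speed is high AND roughness is low
    act_low  = max(speed_low, rough_high)   # IF speed is low OR roughness is high

    aggregated = np.maximum(np.minimum(viab_high, act_high),
                            np.minimum(viab_low, act_low))
    return float(np.sum(v * aggregated) / (np.sum(aggregated) + 1e-12))

print(round(wind_viability(speed_ms=8.0, roughness=0.3), 2))  # a favourable site
print(round(wind_viability(speed_ms=3.0, roughness=1.5), 2))  # a sheltered urban canyon
```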

Keywords: Behavior of winds, wind power, fuzzy logic, sustainable development.

21 Development of Moving Multifocal Electroretinogram with a Precise Perimetry Apparatus

Authors: Naoto Suzuki

Abstract:

A decline in visual sensitivity at arbitrary points on the retina can be measured using a precise perimetry apparatus together with a fundus camera. However, the retinal layer associated with this decline cannot be identified accurately with current medical technology. To investigate cryptogenic diseases such as macular dystrophy, acute zonal occult outer retinopathy (AZOOR), and multiple evanescent white dot syndrome (MEWDS), we evaluated an electroretinogram (ERG) function that allows moving the center of the multifocal hexagonal stimulus array to a chosen position. Macular dystrophy is a generalized term used for a variety of functional disorders of the macula lutea, and the ERG shows a diminution of the b-wave in these disorders. AZOOR causes an acute functional disorder of an outer layer of the retina, and the ERG shows a-wave and b-wave amplitude reduction as well as delayed 30 Hz flicker responses. MEWDS causes acute visual loss, and the ERG shows a decrease in a-wave amplitude. We combined an electroretinographic optical system and a perimetric optical system into an experimental apparatus that has the same optical system as that of a fundus camera. We also deployed an Edmund EO-50231 infrared camera, a 45-degree cold mirror, a lens with a 25-mm focal length, a halogen lamp, and an 8-inch monitor. We then employed a differential amplifier with a gain of 10, a 50 Hz notch filter, a high-pass filter with a 21.2 Hz cut-off frequency, and two non-inverting amplifiers with gains of 1001 and 11. In addition, we used a National Instruments USB-6216 I/O device, a Nihon Kohden NE-113A plate electrode, an SCB-68A shielded connector block, and LabVIEW 2017 software for data retrieval. Software written in C++Builder 10.2 was used to generate the multifocal hexagonal stimulus array on the computer monitor and to move the center of the array left, right, up and down. Cone and bright-flash ERG results were observed using the moving ERG function. The a-wave, b-wave, c-wave, and the photopic negative response were identified with the cone ERG. The moving ERG function allowed the identification of the retinal layer causing visual alterations.
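
The amplifier and filter chain described above is analogue hardware; purely as an illustration of the equivalent digital processing of a recorded ERG trace (the 50 Hz notch and 21.2 Hz high-pass stages), here is a Python/SciPy sketch. The sampling rate, filter orders and the synthetic test signal are assumptions, not values from the paper.

```python
import numpy as np
from scipy import signal

fs = 2000.0                    # assumed sampling rate [Hz], not stated in the abstract

# 50 Hz notch (mains interference) and 21.2 Hz high-pass, mirroring the analogue chain
b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
b_hp, a_hp = signal.butter(2, 21.2, btype="highpass", fs=fs)

def condition(erg_raw: np.ndarray) -> np.ndarray:
    """Apply gain, mains notch and high-pass filtering to a raw ERG trace."""
    x = erg_raw * 10.0                           # differential amplifier gain of 10
    x = signal.filtfilt(b_notch, a_notch, x)     # remove 50 Hz interference
    x = signal.filtfilt(b_hp, a_hp, x)           # remove slow baseline drift
    return x * 1001.0 * 11.0                     # the two non-inverting gain stages

# Synthetic demo trace: a 30 Hz flicker-like component plus 50 Hz mains pickup
t = np.arange(0, 1.0, 1 / fs)
demo = 1e-6 * np.sin(2 * np.pi * 30 * t) + 5e-6 * np.sin(2 * np.pi * 50 * t)
print(condition(demo).shape)
```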

Keywords: Moving ERG, multifocal ERG, precise perimetry, retinal layers, visual sensitivity.

20 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry

Authors: Naoto Suzuki

Abstract:

Japanese people are affected by the most common causes of eyesight loss, such as glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with a fundus camera, precise fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the optical engineering software OpTaliX-LT 10.8 from the German company Optenso. We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. We made an artificial eye from a plano-convex lens, a black spacer, and a hemispherical cup; the hemispherical cup had a small piece of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created in C++Builder 10.2 to display the examination target on the monitor and save the examination data. The retinal fundus was displayed on the monitor with a length and width of 1 mm corresponding to a resolution of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The rear part of the artificial eye was painted black, with a white central area, and it was rotated 10 degrees from one side to the other. The movement was captured five times as motion videos. Three static images were extracted from one of the captured motion videos; they display the artificial eye facing the center, right, and left directions. The three images were processed using Scilab 6.1.0 and the Image Processing and Computer Vision Toolbox 4.1.2, with steps including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the center of the artificial eye's fundus, we added a center-of-gravity method to the program to calculate the centroid of the connected components. From the three images, the image processing could calculate the center position.
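
The authors implement the gravity (centroid) step in Scilab; an equivalent sketch in Python is given below for illustration only: binarize the image, label the connected components, and take the center of gravity of the largest one. The threshold and the synthetic test image are placeholders.

```python
import numpy as np
from scipy import ndimage

def component_centroid(gray: np.ndarray, threshold: float = 0.5):
    """Binarize an image, label its connected components and return the centroid
    (center of gravity) of the largest component as (row, col)."""
    binary = gray > threshold
    labels, n = ndimage.label(binary)
    if n == 0:
        return None
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(binary, labels, largest)

# Synthetic test: a bright disc centred at (60, 40) in a 100 x 100 image
yy, xx = np.mgrid[0:100, 0:100]
img = ((yy - 60) ** 2 + (xx - 40) ** 2 < 15 ** 2).astype(float)
print(component_centroid(img))   # approximately (60.0, 40.0)
```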

Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.

19 Trimmed Mean as an Adaptive Robust Estimator of a Location Parameter for Weibull Distribution

Authors: Carolina B. Baguio

Abstract:

One of the purposes of robust methods of estimation is to reduce the influence of outliers in the data on the estimates. Outliers arise from gross errors or contamination from distributions with long tails. The trimmed mean is a robust estimate; this means that it is not sensitive to violations of the distributional assumptions of the data. It is called an adaptive estimate when the trimming proportion is determined from the data rather than being fixed a priori. The main objective of this study is to find out the robustness properties of adaptive trimmed means in terms of efficiency, high breakdown point and influence function. Specifically, it seeks to find the magnitude of the trimming proportion of the adaptive trimmed mean which will yield efficient and robust estimates of the parameter for data which follow a modified Weibull distribution with parameter λ = 1/2, where the trimming proportion is determined by a ratio of two trimmed means defined as the tail length. Secondly, the asymptotic properties of the tail length and the trimmed means are also investigated. Finally, a comparison is made of the efficiency of the adaptive trimmed means, in terms of the standard deviation, for the trimming proportions determined adaptively and when these were fixed a priori. The asymptotic tail lengths, defined as the ratio of two trimmed means, and the asymptotic variances were computed using the formulas derived, while the values of the standard deviations for the derived tail lengths, for data of size 40 simulated from a Weibull distribution, were computed over 100 iterations using a computer program written in the Pascal language. The findings of the study revealed that the tail lengths of the Weibull distribution increase in magnitude as the trimming proportions increase; the measure of the tail length and the adaptive trimmed mean are asymptotically independent as the number of observations n becomes very large or approaches infinity; the tail length is asymptotically distributed as the ratio of two independent normal random variables; and the asymptotic variances decrease as the trimming proportions increase. The simulation study revealed empirically that the standard error of the adaptive trimmed mean using the ratio of tail lengths is relatively smaller, for different values of trimming proportions, than that of its counterpart when the trimming proportions were fixed a priori.
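
The paper's exact tail-length statistic and adaptive rule are not reproduced in the abstract; the Python sketch below only illustrates the general idea of choosing a trimming proportion from a data-driven tail-length measure and then computing a trimmed mean. The Hogg-type tail-length ratio and the mapping from its value to a trimming proportion are assumptions made for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hogg_tail_length(x: np.ndarray) -> float:
    """A Hogg-type tail-length ratio: spread of the outer 5% tails divided by the
    spread of the outer 25% fractions. Used here only as an illustrative statistic."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k5, k25 = max(1, int(0.05 * n)), max(1, int(0.25 * n))
    outer = x[-k5:].mean() - x[:k5].mean()
    inner = x[-k25:].mean() - x[:k25].mean()
    return outer / inner

def adaptive_trimmed_mean(x: np.ndarray):
    """Pick a trimming proportion from the tail-length statistic, then trim."""
    q = hogg_tail_length(x)
    # Hypothetical mapping from tail heaviness to trimming proportion
    alpha = 0.05 if q < 2.0 else (0.10 if q < 2.6 else 0.25)
    return alpha, stats.trim_mean(x, alpha)

# Simulated sample of size 40 from a Weibull distribution with shape parameter 1/2
sample = rng.weibull(0.5, size=40)
print(adaptive_trimmed_mean(sample))
```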

Keywords: Adaptive robust estimate, asymptotic efficiency, breakdown point, influence function, L-estimates, location parameter, tail length, Weibull distribution.

18 An Induction Motor Drive System with Intelligent Supervisory Control for Water Networks Including Storage Tank

Authors: O. S. Ebrahim, K. O. Shawky, M. A. Badr, P. K. Jain

Abstract:

This paper describes an efficient, low-cost, high-availability induction motor (IM) drive system with intelligent supervisory control for water distribution networks including a storage tank. To increase operational efficiency and reduce cost, the IM drive system includes a main pumping unit and an auxiliary voltage source inverter (VSI)-fed unit. The main unit comprises a smart star/delta starter, a regenerative fluid clutch, a switched VAR compensator, and a hysteresis liquid-level controller. A three-state energy saving mode (ESM) is defined at no load, and a logic algorithm is developed for the best energy cost reduction. To reduce voltage sag, the supervisory controller operates the switched VAR compensator upon motor starting. To provide a smart star/delta starter at low cost, a method based on current sensing is developed for interlocking, malfunction detection, and life-cycle counting, and is used to synthesize an improved fuzzy logic (FL) based availability assessment scheme. Furthermore, a recurrent neural network (RNN) full state estimator is proposed to provide a sensor fault-tolerant algorithm for the feedback control. The auxiliary unit works at low flow rates and improves the system's efficiency and flexibility for distributed generation during islanding mode. Compared with a doubly-fed IM, the proposed system ensures 30% working throughput under main motor/pump fault conditions, higher efficiency, and a marginal cost difference, which is critically important in the case of water networks. Theoretical analysis, computer simulations, a cost study, and an efficiency evaluation using timely cascaded energy-conservative systems are performed on an IM experimental setup to demonstrate the validity and effectiveness of the proposed drive and control.

Keywords: Artificial Neural Network, ANN, Availability Assessment, Cloud Computing, Energy Saving, Induction Machine, IM, Supervisory Control, Fuzzy Logic, FL, Pumped Storage.

17 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards is moving towards a trend of slimness, and this change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keyboard. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with a low typing speed primarily relied on vision to recognize the keys, while those with a high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction (L-shaped, 3 mm, and 60±10 g) was identified as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces with which people interact.
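
The study evaluates factor levels with Taguchi signal-to-noise ratios; as an illustration, the standard "larger-the-better" formula, S/N = -10·log10(mean(1/y²)), is applied below in Python to some invented typing-accuracy replicates per key-shape level. The replicate values are placeholders, not data from the experiment.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi 'larger-the-better' signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical typing-accuracy replicates (%) for the three key-shape levels
levels = {"circle": [88, 90, 87], "rectangle": [91, 92, 90], "L-shaped": [94, 95, 93]}
for name, scores in levels.items():
    print(f"{name:10s} S/N = {sn_larger_is_better(scores):.2f} dB")
# The level with the highest S/N ratio would be selected as optimal for that factor.
```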

Keywords: Input performance, mobile device, slim keyboard, tactile feedback.

16 Development of a Basic Robot System for Medical and Nursing Care for Patients with Glaucoma

Authors: Naoto Suzuki

Abstract:

Medical methods to completely treat glaucoma are yet to be developed; therefore, ophthalmologists manage patients mainly to delay disease progression. Patients with glaucoma are mainly elderly individuals, and having equipment in the home that can provide medical treatment and care can relieve their families of some of the burden of care. To enable elderly people with glaucoma to live by themselves as much as possible, we developed a support robot with five functions: care for elderly people, ophthalmological examination, trip assistance around the neighborhood, medical treatment, and data referral to a hospital. The medical and nursing care robot should approach from within the visual field that the patients can see, at a speed suitable for their eyesight, because the robot would be dangerous if it approached the patients from a direction they cannot see. We experimentally developed a robot that brings a white cane to elderly people with glaucoma. The base of the robot is a carriage, a Megarover 1.1, which has two infrared sensors. The robot moves along a white line on the floor using the infrared sensors and has a special arm, which does not use electricity; the arm can scoop up the block attached to the white cane. We also developed a direction detector comprising a charge-coupled device camera (SVR41ResucueHD; Sun Mechatronics), goggles (MG-277MLF; Midori Anzen Co. Ltd.), and biconvex lenses with a focal length of 25 mm (Edmund Co.). Some young people were photographed wearing the direction detector on their faces. Image processing was performed using Scilab 6.1.0 and the Image Processing and Computer Vision Toolbox 4.1.2. To measure the participants' line of vision, we calculated the center of gravity of the iris using five processes: reduction, trimming, binarization or gray scale, edge extraction, and the Hough transform. We compared the binarization and gray scale processes, and the binarization process was better. For edge extraction, we compared five methods: Sobel, Prewitt, Laplacian of Gaussian, fast Fourier transform, and Canny; the Canny method was the optimal extraction method. We performed the Hough transform to search for the main coordinates from the iris's edge and found that the Hough transform could calculate the center point of the iris.
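
The pipeline above is implemented by the authors in Scilab; purely as a rough Python/OpenCV analogue of the edge-extraction plus circular Hough step, note that cv2.HoughCircles with the HOUGH_GRADIENT method applies Canny edge detection internally (param1 is its upper threshold), so a single call loosely mirrors that part of the procedure. All parameter values and the file name below are illustrative assumptions, not the authors' settings.

```python
import cv2

def iris_center(image_path: str):
    """Estimate the iris centre with a circular Hough transform (OpenCV)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    gray = cv2.medianBlur(gray, 5)               # suppress specular highlights
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=150, param2=30, minRadius=20, maxRadius=80)
    if circles is None:
        return None
    x, y, r = circles[0][0]                      # strongest candidate circle
    return float(x), float(y), float(r)

# Hypothetical usage; "eye_frame.png" is a placeholder, not a file from the study.
# print(iris_center("eye_frame.png"))
```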

Keywords: Glaucoma, support robot, elderly people, Hough transform, direction detector, line of vision.

15 Exercise and Cognitive Function: Time Course of the Effects

Authors: Simon B. Cooper, Stephan Bandelow, Maria L. Nute, John G. Morris, Mary E. Nevill

Abstract:

Previous research has indicated a variable effect of exercise on adolescents' cognitive function. However, comparisons between studies are difficult to make due to differences in the mode, intensity and duration of exercise employed; the components of cognitive function measured (and the tests used to assess them); and the timing of the cognitive function tests in relation to the exercise. Therefore, the aim of the present study was to assess the time course (10 and 60 min post-exercise) of the effects of 15 min of intermittent exercise on cognitive function in adolescents. Forty-five adolescents were recruited to participate in the study and completed two main trials (exercise and resting) in a counterbalanced crossover design. Participants completed 15 min of intermittent exercise (in cycles of 1 min exercise, 30 s rest). A battery of computer-based cognitive function tests (the Stroop test, the Sternberg paradigm and a visual search test) was completed 30 min pre- and 10 and 60 min post-exercise (to assess attention, working memory and perception, respectively). The findings of the present study indicate that, on the baseline level of the Stroop test, response times 10 min following exercise were slower than at any other time point on either trial (trial by session time interaction, p = 0.0308). However, this slowing of responses also tended to produce enhanced accuracy 10 min post-exercise on the baseline level of the Stroop test (trial by session time interaction, p = 0.0780). Similarly, on the complex level of the visual search test there was a slowing of response times 10 min post-exercise (trial by session time interaction, p = 0.0199); however, this was not coupled with an improvement in accuracy (trial by session time interaction, p = 0.2349). The mid-morning bout of exercise did not affect response times or accuracy across the morning on the Sternberg paradigm. In conclusion, the findings of the present study suggest an equivocal effect of exercise on adolescents' cognitive function. The mid-morning bout of exercise appears to cause a speed-accuracy trade-off immediately following exercise on the Stroop test (participants become slower but more accurate), whilst slowing response times on the visual search test and having no effect on performance on the Sternberg paradigm. Furthermore, this work highlights the importance of the timing of the cognitive function tests relative to the exercise and of the components of cognitive function examined in future studies.

Keywords: Adolescents, cognitive function, exercise.

14 Engineering Topology of Construction Ecology for Dynamic Integration of Sustainability Outcomes to Functions in Urban Environments: Spatial Modeling

Authors: Moustafa Osman Mohammed

Abstract:

Integrating sustainability outcomes gives attention to construction ecology in the design review of urban environments so as to comply with the Earth's system, which is composed of integral parts (i.e., physical, chemical and biological components). Naturally, the exchange patterns of industrial ecology have consistent and periodic cycles that preserve energy and material flows in the Earth's system. When engineering topology affects internal and external processes in system networks, it postulates the valence of the first-level spatial outcome (i.e., project compatibility success). These instrumentalities are dependent on relating the second-level outcome (i.e., participant security satisfaction). The construction ecology-based topology (i.e., as a feedback energy system) flows from biotic and abiotic resources in the entire Earth's ecosystems. These spatial outcomes provide an innovation, as they entail a wide range of interactions to state, regulate and feed back "topology" so that it flows as an "interdisciplinary equilibrium" of ecosystems. The interrelation dynamics of ecosystems perform a process in a certain location and within an appropriate time to characterize their unique structure in "equilibrium patterns", such as the biosphere, collecting a composite structure of many distributed feedback flows. These interdisciplinary systems regulate their dynamics within complex structures. These dynamic mechanisms of the ecosystem regulate physical and chemical properties to enable a gradual and prolonged incremental pattern that develops a stable structure. The engineering topology of construction ecology for integrating sustainability outcomes offers an interesting tool for ecologists and engineers in the simulation paradigm, as an initial form of development structure within compatible computer software. This approach argues from ecology, resource savings, static load design, and financial and other pragmatic reasons, while from an artistic/architectural perspective these are not decisive. The paper describes an attempt to unify analytic and analogical spatial modeling in developing urban environments as a relational setting, using optimization software, applied as an example of integrated industrial ecology where the construction process is based on a topology optimization approach.

Keywords: Construction ecology, industrial ecology, urban topology, environmental planning.

13 Optimization of Mechanical Properties of Alginate Hydrogel for 3D Bio-Printing Self-Standing Scaffold Architecture for Tissue Engineering Applications

Authors: Ibtisam A. Abbas Al-Darkazly

Abstract:

In this study, the mechanical properties of alginate hydrogel material for self-standing 3D scaffold architecture with proper shape fidelity are investigated. An in-lab-built extrusion-based 3D bio-printer is utilized to fabricate 3D alginate scaffold constructs. The pressure, needle speed and stage speed are varied using a computer-controlled system. The experimental results indicate that the concentration of the alginate solution, the calcium chloride (CaCl2) cross-linking concentration and the cross-linking ratios lead to the formation of alginate hydrogel with various gelation states. In addition, the gelling conditions, such as cross-linking reaction time and temperature, also have a significant effect on the mechanical properties of the alginate hydrogel. Various experimental tests, such as material gelation, material spreading and printability tests for filament collapse, as well as a swelling test, were conducted to evaluate the fabricated 3D scaffold constructs. The results indicate that the 3D scaffold fabricated from a composition of 3.5 wt% alginate solution prepared in DI water and 1 wt% CaCl2 solution, with a cross-linking ratio of 7:3, shows good printability and sustains good shape fidelity for more than 20 days, compared to alginate hydrogel prepared in phosphate-buffered saline (PBS). The fabricated self-standing 3D scaffold constructs, measuring 30 mm × 30 mm and consisting of 4 layers (n = 4), show good pore geometry and a clear grid structure after printing. In addition, the percentage change of the swelling degree exhibits high swelling capability with respect to time. The swelling test shows that the geometry of the 3D alginate scaffold construct and of the macro-pores changes very little, which indicates the capability of holding shape fidelity during the incubation period. This study demonstrated that the mechanical and physical properties of alginate hydrogel can be tuned for a 3D bio-printing extrusion-based system to fabricate self-standing 3D scaffold soft structures. This 3D bioengineered scaffold provides a natural microenvironment like that present in the extracellular matrix of the tissue and could be seeded with biological cells to generate the desired 3D live tissue model for in vitro and in vivo tissue engineering applications.

Keywords: Biomaterial, calcium chloride, 3D bio-printing, extrusion, scaffold, sodium alginate, tissue engineering.

12 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance Based Design for Near Field Earthquakes

Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi

Abstract:

Non-linear dynamic time-history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under the influence of earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among professional engineers, as they are not only difficult to access but also complex and time-consuming to use. In addition, commercial computer programs for structural analysis and design that are acceptable to practicing engineers do not generally integrate advanced hysteretic models which can accurately simulate the hysteresis behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation and 'pinching' under cyclic load reversals in the inelastic range of behavior. In this scenario, push-over or non-linear static analysis methods have gained significant popularity, as they can be employed to assess the seismic performance of building structures while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis, and they offer a practical and efficient alternative for rationally evaluating seismic demands. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry-infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures that implement non-linear static (pushover) analysis in conjunction with the response spectrum concept. An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for the performance-based design of masonry-infilled R/C frames for near-field earthquake ground motions.
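
The abstract does not reproduce the conversion equations; as a reference illustration, the standard ATC-40-style conversion of a pushover curve (base shear V, roof displacement) to the capacity spectrum ADRS format uses Sa = (V/W)/α1 and Sd = Δroof/(PF1·φ1,roof). The Python sketch below applies these formulas to entirely invented pushover points; the modal quantities are placeholders, not values from the study.

```python
import numpy as np

def capacity_spectrum(base_shear, roof_disp, weight, alpha1, pf1, phi1_roof):
    """Convert a pushover curve to ADRS (Sa, Sd) format, ATC-40 style.

    base_shear : array of base shear values V [kN]
    roof_disp  : array of roof displacements [m]
    weight     : seismic weight of the structure W [kN]
    alpha1     : modal mass coefficient of the first mode
    pf1        : modal participation factor of the first mode
    phi1_roof  : first-mode shape amplitude at the roof
    """
    sa = (np.asarray(base_shear, dtype=float) / weight) / alpha1   # spectral acceleration [g]
    sd = np.asarray(roof_disp, dtype=float) / (pf1 * phi1_roof)    # spectral displacement [m]
    return sa, sd

# Entirely hypothetical pushover points for a small infilled frame
V = [0, 400, 700, 850, 900]          # kN
d = [0, 0.01, 0.03, 0.06, 0.10]      # m
print(capacity_spectrum(V, d, weight=5000.0, alpha1=0.8, pf1=1.3, phi1_roof=1.0))
```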

Keywords: Nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes.

11 An Analysis of Gamification in the Post-Secondary Classroom

Authors: F. Saccucci

Abstract:

Gamification has now started to take root in the post-secondary classroom. Educators have learned much about gamification to date, but there is still a great deal to learn. One definition of gamification is the ability to engage post-secondary students with games that are fun and correlate to the classroom curriculum. There is no shortage of literature illustrating the advantages of gamification in the classroom. This study is an extension of similar thought, as well as an extension of a previous study in which in-class testing showed, using a paired t-test, that gamification did significantly improve the students' understanding of the subject material. Gamification in the classroom can range from high-end computer-simulated software to paper-based games, both of which have advantages and disadvantages. This analysis used a paper-based game to highlight certain qualitative advantages of gamification. The paper-based game in this analysis was inexpensive, required little preparation time from the faculty member and consumed approximately 20 minutes of classroom time. Data for the study were collected through in-class student feedback surveys and narrative from the faculty member moderating the game. Students were randomly selected into groups of four. The qualitative advantages identified in this analysis included: 1. Students had a chance to meet, connect with and get to know other students. 2. Students enjoyed the gamification process, given the sense of fun and competition. 3. The post-assessment that followed the simulation game was not part of their grade calculation; therefore, it was an opportunity to participate in a low-risk activity whereby students could subsequently self-assess their understanding of the subject material. 4. In the view of the students, content knowledge did increase after the gamification process. These qualitative advantages contribute to the argument that there should be an attempt to use gamification in today's post-secondary classroom. The analysis also highlighted that eighty (80) percent of the respondents believed that the twenty minutes devoted to the gamification process was appropriate; however, twenty (20) percent of respondents believed that, rather than scheduling the gamification process and its post-quiz in the last week, a review for the final exam might have been more useful. A follow-up study hopes to determine whether the scheduling of the gamification had any correlation with the percentage of students not wanting to be engaged in the process. It also hopes to determine at what incremental level of time invested in classroom gamification no material incremental benefit to the student is produced, and whether any correlation exists between respondents preferring not to have the activity at the end of the semester and students not believing that the gamification process added to their curricular knowledge.
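
The earlier study referenced above evaluated learning gains with a paired t-test on pre- and post-game scores; as a minimal illustration of that test (the scores below are invented, not data from either study), a Python/SciPy sketch:

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and post-game quiz scores for the same ten students
pre  = np.array([62, 55, 70, 48, 66, 58, 73, 61, 50, 68], dtype=float)
post = np.array([70, 60, 74, 55, 71, 65, 78, 66, 58, 72], dtype=float)

t_stat, p_value = stats.ttest_rel(post, pre)   # paired (related-samples) t-test
print(f"mean improvement = {np.mean(post - pre):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```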

Keywords: Gamification, inexpensive, qualitative advantages, post-secondary.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 821
10 Artificial Neural Network Model Based Setup Period Estimation for Polymer Cutting

Authors: Zsolt János Viharos, Krisztián Balázs Kis, Imre Paniti, Gábor Belső, Péter Németh, János Farkas

Abstract:

The paper presents the results and industrial applications of production setup period estimation based on industrial data from the field of polymer cutting. The literature on polymer cutting is very limited in terms of the number of publications. The first polymer cutting machine has been known since the second half of the 20th century; however, the production of polymer parts with this kind of technology is still a challenging research topic. The products of the participating industrial partner must meet high technical requirements, as they are used in the medical, measurement instrumentation and painting industry branches. Typically, 20% of these parts are new work, which means that every five years almost the entire product portfolio is replaced in their low-series manufacturing environment. Consequently, a flexible production system is required, where the estimation of the lengths of the frequent setup periods is one of the key success factors. In the investigation, several (input) parameters were studied and grouped to create an adequate training information set for an artificial neural network as a basis for estimating the individual setup periods. The first group collects product information such as the product name and number of items. The second group contains material data like material type and colour. The third group collects surface quality and tolerance information, including the finest surface and tightest (or narrowest) tolerance. The fourth group contains setup data like machine type and work shift. One source of these parameters is the Manufacturing Execution System (MES), but some data were also collected from Computer Aided Design (CAD) drawings. The number of applied tools is one of the key factors on which the industrial partner’s estimations were based previously. The artificial neural network model was trained on several thousand real industrial data records. The mean estimation accuracy of the setup period lengths was improved by 30%, and at the same time the deviation of the prognosis was also improved by 50%. Furthermore, the mentioned parameter groups were also investigated with respect to the manufacturing order. The paper also highlights the experiences of the manufacturing introduction and further improvements of the proposed methods, both on the shop floor and in quotation preparation. Every week more than 100 real industrial setup events occur and the related data are collected.
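A minimal sketch of the kind of setup-period estimator described above, built from the four parameter groups, could look like the following; the column names, categories, network size and training values are illustrative assumptions, not the industrial data set or the authors' network architecture.

```python
# Sketch of a setup-period regression model using the four described
# parameter groups as inputs. All data values are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPRegressor

data = pd.DataFrame({
    # group 1: product information
    "product_name": ["seal_A", "gauge_B", "seal_A", "nozzle_C"],
    "n_items":      [120, 30, 200, 15],
    # group 2: material data
    "material": ["PTFE", "POM", "PTFE", "PEEK"],
    "colour":   ["white", "black", "white", "natural"],
    # group 3: surface quality / tolerance
    "finest_surface_um": [0.8, 1.6, 0.8, 0.4],
    "tightest_tol_mm":   [0.02, 0.05, 0.02, 0.01],
    # group 4: setup data
    "machine": ["M1", "M2", "M1", "M3"],
    "shift":   ["day", "night", "day", "day"],
})
setup_minutes = [45, 75, 40, 110]   # observed setup period lengths (target)

categorical = ["product_name", "material", "colour", "machine", "shift"]
numeric = ["n_items", "finest_surface_um", "tightest_tol_mm"]

model = Pipeline([
    ("prep", ColumnTransformer([
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
        ("num", StandardScaler(), numeric),
    ])),
    ("ann", MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)),
])
model.fit(data, setup_minutes)
print(model.predict(data.iloc[[0]]))   # estimated setup period for the first order
```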

Keywords: Artificial neural network, low series manufacturing, polymer cutting, setup period estimation.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 932
9 The Development and Testing of a Small Scale Dry Electrostatic Precipitator for the Removal of Particulate Matter

Authors: Derek Wardle, Tarik Al-Shemmeri, Neil Packer

Abstract:

This paper presents a small tube/wire type electrostatic precipitator (ESP). In the ESP’s present form, the particle charging and collecting voltages and airflow rates were individually varied throughout 200 ambient temperature test runs, ranging from 10 to 30 kV in increments of 5 kV and from 0.5 m/s to 1.5 m/s, respectively. It was repeatedly observed that, at input air velocities of between 0.5 and 0.9 m/s and voltage settings of 20 kV to 30 kV, the collection efficiency remained above 95%. The outcomes of preliminary tests at combustion flue temperatures are, at present, inconclusive, although indications are that there is little or no drop in comparable performance under ideal test conditions. A limited set of similar tests was carried out during which the collecting electrode was grounded, having been disconnected from the static generator. The collection efficiency fell significantly, and for that reason this approach was not pursued further. The collection efficiencies during ambient temperature tests were determined by mass balance between incoming and outgoing dry PM. The efficiencies of the combustion temperature runs were determined by analysing the difference in opacity of the flue gas at inlet and outlet compared to a reference light source. In addition, an array of Leit tabs (carbon-coated, electrically conductive adhesive discs) was placed at inlet and outlet for a number of four-day continuous ambient temperature runs. Analysis of the discs’ contamination was carried out using scanning electron microscopy and the ImageJ computer software, which confirmed collection efficiencies of over 99% and gave unequivocal support to all the previous tests. The average efficiency for these runs was 99.409%. Emissions collected from a woody biomass combustion unit, classified to a diameter of 100 µm, were used in all ambient temperature test runs apart from two, which collected airborne dust from within the laboratory. Sawdust and wood pellets were chosen for the laboratory and field combustion trials. Video recordings were made of three ambient temperature test runs in which the smoke from a wood smoke generator was drawn through the precipitator. Although these runs were visual indicators only, with no objective other than demonstration, they provided a strong argument for the device’s claimed efficiency, as no emissions were visible at the exit when the precipitator was energised. The theoretical performance of ESPs, when applied to the geometry and configuration of the tested model, was compared to the actual performance and was shown to be in good agreement with it.
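For orientation, the sketch below contrasts the mass-balance efficiency used in the ambient-temperature runs with the classical Deutsch-Anderson estimate commonly used for the theoretical performance of tube/wire ESPs; the dimensions, masses and migration velocity are assumed values, not measurements from the test rig.

```python
# Mass-balance efficiency vs. Deutsch-Anderson estimate for a tube/wire ESP.
# All input values are illustrative assumptions.
import math

# Mass-balance efficiency from incoming/outgoing dry particulate matter
pm_in_g, pm_out_g = 12.50, 0.07          # grams over a run (assumed)
eta_measured = 1.0 - pm_out_g / pm_in_g
print(f"Mass-balance efficiency: {eta_measured:.2%}")

# Deutsch-Anderson estimate: eta = 1 - exp(-w * A / Q)
tube_diameter = 0.15       # m, assumed collecting tube diameter
tube_length = 1.0          # m, assumed
air_velocity = 0.7         # m/s, within the 0.5-0.9 m/s band reported
w = 0.12                   # m/s, assumed effective migration velocity

A = math.pi * tube_diameter * tube_length                 # collecting area, m^2
Q = air_velocity * math.pi * (tube_diameter / 2) ** 2     # volumetric flow, m^3/s
eta_theory = 1.0 - math.exp(-w * A / Q)
print(f"Deutsch-Anderson efficiency: {eta_theory:.2%}")
```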

Keywords: Electrostatic precipitators, air quality, particulates emissions, electron microscopy, ImageJ.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1092
8 Beginning Physics Experiments Class Using Multi Media in National University of Laos

Authors: T. Nagata, S. Xaphakdy, P. Souvannavong, P. Chanthamaly, K. Sithavong, C. H. Lee, S. Phommathat, V. Srithilat, P. Sengdala, B. Phetarnousone, B. Siharath, X. Chemcheng, T. Yamaguchi, A. Suenaga, S. Kashima

Abstract:

The National University of Laos (NUOL) requested Japan International Cooperation Agency (JICA) volunteers to begin a physics experiments class using multimedia. However, there were issues: NUOL had no physics experiment class and no space for physics experiments, the experiment materials had not been used for many years and were scattered in various places, and there was no projector or laptop computer in the unit. This raised the question: how should the authors begin the physics experiments class using multimedia? To solve this problem, the JICA volunteers took stock of what was available and reviewed the syllabus. They then revised the experiment materials and developed textbooks for experiments using them; however, the question remained of how to provide the multimedia component of the course. Next, they reviewed physics teacher Pavy Souvannavong’s YouTube channel, where he and his students upload video reports of their physics classes at NUOL using their smartphones. While these videos use multimedia, almost all of them were recordings of class presentations. To improve the multimedia style, the authors edited the videos in the style of another YouTube channel, “Science for Lao,” a science education group made up of Japan Overseas Cooperation Volunteers (JOCV) in Laos. The group created the channel to enhance science education in Laos, and holds regular monthly meetings in the capital, Vientiane, and at teacher training colleges in the country. The video clips are edited in three parts: a materials and procedures part including pictures, practice footage of the experiment, and a results and conclusion part. Students then perform the experiments and prepare their presentations by following the videos. The revised experiment presentation reports use PowerPoint presentations, material pictures and experiment video clips. For providing textbooks and submitting reports, the students use the “Moodle” e-learning system of the Information Technology Center on the Dongdok campus of NUOL; the Korea International Cooperation Agency (KOICA) donated those facilities. The authors have gone through the process of revising the materials, developing textbooks, having students present PowerPoint slides, and distributing textbooks and collecting reports online, in order to begin the physics experiments class using multimedia. This is a practice research report on beginning a physics experiments class using multimedia in the physics unit of the Department of Natural Science, Faculty of Education, at NUOL.

Keywords: NUOL, JICA, KOICA, Physics experiment materials, smart phone, Moodle, IT center, Science for Lao.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1681
7 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship

Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris

Abstract:

A gas turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet due to variations in weather conditions have negative effects on the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar tend to affect the power output and other thermodynamic parameters of the gas turbine cycle. Therefore, this paper seeks to evaluate how a simple-cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also focuses on the effect of varying aerodynamic and hydrodynamic conditions which degrade the efficient operation of the propulsion system due to the increase in resistance resulting from projected levels of ship hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages covering a variety of weather conditions in the winter, spring and summer seasons. Actual daily temperatures and sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature, power output and the total fuel consumed per voyage, among other performance variables, were separately predicted under both calm and adverse weather conditions. The collated data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, adopting the Beaufort scale to determine the magnitude of sea waves resulting from rough weather. The simulation of the gas turbine performance and the voyage analysis were carried out using the integrated Cranfield University computer codes ‘Turbomatch’ and ‘Poseidon’. The project is aimed at developing a method for predicting the off-design behavior of a marine gas turbine when installed and operated as the main prime mover for both propulsion and powering of all other auxiliary services on board a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of the marine gas turbine part- and full-load performance as it relates to the fuel requirement for a complete voyage.
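As a very crude stand-in for the detailed Turbomatch off-design calculation, the sketch below shows how ambient temperature and pressure deviations from the ISO reference might be translated into a first-order power correction; the rated output and derate coefficient are assumptions for illustration only.

```python
# Crude first-order sketch of ambient effects on gas turbine output.
# The rating and sensitivity coefficient are illustrative assumptions.
T_ISO_C = 15.0         # ISO reference inlet temperature, deg C
P_ISO_BAR = 1.013      # ISO reference ambient pressure, bar
RATED_POWER_MW = 25.0  # assumed rated output at ISO conditions
DERATE_PER_DEG = 0.007 # assumed ~0.7% output loss per deg C above ISO

def approx_power(t_amb_c: float, p_amb_bar: float) -> float:
    """Very rough ambient correction: linear temperature derate plus
    proportional scaling with ambient pressure (inlet air density effect)."""
    temp_factor = 1.0 - DERATE_PER_DEG * (t_amb_c - T_ISO_C)
    pressure_factor = p_amb_bar / P_ISO_BAR
    return RATED_POWER_MW * temp_factor * pressure_factor

# Example: cool Atlantic leg vs. hot Red Sea leg of the voyage
for label, t, p in [("Atlantic, winter", 12.0, 1.020), ("Red Sea, summer", 38.0, 1.005)]:
    print(f"{label}: ~{approx_power(t, p):.1f} MW available")
```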

Keywords:

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 806
6 Modelling of Groundwater Resources for Al-Najaf City, Iraq

Authors: Hayder H. Kareem, Shunqi Pan

Abstract:

Groundwater is a vital water resource in many areas of the world, particularly in the Middle East region, where water resources are becoming scarce and depleted. Sustainable management and planning of groundwater resources are essential and urgent given the impact of global climate change. In recent years, numerical models have been widely used to predict flow patterns and to assess water resource security, as well as groundwater quality affected by transported contaminants. In this study, MODFLOW is used to examine the current status of groundwater resources and the risk to water resource security in the region centred on Al-Najaf City, which is located in the mid-west of Iraq, adjacent to the Euphrates River. A conceptual model is built using the geologic and hydrogeologic data collected for the region, together with Digital Elevation Model (DEM) data obtained from the Global Land Cover Facility (GLCF) and the United States Geological Survey (USGS) for the study area. The computer model also incorporates the distribution of 69 wells in the area, with steady, pre-defined hydraulic heads along its boundaries. The model is then applied with a recharge rate (from precipitation) of 7.55 mm/year, derived from the analysis of field data in the study area for the period 1980-2014. The hydraulic conductivity from measurements at the well locations is interpolated for model use. The model is calibrated against the measured hydraulic heads at 50 of the 69 wells in the domain, and the results show good agreement: the standard error of estimate (SEE), root-mean-square error (RMSE), normalized RMSE and correlation coefficient are 0.297 m, 2.087 m, 6.899% and 0.971, respectively. Sensitivity analysis is also carried out, and the model is found to be sensitive to recharge, particularly when the rate is greater than 15 mm/year. Hydraulic conductivity is found to be another parameter which can affect the results significantly, and it therefore requires high-quality field data. The results show a general flow pattern from the west to the east of the study area, which agrees well with the observations and the gradient of the ground surface. It is found that, with the current operational pumping rates of the wells in the area, a dry area develops in Al-Najaf City due to the large quantity of groundwater withdrawn. The computed water balance with the current operational pumping quantity shows that the Euphrates River supplies approximately 11,759 m3/day to the groundwater, instead of gaining approximately 11,178 m3/day from the groundwater if there were no pumping from the wells. It is expected that the results obtained from the study can provide important information for the sustainable and effective planning and management of the regional groundwater resources for Al-Najaf City.
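The calibration statistics quoted above can be reproduced from observed and simulated heads as in the short sketch below; the head values are illustrative and do not come from the 50 observation wells used in the study.

```python
# RMSE, normalized RMSE and correlation coefficient from observed vs. simulated
# hydraulic heads. The head values are illustrative placeholders.
import numpy as np

observed  = np.array([21.3, 20.8, 19.5, 18.9, 17.6, 16.2, 15.8])   # m
simulated = np.array([21.0, 21.1, 19.2, 19.3, 17.2, 16.6, 15.5])   # m

residuals = simulated - observed
rmse = np.sqrt(np.mean(residuals ** 2))
nrmse = rmse / (observed.max() - observed.min()) * 100.0   # % of observed range
r = np.corrcoef(observed, simulated)[0, 1]

print(f"RMSE = {rmse:.3f} m, NRMSE = {nrmse:.2f} %, r = {r:.3f}")
```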

Keywords: Al-Najaf city, conceptual modelling, groundwater, unconfined aquifer, visual MODFLOW.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1855
5 Simulation and Optimization of Mechanisms made of Micro-molded Components

Authors: Albert Albers, Pablo Enrique Leslabay

Abstract:

The Institute of Product Development is dealing with the development, design and dimensioning of micro components and systems as a member of the Collaborative Research Centre 499 “Design, Production and Quality Assurance of Molded Micro Components made of Metallic and Ceramic Materials”. Because of technological restrictions in the miniaturization of conventional manufacturing techniques, shape and material deviations cannot be scaled down in the same proportion as the micro parts, yielding components with relatively wide tolerance fields. Systems that include such components should be designed with this particularity in mind, often requiring large clearances. In the end, the output of such systems is variable and prone to dynamic instability. To save production time and resources, any study of these effects should happen early in the product development process and be based on computer simulation to avoid costly prototypes. A suitable method is proposed here and applied, by way of example, to a micro technology demonstrator developed by the CRC 499. It consists of a one-stage planetary gear train in a sun-planet-ring configuration, with input through the sun gear and output through the carrier. The simulation procedure relies on ordinary multi-body simulation methods and subsequently adds other techniques to further investigate details of the system’s behavior and to predict its response. The selection of the relevant parameters and output functions followed the engineering standards for regular-sized gear trains. The first step is to quantify the variability and to reveal the most critical points of the system, performed through a whole-mechanism sensitivity analysis (SA). Due to the lack of previous knowledge about the system’s behavior, different design-of-experiments (DOE) methods involving small and large numbers of experiments were selected to perform the SA. In this particular case the parameter space can be divided into two well-defined groups, one containing the gears’ profile information and the other the components’ spatial location. This has been exploited to explore the different DOE techniques more efficiently. A reduced set of parameters is derived for further investigation and to feed the final optimization process, whether as optimization parameters or as an external perturbation collective. The 10 most relevant perturbation factors and 4 to 6 prospective variable parameters are considered in a new, simplified model. All of the parameters are affected by the mentioned production variability. The objective functions of interest are based on variability measures of the scalar outputs, so the problem becomes an optimization under robustness and reliability constraints. The study represents an initial step on the development path of a method to design and optimize complex micro mechanisms composed of widely toleranced elements, accounting for the robustness and reliability of the systems’ output.
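A compact sketch of a DOE-based whole-mechanism sensitivity screening of the kind described, here using Latin hypercube sampling on a toy transmission-error surrogate rather than the CRC 499 multi-body model, might look like this; the parameter names and ranges are assumptions.

```python
# DOE-based sensitivity screening sketch: Latin hypercube sampling of assumed
# manufacturing deviations, ranked by correlation with a toy output function.
import numpy as np
from scipy.stats import qmc

param_names = ["sun_profile_dev", "planet_profile_dev",
               "planet_position_dev", "ring_position_dev"]
lower = np.array([-5.0, -5.0, -10.0, -10.0])   # micrometres, assumed ranges
upper = np.array([ 5.0,  5.0,  10.0,  10.0])

sampler = qmc.LatinHypercube(d=4, seed=1)
X = qmc.scale(sampler.random(n=200), lower, upper)

def toy_transmission_error(x):
    # Toy surrogate: position deviations dominate, profile deviations interact
    return (0.2 * x[:, 0] + 0.3 * x[:, 1] + 1.0 * x[:, 2] + 0.8 * x[:, 3]
            + 0.05 * x[:, 0] * x[:, 1])

y = toy_transmission_error(X)

# Crude sensitivity ranking via correlation of each parameter with the output
for name, col in zip(param_names, X.T):
    r = np.corrcoef(col, y)[0, 1]
    print(f"{name:22s} correlation with output: {r:+.2f}")
```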

Keywords: Micro molded components, Optimization, Robustness and Reliability, Simulation

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1467
4 Rotary Machine Sealing Oscillation Frequencies and Phase Shift Analysis

Authors: Liliia N. Butymova, Vladimir Ya Modorskii

Abstract:

To ensure the efficient operation of a gas transmittal GCU, leakages through the labyrinth packings (LP) should be minimized. Leakages can be minimized by decreasing the LP gap, which in turn depends on thermal processes and possible rotor vibrations and is designed to ensure the absence of mechanical contact. Vibration mitigation allows the LP gap to be minimized, so it is advantageous to study the influence of processes in the dynamic gas-structure system on LP vibrations. This paper considers the influence of rotor vibrations on LP gas dynamics, and the influence of the latter on the rotor structure, within a unidirectionally coupled dynamic fluid-structure interaction (FSI) problem. Dependences of the nonstationary parameters of the gas-dynamic process in the LP on rotor vibrations were studied under various gas speeds and pressures, shaft rotation speeds, vibration amplitudes and working medium features. The multi-processor ANSYS CFX code was chosen as the numerical computation tool, and the problem was solved on the PNRPU high-performance computing complex. Deformed shaft vibrations are replaced with a rigid profile that moves in the fixed annulus “up and down” according to a prescribed harmonic law. This allows a nonstationary gas-dynamic problem to be solved and the time dependence of the total gas-dynamic force acting on the shaft to be determined. A pressure increase from 0.1 to 10 MPa causes growth of the gas-dynamic force oscillation amplitude and frequency, while the phase shift angle between the gas-dynamic force oscillations and those of the shaft displacement decreases from 3π/4 to π/2. The damping constant has its maximum value at a pressure of 1 MPa in the gap. An increase of the shaft oscillation frequency from 50 to 150 Hz at P = 10 MPa causes growth of the gas-dynamic force oscillation amplitude; the damping constant has its maximum value, 1.012, at 50 Hz. An increase of the shaft vibration amplitude from 20 to 80 µm at P = 10 MPa causes the gas-dynamic force amplitude to rise by up to 20 times, and the damping constant increases from 0.092 to 0.251. Calculations for various working substances (methane, perfect gas, air at 25 °C) show that the minimum persistent oscillating amplitude of the gas-dynamic force at P = 0.1 MPa is observed in methane, and the maximum in air; the frequency remains almost unchanged and the phase shift in air changes from 3π/4 to π/2. At P = 10 MPa the maximum gas-dynamic force oscillating amplitude is observed in methane and the minimum in air; air demonstrates surging. An increase of the leakage speed from 0 to 20 m/s through the LP at P = 0.1 MPa causes the gas-dynamic force oscillating amplitude to decrease by three orders of magnitude, while the oscillation frequency and the phase shift double and stabilize. An increase of the leakage speed from 0 to 20 m/s in the LP at P = 1 MPa causes the gas-dynamic force oscillating amplitude to decrease by almost four orders of magnitude; the phase shift angle increases from π/72 to π/2, and the oscillations become persistent. The flow rate proved to have a strong influence on the pressure oscillation amplitude and the phase shift angle. The influence of the working medium depends on the operating conditions: as pressure grows, vibrations are most strongly affected in methane (of the working substances considered), and as pressure decreases, in air at 25 °C.
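One practical detail behind the reported phase shifts is extracting the amplitude and phase at the drive frequency from the force and displacement time histories; a minimal sketch with synthetic signals standing in for the CFD output is given below (the drive frequency, amplitudes and the imposed lag are assumptions).

```python
# Estimate the phase shift between prescribed harmonic shaft motion and the
# resulting gas-dynamic force from time signals. Synthetic signals only.
import numpy as np

rng = np.random.default_rng(0)
f_drive = 50.0                   # shaft oscillation frequency, Hz (assumed)
t = np.arange(0.0, 0.2, 1.0e-4)  # time vector, s (10 full periods)

displacement = 40e-6 * np.sin(2 * np.pi * f_drive * t)                       # m
force = (850.0 * np.sin(2 * np.pi * f_drive * t - np.pi / 2)                 # N
         + 5.0 * rng.standard_normal(t.size))                                # noise

def phase_at(signal, t, f):
    """Phase of the Fourier component of `signal` at frequency f."""
    return np.angle(np.sum(signal * np.exp(-2j * np.pi * f * t)))

phase_shift = phase_at(force, t, f_drive) - phase_at(displacement, t, f_drive)
phase_shift = (phase_shift + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
print(f"Estimated phase shift: {phase_shift / np.pi:.2f} * pi rad")
```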

Keywords: Aeroelasticity, labyrinth packings, oscillation phase shift, vibration.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1524
3 Organizational Commitment of Anadolu University Open Education Faculty Students

Authors: Emine Demiray, Şensu Curabay

Abstract:

A distance education program is a dimension of contemporary and new education technologies. Concepts and applications in this field are the result of a series of educational demands and of developments in various communication and education technologies. Distance education applications have some conceptual bases: creating new education opportunities, uniting work and education, democratizing education, lifelong education, attention to individual matters, effective use of institutions, integration of technology and education, responsiveness to individual and social needs, taking three-dimensional integration as the main principle (publishing, printed materials and face-to-face education), reaching the maximum audience, integrating individual and mass education, and balancing education demand with financial matters. The Economics, Business Administration and Open Education faculties, which have been providing education within Anadolu University in Turkey since 1982, currently educate nearly 1,000,000 students. The aim of this study is to determine the organizational commitment levels of students studying at the Anadolu University Economics, Business Administration and Open Education faculties in terms of the affective, continuance and normative commitment of the Allen & Meyer model. The study examines the organizational commitment of these distance education students to their faculties, after introducing the Anadolu University distance education system, which provides higher education via distance education methods in Turkey. In order to increase the success level of the faculties, students need to have a high level of organizational commitment to them. A questionnaire based on the “Organizational Commitment Scale” developed by Meyer & Allen was applied to determine the organizational commitment of the students, treated as affective, continuance and normative commitment. The questionnaire was administered face to face to 500 randomly chosen students living in Eskişehir; the data were entered into the computer using the SPSS program and the results were analyzed in terms of the demographic features of the students (gender, age, marital status, years of study, work and income level) using frequency tests, t-tests and ANOVA tests. As a result of these analyses, when the Open Education Faculty students’ levels of affective, continuance and normative commitment to their faculties were examined, continuance commitment was revealed to be the highest. Among the female participants, continuance commitment is highest in the 30-40 age range and normative commitment in the 17-22 range, while no dominant age range was identified for affective commitment. Regarding marital status, the continuance commitment average is higher among married participants, but the normative and affective commitment averages are higher among single participants. As to years of study, affective and continuance commitment are higher among senior students, while normative commitment is higher among junior students. Moreover, in terms of continuance, affective and normative commitment, those who do not work and have a low income show higher levels of all three commitment types than those who work and have a relatively high income.
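The group comparisons described above (t-tests across two groups and ANOVA across several) can be sketched as follows with illustrative commitment scores; these numbers are not the survey data.

```python
# Independent-samples t-test and one-way ANOVA on commitment scale scores.
# All scores are illustrative placeholders, not the survey data.
from scipy import stats

# Mean continuance-commitment scores (1-5 Likert), by marital status (assumed)
married = [3.8, 4.1, 3.9, 4.3, 3.7, 4.0]
single  = [3.2, 3.5, 3.4, 3.1, 3.6, 3.3]
t, p = stats.ttest_ind(married, single)
print(f"Marital status t-test: t = {t:.2f}, p = {p:.4f}")

# Affective-commitment scores by years of study (assumed)
year1 = [3.1, 3.4, 3.2, 3.0]
year2 = [3.5, 3.6, 3.3, 3.7]
year4 = [3.9, 4.0, 4.2, 3.8]
f, p = stats.f_oneway(year1, year2, year4)
print(f"Years-of-study ANOVA: F = {f:.2f}, p = {p:.4f}")
```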

Keywords: Open education, Organizational commitment, Distance education.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1961