Search results for: computer aided instructional package
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3708

48 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine

Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski

Abstract:

Research in the field of aircraft internal combustion engines is currently driven by the need to decrease fuel consumption and CO2 emissions while maintaining the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreational aviation. Technically, most of them remain at a pre-war level of knowledge in the theory of operation, design and manufacturing technology, especially when compared to the high level of development of automotive engines. Typically, these engines are fuelled by carburetors of quite primitive construction. At present, due to environmental requirements and climate change, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research on the innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovation of this solution. Consequently, the required level of safety and better functionality compared to today's plug system can be guaranteed. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine's capability for efficient combustion of ecological fuels. New, redundant elements of the control system can improve the safety of the aircraft. The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities recorded by the measurement systems) for determining the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) velocity ranging from 1500 to 2200 rpm (every 100 rpm); c) loading ranging from propeller power to maximum power; d) altitude ranging according to the International Standard Atmosphere from 0 to 8000 m (every 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (different discharge energies) were studied. The analysis aimed at optimizing the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of the actuator systems. The studies enabled research on the sensitivity of the signals used to control the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption; thus, including spark management in the optimization is mandatory to significantly decrease fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
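As a sketch of the optimization logic described above, the following Python fragment illustrates how an optimal ignition angle (maximum torque at a given operating point) could be searched over the stated speed and altitude grid; `torque_model` and the spark-advance range are hypothetical stand-ins, not the authors' actual simulation code.

```python
import numpy as np

def mbt_angle(torque_model, speed_rpm, altitude_m, angles_deg):
    # Evaluate the simulated torque at each candidate spark advance and
    # return the angle of maximum torque at this operating point.
    torques = [torque_model(speed_rpm, altitude_m, a) for a in angles_deg]
    return angles_deg[int(np.argmax(torques))]

# Operating grid from the abstract: 1500-2200 rpm (every 100 rpm) and
# 0-8000 m ISA altitude (every 1000 m); the angle range is illustrative.
angles = np.arange(-40.0, 1.0, 1.0)   # spark advance, deg relative to TDC
grid = [(n, h) for n in range(1500, 2300, 100) for h in range(0, 9000, 1000)]
```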

Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization

Procedia PDF Downloads 387
47 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years has challenges, including resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on simulation for pre-clinical students, while the students' confidence levels and perceived competence still need to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at the International Medical University, involving pre-clinical year (medical) students who began low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning. The significance level was set at a p-value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback provided input on the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, skills and knowledge of the facilitator, and effective feedback.
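For readers unfamiliar with the two statistics named above, a minimal sketch of how they can be computed follows; the score arrays are invented placeholders, not the study's data (the study itself used Excel and IBM SPSS 28.0).

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    # Internal-consistency reliability; items: respondents x items array.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Placeholder Likert-scale responses (5 respondents x 4 items).
responses = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
                      [2, 3, 2, 3], [4, 4, 5, 5]])
print(cronbach_alpha(responses))

# Spearman's rank correlation between satisfaction and self-confidence,
# judged significant at p < 0.05 as in the study; scores are invented.
satisfaction = responses.mean(axis=1)
confidence = np.array([4.0, 3.2, 4.8, 2.5, 4.4])
rho, p = spearmanr(satisfaction, confidence)
```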

Keywords: low-fidelity simulation, pre-clinical simulation, student satisfaction, self-confidence

Procedia PDF Downloads 78
46 Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow

Authors: Masood Otarod, Ronald M. Supkowski

Abstract:

This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second-order linear ordinary differential equation with constant coefficients, so that it can be utilized to conduct kinetic studies in packed-bed tubular catalytic reactors over a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption of the Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over a Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model, and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires knowledge of the radial distribution of the axial velocity. This is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. However, ideal plug-flow is impossible to achieve, and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. The factorization theorem is derived from the observation that a cross-section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentrations of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, the factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability, but in the reverse order of the concentrations of mobile-phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile-phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted Ωc, Ωa and Ωr, which are respectively denominated the convection coefficient cofactor, the axial dispersion coefficient cofactor, and the radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa and Ωr are monotonically correlated with the Reynolds number. This is expected to be the case based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
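The abstract does not reproduce the equations, but the reduction it describes can be illustrated schematically. Writing the factorization as C(z, r) = C̄(z)·φ(r), with C̄ the mean radial cup-mixing concentration and φ a dimensionless radial function, a steady-state second-order linear ODE with constant coefficients of the kind described, with the cofactors adjusting the dispersive and convective terms, takes the form

\[
\Omega_a D_a \frac{d^2\bar{C}}{dz^2} \;-\; \Omega_c \,\bar{u}\, \frac{d\bar{C}}{dz} \;-\; k\,\bar{C} \;=\; 0,
\]

where \(\bar{u}\) is the mean axial velocity, \(D_a\) an axial dispersion coefficient and \(k\) a first-order rate constant. This is an illustrative form consistent with the abstract's description, not the authors' exact equation.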

Keywords: factorization, general dispersion model, isotope transient kinetic, partial differential equations

Procedia PDF Downloads 269
45 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study

Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard

Abstract:

The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the results of a study implementing a model for measuring sustainability, intended to inform policy actions for improving sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators. The set of indicators must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure developed as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country, or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic and social aspects related to the evaluation units by means of a set of indicators (criteria). The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions, avoiding aggregation. The separate indices and graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate index of sustainability as the final result of the calculations, which is often cryptic and difficult to interpret. In addition, it is possible to develop a 'back analysis', able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results. This is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
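As an illustration of the TOPSIS step described above, a minimal sketch follows; the matrix, weights and criteria directions are invented placeholders, not GeoUmbriaSUIT's actual plugin code.

```python
import numpy as np

def topsis(X, weights, benefit):
    # X: alternatives x criteria; benefit[j] is True if higher is better.
    V = X / np.linalg.norm(X, axis=0) * weights        # weighted, normalised
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)         # distance to ideal point
    d_minus = np.linalg.norm(V - worst, axis=1)        # distance to worst point
    return d_minus / (d_plus + d_minus)                # closeness, rank descending

# Three evaluation units scored on two indicators (invented numbers).
X = np.array([[0.7, 120.0], [0.5, 80.0], [0.9, 200.0]])
scores = topsis(X, weights=np.array([0.6, 0.4]), benefit=np.array([True, False]))
print(scores)
```

Running the same routine once per dimension (environmental, economic, social indicators) yields the three separate indices the tool reports, which is what lets it avoid a single aggregate index.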

Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development

Procedia PDF Downloads 292
44 Tales of Two Cities: 'Motor City' Detroit and 'King Cotton' Manchester: Transatlantic Transmissions and Transformations, Flows of Communications, Commercial and Cultural Connections

Authors: Dominic Sagar

Abstract:

Manchester, 'King Cotton', the first truly industrial city of the nineteenth century, passed on the baton to Detroit, 'Motor City', the first truly modern city. We explore the tales of the two cities, their rise and fall and subsequent post-industrial decline, their transitions and transformations, whilst paralleling their corresponding commercial, cultural, industrial and even agricultural, artistic and musical transactions and connections. The paper will briefly contextualize how the technologies of the industrial age and the modern age have been instrumental in the development of these cities and other similar cities, including New York. However, the main focus of the study will be the present and, more importantly, the future: how globalisation and the advancement of digital technologies and industries have shaped the cities' development, from Alan Turing and the making of the first programmable computer to the effects of digitalisation and digital initiatives. Manchester now has a thriving creative digital infrastructure of Digilabs, FabLabs, MadLabs and hubs; the study will reference the Smart Project and the Manchester Digital Development Association, whilst paralleling similar digital and creative industrial initiatives now starting to happen in Detroit. The paper will explore other topics, including the need to allow for zones of experimentation, areas to play, think and create, in order to develop and instigate new initiatives and ideas of production, carrying on the tradition of influential inventions throughout the history of these key cities. Other topics will be briefly touched on, such as urban farming, citing the Biospheric Foundation in Manchester and similar projects in Detroit. However, the main thread will focus on the music industries and how they are contributing to the regeneration of cities. Musically and artistically, Manchester and Detroit have been closely connected by the flow and transmission of information and the transfer of ideas, via 'cars and trains and boats and planes' through to the new 'super highway'. From Detroit to Manchester, often via New York and Liverpool and back again, these musical and artistic connections and flows have greatly affected and influenced both cities, and advancements in technology are still connecting them. In summary, these are two hugely important industrial cities that subsequently experienced a massive decline in fortunes, having had their large industrial hearts ripped out, leaving dying industrial carcasses and car crashes of despair, dereliction, desolation and post-industrial wastelands vacated by a massive exodus of the cities' inhabitants. We examine the affinity, similarity and differences between Manchester and Detroit, from their industrial importance to their post-industrial decline and their current transmutations and transformations as cities in transition, contrasting how they have dealt with these problems and how they can learn from each other. We frame these topics with regard to how various communities have shaped these cities and how the creative industries and design (the new cotton/car-manufacturing industries) are reinventing post-industrial cities, and speculate on the future development of these themes in relation to globalisation, digitalisation and how cities can function to develop solutions to communal living in the cities of the future.

Keywords: cultural capital, digital developments, musical initiatives, zones of experimentation

Procedia PDF Downloads 195
43 EEG and DC-Potential Level Changes in the Elderly

Authors: Irina Deputat, Anatoly Gribanov, Yuliya Dzhos, Alexandra Nekhoroshkova, Tatyana Yemelianova, Irina Bolshevidtseva, Irina Deryabina, Yana Kereush, Larisa Startseva, Tatyana Bagretsova, Irina Ikonnikova

Abstract:

In the modern world, the number of elderly people is increasing, and preserving the functional capacity of the organism in the elderly has become very important. During aging, higher cortical functions such as sensation, perception, attention, memory and ideation gradually decline. This is expressed in a reduced rate of information processing, a loss of working memory capacity, and a decreased ability to learn and store new information. Promising directions in the study of the neurophysiological parameters of aging are brain imaging methods: computerized electroencephalography, neuroenergy mapping of the brain, and methods for studying neurodynamic brain processes. The aim of this research was to study features of brain aging in elderly people by means of the electroencephalogram (EEG) and the DC-potential level. We examined 130 people aged 55-74 years who did not have psychiatric disorders or chronic conditions in a decompensation stage. The EEG was recorded with a 128-channel GES-300 system (USA). EEG recordings were collected while the participant sat at rest with eyes closed for 3 minutes. For the quantitative assessment of the EEG we used spectral analysis. The spectrum was analyzed in the delta (0.5–3.5 Hz), theta (3.5–7.0 Hz), alpha-1 (7.0–11.0 Hz), alpha-2 (11.0–13.0 Hz), beta-1 (13.0–16.5 Hz) and beta-2 (16.5–20.0 Hz) ranges. In each frequency range, spectral power was estimated. The 12-channel hardware-software diagnostic complex 'Neuroenergometr-KM' was applied for the registration, processing and analysis of the brain's DC-potential level. The DC-potential level was registered in monopolar leads. It was revealed that the EEG of elderly people shows higher spectral power in the delta (p < 0.01) and theta (p < 0.05) ranges with aging, especially in frontal areas. The comparative analysis showed that elderly people aged 60-64 have higher spectral power in the alpha-2 range in the left frontal and central areas (p < 0.05), and also higher beta-1 values in frontal and parieto-occipital areas (p < 0.05). Study of the distribution of the brain's DC-potential level revealed an increase in total energy consumption across the main areas of the brain. In the frontal leads we registered the lowest values of the DC-potential level; this may indicate a decrease in energy metabolism in this area and difficulties with executive functions. The comparative analysis of the potential difference across the main leads testifies to an uneven lateralization of brain functions in elderly people: the potential difference between the right and left hemispheres indicates a prevalence of left-hemisphere activity. Thus, higher functional activity of the cerebral cortex is characteristic of people in early old age (60-64 years), which points to greater reserve capacity of the central nervous system. By the age of 70, there are age-related changes in cerebral energy exchange and in the level of brain electrogenesis, which reflect a deterioration of the homeostatic mechanisms of self-regulation and of the processing of incoming perceptual data.
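A minimal sketch of the kind of band-wise spectral power computation described above, using Welch's periodogram; the band limits are taken from the abstract, while the sampling rate is an assumed placeholder.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 3.5), "theta": (3.5, 7.0), "alpha1": (7.0, 11.0),
         "alpha2": (11.0, 13.0), "beta1": (13.0, 16.5), "beta2": (16.5, 20.0)}

def band_powers(eeg, fs=250):
    # eeg: one channel of the 3-minute eyes-closed recording;
    # fs: sampling rate in Hz (assumed value, not stated in the abstract).
    freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)
    return {name: float(np.trapz(psd[(freqs >= lo) & (freqs < hi)],
                                 freqs[(freqs >= lo) & (freqs < hi)]))
            for name, (lo, hi) in BANDS.items()}
```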

Keywords: brain, DC-potential level, EEG, elderly people

Procedia PDF Downloads 486
42 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static shape of the input tensor in the SoftMax layer, together with a specified desired soft margin. In effect, this acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain. We penalize those predictions with high divergence from the ground-truth labels: we shorten correct feature vectors and enlarge false-prediction tensors, meaning that we assign more weight to classes that lie close to one another (namely, 'hard labels to learn'). In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer on non-convex problems. Our optimizer works by an alternative gradient-updating procedure with an exponentially weighted moving average function for faster convergence, and exploits a weight-decay method to help drastically reduce the learning rate near optima, so as to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the previous first rank after 10 years), 90.73% on RAF-DB, and a 100% k-fold average accuracy on the CK+ dataset, providing top performance relative to other networks, which require much larger training datasets.
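For intuition, the sketch below implements the static additive-margin variant of a margin-based SoftMax loss, from which the paper's dynamic version departs; it is not the authors' Dynamic Soft-Margin SoftMax, and the margin value is an illustrative placeholder.

```python
import numpy as np

def additive_margin_softmax_loss(logits, labels, margin=0.35):
    # Subtract `margin` from the true-class logit, so correct classes
    # must win by at least `margin` in the log domain; this pushes
    # same-class embeddings together and different classes apart.
    z = logits.astype(float).copy()
    rows = np.arange(len(labels))
    z[rows, labels] -= margin
    z -= z.max(axis=1, keepdims=True)              # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()         # cross-entropy with margin

# Toy batch: 2 samples, 3 classes.
logits = np.array([[2.0, 1.0, 0.1], [0.2, 2.5, 0.3]])
labels = np.array([0, 1])
print(additive_margin_softmax_loss(logits, labels))
```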

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 75
41 Comparison of On-Site Stormwater Detention Policies in Australian and Brazilian Cities

Authors: Pedro P. Drumond, James E. Ball, Priscilla M. Moura, Márcia M. L. P. Coelho

Abstract:

In recent decades, On-site Stormwater Detention (OSD) systems have been implemented in many cities around the world. In Brazil, urban drainage source-control policies were created in the 1990s and were mainly based on OSD. The concept of this technique is to detain the additional stormwater runoff caused by impervious areas, in order to maintain pre-urbanization peak flow levels. In Australia, OSD was first adopted in the early 1980s by the Ku-ring-gai Council in Sydney's northern suburbs and by Wollongong City Council, and many papers on the topic were published at that time. However, source-control techniques related to stormwater quality have since come to the forefront, and OSD has been relegated to the background. In order to evaluate the effectiveness of the current regulations regarding OSD, the existing policies were compared between Australian cities, in a country considered experienced in the use of this technique, and Brazilian cities, where OSD adoption has been increasing. The cities selected for analysis were Wollongong and Belo Horizonte, the first municipalities to adopt OSD in their respective countries, and Sydney and Porto Alegre, cities where these policies are local references. The Australian and Brazilian cities are all located in the Southern Hemisphere, and similar rainfall intensities can be observed, especially in storm bursts longer than 15 minutes. Regarding technical criteria, the Brazilian cities have a site-based approach, analyzing only on-site system drainage. This approach is criticized for not evaluating impacts on urban drainage systems and, in rare cases, may even increase peak flows downstream. The city of Wollongong and most of the Sydney councils adopted a catchment-based approach, requiring the use of Permissible Site Discharge (PSD) and Site Storage Requirement (SSR) values based on the analysis of entire catchments via hydrograph-producing computer models. Based on the premise that OSD should be designed to dampen storms of 100-year Average Recurrence Interval (ARI), the values of PSD and SSR in these four municipalities were compared. In general, the Brazilian cities presented low values of PSD and high values of SSR. This can be explained by the site-based approach and the low runoff coefficient adopted for pre-development conditions. The results clearly show the differences between the approaches and methodologies adopted in OSD design among Brazilian and Australian municipalities, especially with regard to PSD values, which sit at opposite ends of the scale. However, the lack of research regarding the real performance of constructed OSD does not allow a determination of which is best. It is necessary to investigate OSD performance in real situations, assessing the damping provided throughout its useful life, maintenance issues, debris blockage problems, and the parameters related to rainfall-runoff methods. Acknowledgments: The authors wish to thank CNPq - Conselho Nacional de Desenvolvimento Científico e Tecnológico (Chamada Universal – MCTI/CNPq Nº 14/2014), FAPEMIG - Fundação de Amparo à Pesquisa do Estado de Minas Gerais, and CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior for their financial support.
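For orientation, the sizing logic behind PSD and SSR can be sketched with the rational method, a common first-order peak-flow estimate; this is a generic illustration, not the specific procedure of any of the councils compared. With runoff coefficient C, design rainfall intensity i and catchment area A,

\[
Q = C\,i\,A, \qquad \mathrm{PSD} \le Q_{\mathrm{pre}}, \qquad \mathrm{SSR} \approx \int_0^{t_d} \big( Q_{\mathrm{post}}(t) - \mathrm{PSD} \big)\, dt,
\]

i.e. the outlet is throttled to at most the pre-development peak, and the storage must hold the post-development excess over the storm duration \(t_d\). A low assumed pre-development runoff coefficient lowers \(Q_{\mathrm{pre}}\), hence the low PSD and high SSR reported for the Brazilian cities.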

Keywords: on-site stormwater detention, source control, stormwater, urban drainage

Procedia PDF Downloads 181
40 Providing Leadership in Nigerian University Education Research Enterprise: The Imperative of Research Ethics

Authors: O. O. Oku, K. S. Jerry-Alagbaoso

Abstract:

It is universally acknowledged that the primary function of universities is the generation and dissemination of knowledge. This mission is pursued through the research component of the university programme, especially at the postgraduate level. Senior academic staff teach, supervise and provide general academic leadership to postgraduate students, who are expected to carry out research leading to the presentation of a dissertation as a requirement for the award of a doctoral degree in their various disciplines. Carrying out the research enterprise involves a great deal of collaboration among individuals and communities. The need to safeguard the interests of everyone involved in the enterprise makes the development of ethical standards in research imperative. Ensuring the development and effective application of such ethical standards falls within the leadership role of vice-chancellors, deans of postgraduate schools/faculties, heads of departments and supervisors. It is the relevance and application of such ethical standards in Nigerian university research efforts that this study discussed. The study adopted a descriptive research design. A researcher-made 4-point rating scale was used to elicit information from postgraduate dissertation supervisors sampled from one university in each of the six geopolitical zones in Nigeria, using the purposive sampling technique. The data collected were analysed using mean scores and standard deviations. The findings of the study include, among others, that there are several cases of unethical practices by Ph.D. dissertation students in Nigerian universities. Prominent among these are duplicating research topics, making unauthorized copies of data, papers or computer programmes, failing to acknowledge the contributions of relevant people and authors, and rigging an experiment to pre-empt the result, among others. Some of the causes of these unethical practices, according to the respondents, include inadequate funding of universities resulting in inadequate remuneration for university teachers, inadequate equipment and infrastructure, poor supervision of Ph.D. students, poverty on the side of the student researchers, and the non-application of sanctions on violators. Improved funding of the Nigerian university system with emphasis on both staff and student research efforts, admitting academically oriented students into the Ph.D. programme, and ensuring the application of appropriate sanctions in cases of unethical conduct in research featured prominently among the needed leadership imperatives. Based on the findings of the study, the researchers recommend the development of university research policies that are closely tied to each university's strategic plan. Such a plan should explain the research focus that will attract more funding and direct students' interest towards it without violating the principle of academic freedom. The plan should also incorporate the establishment of a research administration office to provide the necessary link between students and funding agencies, and to organise training for supervisors on the leadership activities expected of them while educating students on the processes involved in carrying out a qualitative and acceptable research study. Such exercises should include the ethical principles and guidelines that cover all parts of research, from the research topic through the literature review to the design and the truthful reporting of results.

Keywords: academic leadership, ethical standards, research stakeholders, research enterprise

Procedia PDF Downloads 244
39 A Vision-Based Early Warning System to Prevent Elephant-Train Collisions

Authors: Shanaka Gunasekara, Maleen Jayasuriya, Nalin Harischandra, Lilantha Samaranayake, Gamini Dissanayake

Abstract:

One serious facet of the worsening human-elephant conflict (HEC) in nations such as Sri Lanka involves elephant-train collisions. Endangered Asian elephants are maimed or killed in such accidents, which also often result in orphaned or disabled elephants, contributing to the phenomenon of lone elephants. These lone elephants are found to be more likely to attack villages and to show aggressive behaviour, which further exacerbates the overall HEC. Furthermore, railway services incur significant financial losses and disruptions to services annually due to such accidents. Most elephant-train collisions occur due to a lack of adequate reaction time. This is due to the significant stopping distances required by trains, as full braking force needs to be avoided to minimise the risk of derailment. Thus, poor driver visibility at sharp turns, nighttime operation, and poor weather conditions are often contributing factors to this problem. Initial investigations also indicate that most collisions occur in localised 'hotspots' where elephant pathways/corridors intersect with railway tracks that border grazing land and watering holes. Taking these factors into consideration, this work proposes leveraging recent developments in Convolutional Neural Network (CNN) technology to detect elephants using an RGB/infrared-capable camera around known hotspots along the railway track. The CNN was trained using a curated dataset of elephants collected on field visits to elephant sanctuaries and wildlife parks in Sri Lanka. With this vision-based detection system at its core, a prototype unit of an early warning system was designed and tested. This weatherised and waterproofed unit consists of a Reolink security camera, which provides a wide field of view and range, an Nvidia Jetson Xavier computing unit, a rechargeable battery, and a solar panel for self-sufficient functioning. The prototype unit was designed to be a low-cost, low-power and small-footprint device that can be mounted on infrastructure such as poles or trees. If an elephant is detected, an early warning message is communicated to the train driver using the GSM network. A mobile app for this purpose was also designed to ensure that the warning is clearly communicated. A centralized control station manages and communicates all information through the train station network to ensure coordination among the important stakeholders. Initial results indicate that detection accuracy is sufficient under varying lighting conditions, provided that comprehensive training datasets representing a wide range of challenging conditions are available. The overall hardware prototype was shown to be robust and reliable. We envision that a network of such units may help reduce the problem of elephant-train collisions and has the potential to act as an important surveillance mechanism in dealing with the broader issue of human-elephant conflict.
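The operational flow of such a unit can be sketched as follows; `detect_elephants`, `send_gsm_alert` and the camera interface are hypothetical placeholders standing in for the trained CNN inference and the GSM uplink described above, not the authors' code.

```python
import time

def detect_elephants(frame):
    # Placeholder: run the trained CNN on the RGB/IR frame and
    # return True if an elephant is detected.
    return False

def send_gsm_alert(message):
    # Placeholder: push the early-warning message to the train
    # driver's mobile app over the GSM network.
    print(message)

def monitor(camera, hotspot_id, cooldown_s=60.0):
    last_alert = -cooldown_s
    while True:
        frame = camera.read()                        # camera feed
        if detect_elephants(frame) and time.time() - last_alert > cooldown_s:
            send_gsm_alert(f"Elephant detected near hotspot {hotspot_id}")
            last_alert = time.time()                 # rate-limit repeat alerts
```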

Keywords: computer vision, deep learning, human-elephant conflict, wildlife early warning technology

Procedia PDF Downloads 226
38 Hydraulic Headloss in Plastic Drainage Pipes at Full and Partially Full Flow

Authors: Velitchko G. Tzatchkov, Petronilo E. Cortes-Mejia, J. Manuel Rodriguez-Varela, Jesus Figueroa-Vazquez

Abstract:

Hydraulic headloss, expressed by the values of the friction factor f and Manning's coefficient n, is an important parameter in designing drainage pipes. Their values are normally taken from manufacturer recommendations, often without sufficient experimental support. To our knowledge, there is currently no standard procedure for hydraulically testing such pipes. As a result of research carried out at the Mexican Institute of Water Technology, a laboratory testing procedure was proposed and applied to 6- and 12-inch diameter polyvinyl chloride (PVC) and high-density dual-wall polyethylene (HDPE) drainage pipes. While the PVC pipe is characterized by naturally smooth interior and exterior walls, the dual-wall HDPE pipe has a corrugated exterior wall and, although considered smooth, a slightly wavy interior wall. The pipes were tested at full and partially full pipe flow conditions. The tests for full pipe flow were carried out on a 31.47 m long pipe at flow velocities between 0.11 and 4.61 m/s. Water was supplied by gravity from a 10 m-high tank in some of the tests, and from a 3.20 m-high tank in the rest. Pressure was measured independently with piezometer readings and pressure transducers. The flow rate was measured by an ultrasonic meter. For partially full pipe flow, the pipe was placed inside an existing 49.63 m long zero-slope (horizontal) channel. The flow depth was measured by piezometers located along the pipe, for flow rates between 2.84 and 35.65 L/s measured by a rectangular weir. The observed flow profiles were then compared to computer-generated theoretical gradually varied flow profiles for different Manning's n values. It was found that Manning's n, which is normally assumed constant for a given pipe material, is in fact dependent on flow velocity and pipe diameter for full pipe flow, and on flow depth for partially full pipe flow. Contrary to the expected higher values of n and f for the HDPE pipe, virtually the same values were obtained for the smooth-interior-wall PVC pipe and the slightly wavy-interior-wall HDPE pipe. The explanation for this fact was found in Henry Morris' theory for smooth turbulent conduit flow over isolated roughness elements. Following Morris, three categories of flow regime are possible in a rough conduit: isolated-roughness (or semi-smooth turbulent) flow, wake-interference (or hyper-turbulent) flow, and skimming (or quasi-smooth) flow. Isolated-roughness flow is characterized by friction-drag turbulence over the wall between the roughness elements, with independent vortex generation and dissipation around each roughness element. In this regime, the wake and vortex generation zones at each element develop and dissipate before reaching the next element. The longitudinal spacing of the roughness elements and their height are important influencing factors. Given the slightly wavy form of the HDPE pipe's interior wall, the flow in this type of pipe belongs to this category. Based on that theory, an equation for the hydraulic friction factor was obtained. The obtained coefficient values will be used in the Mexican design standards.
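For reference, the two standard headloss relations whose coefficients f and n the study calibrates are the Darcy–Weisbach and Manning equations (SI units):

\[
h_f = f\,\frac{L}{D}\,\frac{V^2}{2g}, \qquad V = \frac{1}{n}\,R^{2/3}\,S^{1/2},
\]

where \(h_f\) is the headloss over length \(L\) of a pipe of diameter \(D\), \(V\) the mean velocity, \(g\) the gravitational acceleration, \(R\) the hydraulic radius and \(S\) the friction slope.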

Keywords: drainage plastic pipes, hydraulic headloss, hydraulic friction factor, Manning’s n

Procedia PDF Downloads 283
37 Modelling Pest Immigration into Rape Seed Crops under Past and Future Climate Conditions

Authors: M. Eickermann, F. Ronellenfitsch, J. Junk

Abstract:

Oilseed rape (Brassica napus L.) is one of the most important crops throughout Europe, but pressure from pest insects and pathogens can reduce yields substantially. Consequently, pesticide use in this crop is exceptionally high. In addition, climate change effects can interact with the phenology of the host plant and its pests and can put additional pressure on the yield. Next to the pollen beetle, Meligethes aeneus L., the seed-damaging pest insects, the cabbage seed weevil (Ceutorhynchus obstrictus Marsham) and the brassica pod midge (Dasineura brassicae Winn.), have the main economic impact on the yield. While females of C. obstrictus infest oilseed rape by depositing single eggs into young pods, females of D. brassicae use this local damage to the pod for their own oviposition, depositing batches of 20-30 eggs. Without a prior infestation by the cabbage seed weevil, a significant yield reduction by the brassica pod midge can be ruled out. Based on long-term, multi-site field experiments, a comprehensive dataset on pest migration into crops of B. napus has been built up over the last ten years. Five observational test sites, situated in different climatic regions of Luxembourg, were monitored twice a week from February until the end of May. Pest migration was recorded using yellow water pan-traps. Caught insects were identified in the laboratory according to species-specific identification keys. By combining the pest observations with corresponding meteorological observations, it was possible to set up models to predict the migration periods of the seed-damaging pests, as sketched below. This approach is the basis for a computer-based decision support tool to assist the farmer in identifying the appropriate time point for pesticide application. In addition, the algorithms derived for that decision support tool can be combined with climate change projections in order to assess the future potential threat posed by the seed-damaging pest species. Regional climate change effects for Luxembourg have been intensively studied in recent years. Significant changes towards wetter winters and drier summers, as well as a prolongation of the vegetation period mainly caused by higher spring temperatures, have been reported. We used the COSMO-CLM model to perform a time-slice experiment for Luxembourg with a spatial resolution of 1.3 km. Three ten-year time slices were calculated: the reference period (1991-2000), the near future (2041-2050) and the far future (2091-2100). Our results project a significant shift of pest migration towards an earlier onset in the year, as well as a prolongation of the possible migration period. Because D. brassicae depends on the prior oviposition activity of C. obstrictus to infest its host plant successfully, the future dependencies of both pest species will be assessed. Based on this approach, the future risk potential of both seed-damaging pests is calculated and their status as pest species is characterized.
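The abstract does not disclose the form of the migration models, but decision-support tools of this kind often rest on thermal-time rules; a generic sketch of such a rule follows, with the base temperature and degree-day threshold as purely hypothetical values, not the study's fitted parameters.

```python
def predicted_onset(daily_mean_temps, base_c=5.0, threshold_dd=100.0):
    # Accumulate growing degree-days above base_c from 1 January and
    # report the first day-of-year on which the threshold is crossed,
    # taken as the predicted onset of pest migration.
    accumulated = 0.0
    for day, temp in enumerate(daily_mean_temps, start=1):
        accumulated += max(0.0, temp - base_c)
        if accumulated >= threshold_dd:
            return day
    return None    # threshold never reached within the series
```

Driving the same rule with reference-period and projected temperature series is what would shift the predicted onset earlier under the warmer scenarios.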

Keywords: CORDEX projections, decision support tool, Brassica napus, pests

Procedia PDF Downloads 382
36 Evaluation of the Suitability of a Microcapsule-Based System for the Manufacturing of Self-Healing Low-Density Polyethylene

Authors: Małgorzata Golonka, Jadwiga Laska

Abstract:

Among self-healing materials, the most unexplored group is thermoplastic polymers. These polymers are used not only to produce packaging with a relatively short life but also to obtain coatings, insulation, casings, or parts of machines and devices. Due to its exceptional resistance to weather conditions, hydrophobicity, sufficient mechanical strength, and ease of extrusion, polyethylene is used in the production of polymer pipelines and as an insulating layer for steel pipelines. Polyethylene or PE-coated steel pipelines can be used in difficult conditions such as underground or underwater installations. Both installation and use under such conditions are associated with high stresses and, consequently, with the formation of micro-damage in the structure of the material, loss of its integrity and, ultimately, of its applicability. The ideal solution would be to include a self-healing system in the polymer material. In the presented study, the behavior of resin-coated microcapsules in the extrusion process of low-density polyethylene was examined. Microcapsules are a convenient element of a repair system because they can be filled with appropriate reactive substances to ensure the repair process, but the main problem is their durability under processing conditions. Rapeseed oil, which has a relatively high boiling point of 240°C and low volatility, was used as the core material simulating the reactive agents. The capsule shell, which is the key element responsible for mechanical strength, was obtained by in situ polymerising urea-formaldehyde, melamine-urea-formaldehyde or melamine-formaldehyde resin on the surface of oil droplets dispersed in water. The strength of the capsules was compared based on the shell material; in addition, microcapsules with single- and multilayer shells were obtained using different combinations of the chemical composition of the resins. For example, a first layer of appropriate tightness and stiffness was made of melamine-urea-formaldehyde resin, and a second, reinforcing layer of melamine-formaldehyde resin. The size, shape, distribution of capsule diameters and shell thickness were determined using digital optical microscopy and electron microscopy. The efficiency of encapsulation (i.e., the presence of rapeseed oil as the core) and the tightness of the shell were determined by FTIR spectroscopy. The mechanical strength and distribution of the microcapsules in polyethylene were tested by extruding samples of crushed low-density polyethylene mixed with microcapsules in ratios of 1 and 2.5% by weight. The extrusion process was carried out in a mini-extruder at a temperature of 150°C. The capsules obtained had diameters in the range of 70-200 µm. FTIR analysis confirmed the presence of rapeseed oil in both single- and multilayer shell microcapsules. Microscopic observations of cross-sections of the extrudates confirmed the presence of both intact and cracked microcapsules. However, the melamine-formaldehyde resin shells showed higher processing strength than the melamine-urea-formaldehyde and urea-formaldehyde coatings. Capsules with a urea-formaldehyde shell work very well in resin coating systems and cement composites, i.e., under pressureless processing and moulding conditions. The addition of a further melamine-formaldehyde layer to both the melamine-urea-formaldehyde and melamine-formaldehyde resin layers significantly increased the number of microcapsules that survived the extrusion process undamaged. The properties of the multilayer coatings were also determined and compared with each other using computer modelling.

Keywords: self-healing polymers, polyethylene, microcapsules, extrusion

Procedia PDF Downloads 31
35 Artificial Intelligence in Management Simulators

Authors: Nuno Biga

Abstract:

Artificial Intelligence (AI) has the potential to transform management in several impactful ways. It allows machines to interpret information, find patterns in big data, learn from context analysis, optimize operations, make predictions sensitive to each specific situation, and support data-driven decision making. The introduction of an 'artificial brain' into an organization also enables learning through the complex information and data provided by those who train it, namely its users. The 'Assisted-BIGAMES' version of the Accident & Emergency (A&E) simulator introduces the concept of a context-sensitive 'Virtual Assistant' (VA) that provides users with useful suggestions for operations such as: a) relocating workstations in order to shorten travelled distances and minimize the stress of those involved; b) identifying in real time existing bottleneck(s) in the operations system so that it is possible to act upon them quickly; c) identifying resources that should be polyvalent so that the system can be more efficient; d) identifying the specific processes in which it may be advantageous to establish partnerships with other teams; and e) assessing possible solutions based on the suggested KPIs, allowing action monitoring to guide the (re)definition of future strategies. This paper is built on the BIGAMES© simulator and presents the conceptual AI model developed and demonstrated through a pilot project (BIG-AI). Each Virtual Assisted BIGAME is a management simulator developed by the author that guides operational and strategic decision making, providing users with useful information in the form of management recommendations that make it possible to predict the actual outcome of different alternative strategic management actions. The pilot project incorporates results from 12 editions of the BIGAME A&E that took place between 2017 and 2022 at AESE Business School, based on a compilation of data that allows causal relationships to be established between the decisions taken and the results obtained. The systemic analysis and interpretation of data is powered in Assisted-BIGAMES through a computer application called the 'BIGAMES Virtual Assistant' (VA) that players can use during the game. Each participant permanently asks himself which decisions he should make during the game to win the competition. To this end, the role of each team's VA consists of guiding the players to be more effective in their decision making by presenting recommendations based on AI methods. It is important to note that the VA's suggestions for action can be accepted or rejected by the managers of each team, as they gain a better understanding of the issues over time, reflect on good practice, and rely on their own experience, capability and knowledge to support their own decisions. Preliminary results show that the introduction of the VA enables faster learning of the decision-making process. The facilitator, designated the 'Serious Game Controller' (SGC), is responsible for supporting the players with further analysis. The actions recommended by the SGC may differ from or be similar to those previously provided by the VA, ensuring a higher degree of robustness in decision-making. Additionally, all the information should be jointly analyzed and assessed by each player, who is expected to add 'Emotional Intelligence', an essential component absent from the machine learning process.

Keywords: artificial intelligence, gamification, key performance indicators, machine learning, management simulators, serious games, virtual assistant

Procedia PDF Downloads 105
34 Participation of Titanium Influencing the Petrological Assemblage of Mafic Dyke: Salem, South India

Authors: Ayoti Banerjee, Meenakshi Banerjee

Abstract:

The study of metamorphic reaction textures is important in contributing to our understanding of the evolution of metamorphic terranes. Where preserved, they provide information on changes in the P-T conditions during the metamorphic history of the rock, and thus allow us to speculate on the P-T-t evolution of the terrane. Mafic dykes have attracted the attention of petrologists because they act as a window to the mantle. This rock represents a mafic dyke of doleritic composition. It is fine- to medium-grained, with clinopyroxene enclosed by lath-shaped plagioclase grains to form a spectacular ophitic texture. In places, sub-ophitic texture was also observed. Grains of pyroxene and plagioclase show very little deformation, with plagioclase typically showing deformed lamellae, along with a plagioclase-clinopyroxene-phyric granoblastic fabric within a groundmass of feldspar microphenocrysts and Fe–Ti oxides. Both normal and reverse zoning were noted in the plagioclase laths. The clinopyroxene grains contain exsolved phases such as orthopyroxene, plagioclase, magnetite and ilmenite along the cleavage traces, and the orthopyroxene lamellae form granules at the periphery of the clinopyroxene grains. Garnet coronas also develop preferentially around plagioclase at the contact with clinopyroxene, ilmenite or magnetite. Tiny quartz and K-feldspar grains show symplectic intergrowth with garnet in a few places. The quartz produced along with the garnet rims the coronal garnet and the reacting clinopyroxene. Thin amphibole coronas formed along the periphery of deformed plagioclase and clinopyroxene, occurring as patches over the magmatic minerals. The amphibole coronas cannot be assigned to a late magmatic stage and are interpreted as reaction products, being restricted to the contact between clinopyroxene and plagioclase and thus postdating the crystallization of both. The amphibole and garnet do not share grain boundaries anywhere in the rock, pointing towards simultaneous crystallization. Olivine is absent. Spectacular myrmekitic growth of orthoclase and quartz rimming the plagioclase is consistent with the potash metasomatic effects that are also found in other rocks of this region. These textural features are consistent with a phase of fluid-induced metamorphism (retrogression). The appearance of coronal garnet and amphibole exclusive of each other, however, reflects the participation of Ti as the prime control. The presence of Ti as a reactant phase is a must for amphibole-forming reactions, whereas this is not so for garnet-forming reactions, although the reactants, plagioclase and clinopyroxene, are the same in both cases. These findings are well validated by petrographic and textural analysis. In order to obtain balanced chemical reactions that explain the formation of amphibole and garnet in the mafic dyke rocks, a matrix operation technique called Singular Value Decomposition (SVD) was adopted, utilizing the measured chemical compositions of the minerals. The computer program C-Space was used for this purpose, together with the required compositional matrix. The data fed to C-Space were obtained by cation calculation from the oxide percentages measured by EPMA analysis. The garnet-clinopyroxene geothermometer yielded a temperature of 650 degrees Celsius, and the garnet-clinopyroxene-plagioclase and Al-in-amphibole geobarometers yielded a pressure of roughly 7.5 kbar.
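The SVD step can be illustrated in a few lines: null-space vectors of the composition matrix correspond to mass-balanced reactions among the phases. The matrix below is invented for illustration; the study used EPMA-derived cation data in C-Space.

```python
import numpy as np

# Rows = chemical components (e.g. Si, Al, Fe, Mg), columns = phases;
# the entries are illustrative placeholders, not the study's EPMA data.
A = np.array([[1.0, 0.0, 3.0, 2.0],
              [0.0, 2.0, 2.0, 1.0],
              [2.0, 1.0, 0.0, 3.0]])

U, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
# Rows of Vt beyond the rank span the null space: any such vector x
# satisfies A @ x ~ 0, i.e. it combines the phases into a mass-balanced
# reaction (negative coefficients = reactants, positive = products).
balanced_reactions = Vt[rank:]
print(balanced_reactions)
```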

Keywords: corona, dolerite, geothermometer, metasomatism, metamorphic reaction texture, retrogression

Procedia PDF Downloads 280
33 Using AI Based Software as an Assessment Aid for University Engineering Assignments

Authors: Waleed Al-Nuaimy, Luke Anastassiou, Manjinder Kainth

Abstract:

As the process of teaching has evolved with the advent of new technologies over the ages, so has the process of learning. Educators have perpetually found themselves on the lookout for new technology-enhanced methods of teaching in order to increase learning efficiency and decrease ever-expanding workloads. Shortly after the invention of the internet, web-based learning started to pick up in the late 1990s, and educators quickly found that the process of providing learning material and marking assignments could change thanks to the connectivity offered by the internet. With the creation of early web-based virtual learning environments (VLEs) such as SPIDER and Blackboard, it soon became apparent that VLEs resulted in higher reported computer self-efficacy among students, but at the cost of students being less satisfied with the learning process. It may be argued that the impersonal nature of VLEs and their limited functionality were the leading factors contributing to this reported dissatisfaction. To this day, often faced with the prospect of assigning colossal engineering cohorts their homework and assessments, educators may frequently choose optimally curated assessment formats, such as multiple-choice quizzes and numerical answer input boxes, so that the automated grading software embedded in the VLEs can save time and mark student submissions instantaneously. A crucial skill that is meant to be learnt during most science and engineering undergraduate degrees is gaining confidence in using, solving and deriving mathematical equations. Equations underpin a significant portion of the topics taught in many STEM subjects, and it is in homework assignments and assessments that this understanding is tested. It is not hard to see that this can become challenging if the majority of assignment formats students engage with are multiple-choice questions, leaving educators with a reduced perspective on their students' ability to manipulate equations. Artificial intelligence (AI) has in recent times been shown to be an important consideration for many technologies. In our paper, we explore the use of new AI-based software designed to work in conjunction with current VLEs. Drawing on our experience with the software, we discuss its potential to solve a selection of problems, ranging from impersonality to the reduction of educator workloads by speeding up the marking process. We examine the software's potential to increase learning efficiency through features which claim to allow more customized and higher-quality feedback. We investigate the usability of features allowing students to input equation derivations in a range of different forms, and discuss relevant observations associated with these input methods. Furthermore, we make ethical considerations and discuss potential drawbacks of the software, including the extent to which optical character recognition (OCR) could play a part in the perpetuation of errors and create disagreements between student intent and their submitted assignment answers. It is the intention of the authors that this study will be useful as an example of the implementation of AI in a practical assessment scenario and as a springboard for further considerations and studies that utilise AI in the setting and marking of science and engineering assignments.

Keywords: engineering education, assessment, artificial intelligence, optical character recognition (OCR)

Procedia PDF Downloads 123
32 Simultech - Innovative Country-Wide Ultrasound Training Center

Authors: Yael Rieder, Yael Gilboa, S. O. Adva, Efrat Halevi, Ronnie Tepper

Abstract:

Background: Operation of ultrasound equipment is a core skill for many clinical specialties. As part of the training program at Simultech, a simulation center for Ob/Gyn at the Meir Medical Center, Israel, teaching the operation of ultrasound equipment requires dealing with misunderstandings of spatial and 3D orientation, failure of the operator to hold the transducer correctly, and a limited ability to evaluate the data on the screen. We have developed a platform intended to endow physicians and sonographers with the clinical and operational skills of obstetric ultrasound. Simultech's simulations focus on medical knowledge, risk management, technology operations and physician-patient communication, and encompass extreme work conditions. Setup: Each week, between eight and ten of the eight hundred and fifty physicians and sonographers of the Clalit health services, drawn from seven hospitals and eight community centers across Israel, participate in individual Ob/Gyn training sessions. These include Ob/Gyn specialists, experts, interns, and sonographers. Innovative teaching and training methodologies: The six-hour training program includes: (1) An educational computer program that challenges trainees to deal with medical questions based upon ultrasound pictures and films. (2) Sophisticated hands-on simulators that challenge the trainees to practice a correct grip of the transducer, elucidate pathology, and practice daily tasks such as biometric measurements and the analysis of sonographic data. (3) Participation in a video-taped simulation focused on physician-patient communication, in which the physician is required to diagnose the clinical condition of a hired actress based on the data she provides and by evaluating the assigned ultrasound films accordingly. Giving ‘bad news’ to the patient may put the physician in a stressful situation that must be properly managed. (4) Feedback at the end of each phase, provided by a designated trainer (not a physician) who is specially qualified by senior Ob/Gyn specialists. (5) A group exercise in which the trainer presents a medico-legal case in order to encourage the participants to use their own experience and knowledge in a productive ‘brainstorming’ session; medical cases are presented and analyzed by the participants together with the trainer's feedback. Findings: (1) The training methods and content that Simultech provides allow trainees to review their medical and communication skills. (2) Simultech training sessions expose physicians to both basic and new, up-to-date cases, refreshing and expanding the trainee's knowledge. (3) Practicing on advanced simulators enables trainees to understand the sonographic space and to implement the basic principles of ultrasound. (4) Communication simulations were found to be beneficial for trainees who were unaware of their interpersonal skills; the trainer's feedback, supported by the recorded simulation, allows the trainee to draw conclusions about their performance. Conclusion: Simultech was found to benefit physicians at all levels of clinical expertise who deal with ultrasound. A break from the daily routine, together with attendance at a neutral educational center, can vastly improve performance and outlook.

Keywords: medical training, simulations, ultrasound, Simultech

Procedia PDF Downloads 280
31 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education

Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant

Abstract:

In 21st century higher education (HE), the diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, instead of their social or cultural background, is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focuses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology). Computer-, video- and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany, or during their practical training phase, traineeship or occupation. Focusing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students’ skills and abilities at the beginning and end of their studies, as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on these research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities can be created to support students' learning development, promote their individual potential and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.

Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs

Procedia PDF Downloads 142
30 Using Participatory Action Research with Episodic Volunteers: Learning from Urban Agriculture Initiatives

Authors: Rebecca Laycock

Abstract:

Many Urban Agriculture (UA) initiatives, including community/allotment gardens, Community Supported Agriculture, and community/social farms, depend on volunteers. However, initiatives supported or run by volunteers often face a high turnover of labour as a result of the involvement of episodic volunteers (a term describing ad hoc, one-time, and seasonal volunteers), leading to challenges with maintaining project continuity and retaining skills/knowledge within the initiative. This is a notable challenge given that food growing is a knowledge-intensive activity where the fruits of labour appear months or sometimes years after investment. Participatory Action Research (PAR) is increasingly advocated for in the field of UA as a solution-oriented approach to research, providing concrete results in addition to advancing theory. PAR is a cyclical methodological approach involving researchers and stakeholders collaboratively 'identifying' and 'theorising' an issue, 'planning' an action to address said issue, 'taking action', and 'reflecting' on the process. Through iterative cycles and prolonged engagement, the theory is developed and actions become better tailored to the issue. The demand for PAR in UA research means that understanding how to use PAR with episodic volunteers is of critical importance. The aim of this paper is to explore (1) the challenges of doing PAR in UA initiatives with episodic volunteers, and (2) how PAR can be harnessed to advance the sustainable development of UA through theoretically informed action. A 2.5-year qualitative PAR study of three English student-led food-growing initiatives, serving as case studies, took place between 2014 and 2016. University UA initiatives were chosen as exemplars because most of their volunteers are episodic. Data were collected through 13 interviews, 6 workshops, and a research diary. The results were thematically analysed through eclectic coding using Computer-Assisted Qualitative Data Analysis Software (NVivo). It was found that the challenges of doing PAR with transient participants were (1) a superficial understanding of issues by volunteers because of short-term engagement, resulting in difficulties 'identifying'/'theorising' issues to research; (2) difficulties implementing 'actions', given that those involved in the 'planning' phase had often left by the 'action' phase; (3) a lack of capacity of participants to engage in research, given the ongoing challenge of maintaining participation; and (4) the fact that the introduction of the researcher acted as an 'intervention'. The involvement of a long-term stakeholder (the researcher) changed the group dynamics, prompted critical reflections that had not previously taken place, and improved continuity. This posed challenges for providing a genuine understanding of episodic volunteering in PAR initiatives, and also challenged the notion of what constitutes an 'intervention' or 'action' in PAR. It is recommended that researchers working with episodic volunteers using PAR should (1) adopt a first-person approach, inquiring into the researcher's own experience, to enable depth in theoretical analysis and manage the potentially superficial understandings of short-term participants; and (2) establish safety mechanisms to address the potential for the research to impose artificial project continuity and knowledge retention that will end when the research does. Through these means, we can more effectively use PAR to conduct solution-oriented research about UA.

Keywords: community garden, continuity, first-person research, higher education, knowledge retention, project management, transience, university

Procedia PDF Downloads 251
29 Development and Implementation of an "Electric Island" Monitoring Infrastructure for Promoting Energy Efficiency in Schools

Authors: Vladislav Grigorovitch, Marina Grigorovitch, David Pearlmutter, Erez Gal

Abstract:

The concept of an “electric island” involves achieving a balance between the self-generation capacity of each educational institution and its energy consumption demand. A photovoltaic (PV) solar system installed on the roofs of educational buildings is a common way to absorb the available solar energy and generate electricity for self-consumption, and even for return to the grid. The main objective of this research is to develop and implement an “electric island” monitoring infrastructure for promoting energy efficiency in educational buildings. A microscale monitoring methodology will be developed to provide a platform for estimating energy consumption performance classified by rooms and subspaces, rather than the more common macroscale monitoring of the whole building. The monitoring platform will be established on the experimental sites, enabling an estimation and further analysis of a variety of environmental and physical conditions. For each building, a separate measurement configuration will be applied, taking into account the specific requirements, restrictions, location and infrastructure issues. The direct results of the measurements will be analyzed to provide a deeper understanding of the impact of environmental conditions and sustainable construction standards, not only on the energy demand of public buildings but also on the energy consumption habits of the children that study in those schools and of the educational and administrative staff responsible for providing thermal comfort conditions and a healthy studying atmosphere for the children. The monitoring methodology developed in this research provides online access to real-time monitoring data (IoT) from any mobile phone or computer by simply browsing the dedicated website, offering powerful tools for policy makers to make better decisions while developing PV production infrastructure to achieve “electric islands” in educational buildings. A detailed measurement configuration was technically designed based on the specific conditions and restrictions of each of the pilot buildings. The monitoring and analysis methodology covers a large variety of environmental parameters inside and outside the schools to investigate the impact of environmental conditions both on the energy performance of the school and on the educational abilities of the children. Indoor measurements are mandatory to acquire the energy consumption data, temperature, humidity, carbon dioxide and other air quality conditions in different parts of the building. In addition, we aim to study the users' awareness of energy considerations and thus the impact on their energy consumption habits. The monitoring of outdoor conditions is vital for the proper design of the off-grid energy supply system and the validation of its sufficient capacity. The suggested outcomes of this research include: 1. designing both experimental sites to have PV production and storage capabilities; 2. developing an online information feedback platform that provides consumer-dedicated information to academic researchers, municipality officials, educational staff and students; 3. designing an environmental work path for educational staff regarding optimal conditions and efficient hours for operating air conditioning, natural ventilation, closing of blinds, etc.
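
As a purely illustrative sketch of the microscale balance the abstract describes (not code from the project; the column names, readings and 15-minute interval are invented), the "electric island" condition can be checked by aggregating per-room generation and consumption logs:

```python
# Illustrative sketch: does the site cover its own demand on a given day?
import pandas as pd

# Hypothetical 15-minute readings: kWh generated by the roof PV system
# and kWh consumed, tagged by room for microscale analysis.
log = pd.DataFrame({
    "timestamp": pd.date_range("2021-06-01 08:00", periods=4, freq="15min"),
    "room": ["classroom_1", "classroom_1", "lab", "lab"],
    "pv_generation_kwh": [1.2, 1.4, 1.3, 1.1],
    "consumption_kwh": [0.9, 1.6, 1.0, 1.5],
})

daily = (log.set_index("timestamp")
            .resample("D")[["pv_generation_kwh", "consumption_kwh"]]
            .sum())
# A self-sufficiency ratio of 1.0 or more means the site balanced its demand
daily["self_sufficiency"] = daily.pv_generation_kwh / daily.consumption_kwh
print(daily)
```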

Keywords: sustainability, electric island, IoT, smart building

Procedia PDF Downloads 180
28 Calpoly Autonomous Transportation Experience: Software for Driverless Vehicle Operating on Campus

Authors: F. Tang, S. Boskovich, A. Raheja, Z. Aliyazicioglu, S. Bhandari, N. Tsuchiya

Abstract:

Calpoly Autonomous Transportation Experience (CATE) is a driverless vehicle that we are developing to provide safe, accessible, and efficient transportation of passengers throughout the Cal Poly Pomona campus for events such as orientation tours. Unlike other self-driving vehicles, which are usually developed to operate alongside other vehicles on road networks, CATE will operate exclusively on the walk-paths of the campus (potentially narrow passages) with pedestrians traveling from multiple locations. Safety becomes paramount as CATE operates within the same environment as pedestrians. As driverless vehicles assume greater roles in today’s transportation, this project will contribute to autonomous driving with pedestrian traffic in a highly dynamic environment. The CATE project requires significant interdisciplinary work. Researchers from mechanical engineering, electrical engineering and computer science are working together to attack the problem from different perspectives (hardware, software and system). In this abstract, we describe the software aspects of the project, with a focus on the requirements and the major components. CATE shall provide a GUI for the average user to interact with the car and access its available functionalities, such as selecting a destination from any origin on campus. We have developed an interface that provides an aerial view of the campus map, the current car location, routes, and the goal location. Users can interact with CATE through audio or manual inputs. CATE shall plan routes from the origin to the selected destination for the vehicle to travel. We will use an existing aerial map of the campus and convert it to a spatial graph configuration where the vertices represent landmarks and the edges represent paths that the car should follow with some designated behaviors (such as staying on the right side of the lane or following an edge). Graph search algorithms such as A* will be implemented as the default path planning algorithm, and D* Lite will be explored to efficiently recompute the path when there are changes to the map. CATE shall avoid static obstacles and walking pedestrians within some safe distance. Unlike traveling along traditional roadways, CATE’s route directly coexists with pedestrians. To ensure the safety of pedestrians, we will use sensor fusion techniques that combine data from both lidar and stereo vision for obstacle avoidance, while also allowing CATE to operate along its intended route. We will also build prediction models for pedestrian traffic patterns. CATE shall improve its localization and work in GPS-denied situations. CATE relies on its GPS for its current location, which has a precision of a few meters. We have implemented an Unscented Kalman Filter (UKF) that fuses data from multiple sensors (such as GPS, IMU and odometry) in order to increase the confidence of localization. We also noticed that GPS signals can easily become degraded or blocked on campus due to high-rise buildings or trees; the UKF also helps here by generating a better state estimate. In summary, CATE will provide an on-campus transportation experience that coexists with dynamic pedestrian traffic. In future work, we will extend it to multi-vehicle scenarios.
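 
Since A* is named as the default path planner over a spatial graph of landmarks, a minimal sketch of that idea follows. The landmarks, coordinates and edge lengths below are invented for illustration, not taken from the CATE campus map:

```python
# A* over a landmark graph, with straight-line distance as the heuristic.
import heapq, math

nodes = {  # hypothetical landmark coordinates (x, y) in metres
    "gate": (0, 0), "library": (120, 40), "fountain": (60, 90), "hall": (180, 100),
}
edges = {  # undirected walk-path segments with lengths in metres
    ("gate", "fountain"): 110, ("gate", "library"): 130,
    ("fountain", "hall"): 125, ("library", "hall"): 85,
}
graph = {}
for (a, b), w in edges.items():
    graph.setdefault(a, []).append((b, w))
    graph.setdefault(b, []).append((a, w))

def heuristic(a, b):
    # Straight-line distance never overestimates path length, so A* stays optimal
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return math.hypot(x1 - x2, y1 - y2)

def a_star(start, goal):
    frontier = [(heuristic(start, goal), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, w in graph[node]:
            ng = g + w
            if ng < best.get(nbr, float("inf")):
                best[nbr] = ng
                heapq.heappush(
                    frontier, (ng + heuristic(nbr, goal), ng, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("gate", "hall"))  # (['gate', 'library', 'hall'], 215)
```

D* Lite, mentioned in the abstract, addresses the follow-on problem: repairing such a path incrementally when an edge changes, instead of re-running the search from scratch.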

Keywords: driverless vehicle, path planning, sensor fusion, state estimate

Procedia PDF Downloads 147
27 Overview of Research Contexts about XR Technologies in Architectural Practice

Authors: Adeline Stals

Abstract:

The transformation of architectural design practices has been underway for almost forty years due to the development and democratization of computer technology. New and more efficient tools are constantly being proposed to architects, amplifying a technological wave that sometimes stimulates them and sometimes overwhelms them, depending essentially on their digital culture and the context (socio-economic, structural, organizational) in which they work on a daily basis. Our focus is on the VR, AR, and MR technologies dedicated to architecture. The commercialization of affordable headsets such as the Oculus Rift and the HTC Vive, or lower-tech options such as the Google Cardboard, has made these technologies far more accessible. In that regard, researchers report growing interest in these tools among architects, given the new perspectives they open up in terms of workflow, representation, collaboration, and client involvement. However, studies rarely mention the influence of the sample studied on the results. Our research provides an overview of VR, AR, and MR research in a corpus of papers selected from conferences and journals. A closer look at the samples of these research projects highlights the necessity of taking the context of studies into consideration in order to develop tools truly dedicated to the real practices of specific architect profiles. This literature review formalizes milestones for future challenges to address. The methodology applied is based on a systematic review of two sources of publications. The first is the Cumincad database, which groups publications from conferences exclusively about digital technology in architecture. The second part of the corpus is based on journal publications. Journals were selected considering their ranking on Scimago: among the journals in the predefined 'architecture' category and in Quartile 1 for 2018 (the last update when consulted), we retained the ones related to the architectural design process: Design Studies, CoDesign, Architectural Science Review, Frontiers of Architectural Research and Archnet-IJAR. Besides those journals, IJAC, not classified in the 'architecture' category, was selected by the author for its relevance to architecture and computing. For all requests, the search terms were 'virtual reality', 'augmented reality', and 'mixed reality' in the title and/or keywords, for papers published between 2015 and 2019 (inclusive). This time frame was defined considering the fast evolution of these technologies in the past few years. Accordingly, the systematic review covers 202 publications. The literature review on studies about XR technologies establishes the state of the art of the current situation. It highlights that studies are mostly based on experimental contexts with controlled conditions (pedagogical, e.g.) or on practices established in large architectural offices of international renown. However, few studies focus on the strategies and practices developed by offices of smaller size, which represent the largest part of the market. Indeed, a European survey studying the architectural profession in Europe in 2018 reveals that 99% of offices are composed of fewer than ten people, and 71% of only one person. The study also showed that the number of medium-sized offices is continuously decreasing in favour of smaller structures. As a result, a frontier seems to remain between the worlds of research and practice, especially for the majority of small architectural practices making modest use of technology. This paper constitutes a reference for the next step of the research and for further studies worldwide by facilitating their contextualization.

Keywords: architectural design, literature review, SME, XR technologies

Procedia PDF Downloads 111
26 Digitization and Morphometric Characterization of Botanical Collection of Indian Arid Zones as Informatics Initiatives Addressing Conservation Issues in Climate Change Scenario

Authors: Dipankar Saha, J. P. Singh, C. B. Pandey

Abstract:

The Indian Thar desert, the seventh largest in the world and the country's main hot sand desert, occupies nearly 385,000 km² (about 9% of the area of the country) and harbours a flora of 682 species (63 introduced) belonging to 352 genera and 87 families. The degree of endemism of plant species in the Thar desert is 6.4 percent, relatively higher than that of the Sahara desert, which is very significant for conservationists to consider. The advent and development of computer technology for digitization and database management, coupled with the rapidly increasing importance of biodiversity conservation, resulted in the emergence of biodiversity informatics as a discipline of basic science with multiple applications. Aichi Target 19, an outcome of the Convention on Biological Diversity (CBD), specifically mandates the development of an advanced and shared biodiversity knowledge base. Information on species distributions in space is the crux of effective management of biodiversity in the rapidly changing world. The efficiency of biodiversity management is being increased rapidly by various stakeholders, such as researchers, policymakers, and funding agencies, through the knowledge and application of biodiversity informatics. Herbarium specimens are a vital repository for biodiversity conservation, especially in a climate change scenario; the digitization process usually aims to improve access and preserve delicate specimens, and in doing so creates large sets of images as part of the existing repository, serving as an arid plant information facility for long-term future usage. Leaf characters are important for describing taxa and distinguishing between them, and they can be measured from herbarium specimens as well. As part of this activity, laminar characterization (leaves being among the most important characters in assessing climate change impact) initially resulted in the classification of more than a thousand collections belonging to ten families, such as Acanthaceae, Aizoaceae, Amaranthaceae, Asclepiadaceae, Anacardiaceae, Apocynaceae, Asteraceae, Aristolochiaceae, Burseraceae and Bignoniaceae. Taxonomic diversity indices have also been worked out, this being one of the important domains of biodiversity informatics approaches. The digitization process also encompasses workflows that incorporate automated systems, enabling us to expand and speed up digitization. These workflows are built on a modular system with the potential to be scaled up; they are being developed with a geo-referencing tool and additional quality-control elements, finally placing specimen images and data into a fully searchable, web-accessible database. Our effort in this paper is to elucidate the role of biodiversity informatics and the present effort of database development for the existing botanical collection in the institute repository. This effort is expected to form part of various global initiatives toward an effective biodiversity information facility. It will enable access to plant biodiversity data that are fit for use by scientists and decision makers working on biodiversity conservation and sustainable development in the region and in iso-climatic parts of the world.
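
The abstract does not specify which taxonomic diversity indices were worked out; as a hedged illustration, one widely used measure, the Shannon index, can be computed directly from specimen counts per family (the counts below are hypothetical):

```python
# Shannon diversity index H' = -sum(p_i * ln(p_i)) over taxa with count > 0
import math

def shannon_index(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical specimen counts for a few of the families named above
herbarium_counts = {"Acanthaceae": 120, "Amaranthaceae": 85, "Asteraceae": 240}
print(round(shannon_index(herbarium_counts.values()), 3))
```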

Keywords: biodiversity informatics, climate change, digitization, herbarium, laminar characters, web accessible interface

Procedia PDF Downloads 231
25 A Review on Cyberchondria Based on Bibliometric Analysis

Authors: Xiaoqing Peng, Aijing Luo, Yang Chen

Abstract:

Background: Cyberchondria, an "emerging risk" accompanying the information era, is an abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots and trends of Cyberchondria. Methods: Based on a total of 77 articles regarding "Cyberchondria" extracted from Web of Science from its inception until October 2019, the literature trends, countries, institutions and hotspots are analyzed through bibliometric analysis; the concept definition of Cyberchondria, instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since "Cyberchondria" was put forward for the first time in 2001, the last two decades have witnessed a noticeable increase in the amount of literature: during 2014-2019 the output quadrupled to 62 articles, compared with only 15 before 2014, which shows that Cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The concept definition of Cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state, and has been invoked to refer to the anxiety-amplifying effects of online health-related searches. The first and most frequently cited scale for measuring the severity of Cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized Cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals (the latter was later shown not to be necessary for the construct). Since then, Brazilian, German, Turkish, Polish and Chinese versions have been developed, improved and culturally adjusted, while the CSS was optimized into a simplified version (CSS-12) in 2019, all of which are worthy of further verification. The hotspots of Cyberchondria research mainly concern relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley index and problematic internet use, trying to clarify the roles played by "associated factors" and "anxiety-amplifying factors" in the development of Cyberchondria, to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety and online health-related searches. Although the treatment and intervention of Cyberchondria are still at an initial, exploratory stage, there have been various meaningful attempts to seek effective strategies from different aspects, such as online psychological treatment, network technology management, health information literacy improvement and public health services. Conclusion: Research on Cyberchondria is in its infancy but deserves more attention. A conceptual consensus on Cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments would be the main research directions in the near future.

Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches

Procedia PDF Downloads 124
24 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid

Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero

Abstract:

The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on a 1.2 km² site owned by the city of Madrid. Its construction has dramatically increased the sealed area of the site (transforming the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would neither inflate the construction cost nor burden the municipality's maintenance tasks. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality in only five months and during the summer season (which forced the porous concrete to be poured only overnight) was a challenge never faced before in Spain, requiring innovation on both the material and the machinery side. The process consisted of: a) defining the characteristics required of the porous concrete (compressive strength of 15 N/mm2 and 20% voids); b) testing different porous concrete dosages at the construction company laboratory; c) establishing the cross section in order to provide structural strength and sufficient water detention capacity (20 cm of porous concrete over a 5 cm 5/10 gravel bed, sitting on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin-fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm2 and 0.0032 m/s permeability). The continuous testing and innovation in the construction process, explained in detail within this article, allowed performance to grow with time, finally proving the CMI valid also for large porous carpark applications. The whole process resulted in a success story that converts the Wanda Metropolitano Stadium into a great demonstration site that will help the application of the Spanish Royal Decree 638/2016 (the site also features rainwater harvesting for grass irrigation).
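
A back-of-envelope check of the cross-section's detention capacity can be made from the layer thicknesses. The 20% voids figure for the porous concrete is from the text; the void fractions assumed for the gravel layers below are illustrative only, not values from the paper:

```python
# Rough detention capacity per square metre of the carpark cross-section
layers = [
    # (name, thickness_m, void_fraction)
    ("porous concrete", 0.20, 0.20),        # 20% voids, per the design spec
    ("5/10 gravel bed", 0.05, 0.35),        # assumed void fraction
    ("40/50 coarse sub-base", 0.50, 0.40),  # assumed void fraction
]

storage_m = sum(t * v for _, t, v in layers)  # metres of water stored per m²
print(f"Detention capacity ≈ {storage_m * 1000:.0f} litres per m² of carpark")
```

Under these assumptions the section stores roughly a quarter of a metre of water per unit area, which illustrates why distributing storage under 3013 parking spaces can substitute for a dedicated detention tank.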

Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development

Procedia PDF Downloads 145
23 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests are available. In some of these tests, linguistic processing - both oral and written - is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables mainly relate to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary for collecting reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). A questionnaire was also used to collect socio-demographic information (age, gender, education) on the subjects, as well as details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences. Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and timestamp keystroke activity in order to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause characteristics (length, number, distribution, location, etc.) and revision characteristics (number, type, operation, embeddedness, location, etc.). As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis already showed some promising results concerning pause times before, within and after words. For all variables, mixed-effects models were used that included participants as a random effect and MMSE scores, GDS scores and word categories (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
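
A minimal sketch of the kind of mixed-effects model described (participants as a random effect, group and word category as fixed effects) could look as follows in Python's statsmodels. The data are synthetic stand-ins for the logged pauses, and the variable names are assumptions, not the study's actual coding:

```python
# Mixed-effects model: pause time ~ group * word category, random intercept
# per participant. All data below are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(26):  # 13 impaired + 13 matched healthy, as in the study
    group = "impaired" if pid < 13 else "healthy"
    for cat in ["determiner", "noun", "verb"]:
        for _ in range(10):
            base = 700 if group == "impaired" else 480  # longer pauses if impaired
            rows.append((f"p{pid}", group, cat, base + rng.normal(0, 80)))
df = pd.DataFrame(rows, columns=["participant", "group",
                                 "word_category", "pause_ms"])

model = smf.mixedlm("pause_ms ~ group * word_category",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```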

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 366
22 Utilization of Informatics to Transform Clinical Data into a Simplified Reporting System to Examine the Analgesic Prescribing Practices of a Single Urban Hospital’s Emergency Department

Authors: Rubaiat S. Ahmed, Jemer Garrido, Sergey M. Motov

Abstract:

Clinical informatics (CI) enables the transformation of data into a systematic organization that improves the quality of care and the generation of positive health outcomes. Innovative technology through informatics that compiles accurate data on analgesic utilization in the emergency department (ED) can enhance pain management in this important clinical setting. We aim to establish a simplified reporting system through CI to examine and assess the analgesic prescribing practices in the ED through executing a U.S. federal grant project on opioid reduction initiatives. Queried data points of interest from a level-one trauma ED's electronic medical records were used to create data sets and develop informational/visual reporting dashboards (on Microsoft Excel and Google Sheets) concerning analgesic usage across several pre-defined parameters and performance metrics using CI. The data was then qualitatively analyzed by departmental clinicians and leadership to evaluate ED analgesic prescribing trends. During a 12-month reporting period (Dec. 1, 2020 – Nov. 30, 2021) for the ongoing project, about 41% of all ED patient visits (N = 91,747) were for pain conditions, of which 81.6% received analgesics in the ED and at discharge (D/C). Of those treated with analgesics, 24.3% received opioids compared to 75.7% receiving opioid alternatives in the ED and at D/C, including non-pharmacological modalities. Demographics showed that among patients receiving analgesics, 56.7% were aged between 18-64, 51.8% were male, 51.7% were white, and 66.2% had government-funded health insurance. Ninety-one percent of all opioids were prescribed in the ED, with intravenous (IV) morphine, IV fentanyl, and morphine sulfate immediate release (MSIR) tablets accounting for 88.0% of ED-dispensed opioids. Of the 9.3% of all opioids prescribed at D/C, MSIR was dispensed 72.1% of the time. Hydrocodone, oxycodone, and tramadol were used only 10-15% of the time, and hydromorphone 0% of the time. Of the opioid alternatives, non-steroidal anti-inflammatory drugs were utilized 60.3% of the time, local anesthetics and ultrasound-guided nerve blocks 23.5%, and acetaminophen 7.9%, as the primary non-opioid drug categories prescribed by ED providers. Non-pharmacological analgesia included virtual reality and other modalities. An average of 18.5 ED opioid orders and 1.9 opioid D/C prescriptions per 102.4 daily ED patient visits was observed for the period. Compared to other specialties within our institution, 2.0% of opioid D/C prescriptions are given by ED providers, versus the national average of 4.8%. Opioid alternatives accounted for 69.7% and 30.3% of usage, versus 90.7% and 9.3% for opioids, in the ED and at D/C, respectively. There is a pressing need for concise, relevant, and reliable clinical data on analgesic utilization for ED providers and leadership to evaluate prescribing practices and make data-driven decisions. Basic computer software can be used to create effective visual reporting dashboards with indicators that convey relevant and timely information in an easy-to-digest manner. We accurately examined our ED's analgesic prescribing practices using CI through dashboard reporting. Such reporting tools can quickly identify key performance indicators and prioritize data to enhance pain management and promote safe prescribing practices in the emergency setting.
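
As an illustrative sketch of the kind of aggregation behind such dashboards (the field names and records are invented, not the actual EMR export), opioid versus non-opioid shares by setting can be computed with pandas before being fed into the Excel or Google Sheets views:

```python
import pandas as pd

# Tiny invented stand-in for the queried EMR analgesic-order data
orders = pd.DataFrame({
    "setting":    ["ED", "ED", "ED", "DC", "DC", "ED", "ED", "DC"],
    "drug_class": ["opioid", "non-opioid", "non-opioid", "opioid",
                   "non-opioid", "opioid", "non-opioid", "non-opioid"],
    "drug_name":  ["IV morphine", "ibuprofen", "ketorolac", "MSIR",
                   "acetaminophen", "IV fentanyl", "lidocaine patch",
                   "naproxen"],
})

# Share of opioid vs. non-opioid analgesia within each setting (ED vs. D/C)
share = (orders.groupby(["setting", "drug_class"]).size()
               .groupby(level=0)
               .transform(lambda s: s / s.sum())
               .round(3))
print(share)

# Most dispensed opioids as a proportion of all opioid orders
top_opioids = (orders.loc[orders.drug_class == "opioid", "drug_name"]
                     .value_counts(normalize=True))
print(top_opioids)
```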

Keywords: clinical informatics, dashboards, emergency department, health informatics, healthcare informatics, medical informatics, opioids, pain management, technology

Procedia PDF Downloads 145
21 Introducing Transport Engineering through Blended Learning Initiatives

Authors: Kasun P. Wijayaratna, Lauren Gardner, Taha Hossein Rashidi

Abstract:

Undergraduate students entering university over the last 2 to 3 years tend to have been born in the mid-1990s. This generation of students has been exposed to the internet, and to the desire for and dependency on technology, since childhood. Brains develop based on environmental influences, and technology has wired this generation of students to be attuned to sophisticated, complex visual imagery, indicating that visual forms of learning may be more effective than the traditional lecture or discussion formats. Furthermore, post-millennials' perspectives on careers are not focused solely on stability and income but are strongly driven by interest, entrepreneurship and innovation. Accordingly, it is important for educators to acknowledge the generational shift and tailor the delivery of learning material to meet the expectations of students and the needs of industry. In the context of transport engineering, effectively teaching undergraduate students the basic principles of transport planning, traffic engineering and highway design is fundamental to the progression of the profession from both a practice and a research perspective. Recent developments in technology have transformed the discipline, as practitioners and researchers move away from the traditional “pen and paper” approach to methods involving the use of computer programs and simulation. Further, enhanced accessibility of technology for students has changed the way they understand and learn the material delivered at tertiary education institutions. As a consequence, blended learning approaches, which aim to integrate face-to-face teaching with flexible, self-paced learning resources, have become prevalent as a means of providing scalable education that satisfies the expectations of students. This research study involved the development of a series of ‘Blended Learning’ initiatives implemented within an introductory transport planning and geometric design course, CVEN2401: Sustainable Transport and Highway Engineering, taught at the University of New South Wales, Australia. CVEN2401 was modified by conducting interactive polling exercises during lectures, including weekly online quizzes, offering a series of supplementary learning videos, and implementing a realistic design project that students needed to complete using modelling software that is widely used in practice. These activities and resources aimed to improve the learning environment for a large class of more than 450 students and to ensure that practical, industry-valued skills were introduced. The case study compared the 2016 and 2017 student cohorts based on their performance across assessment tasks, as well as their reception of the material as revealed through student feedback surveys. The initiatives were well received, with a number of students commenting on the ability to complete self-paced learning and an appreciation of the exposure to a realistic design project. From an educator’s perspective, blending the course made it feasible to interact and engage with students. Personalised learning opportunities were made available whilst delivering a considerable volume of complex content essential for all undergraduate Civil and Environmental Engineering students. Overall, this case study highlights the value of blended learning initiatives, especially in the context of large university courses.

Keywords: blended learning, highway design, teaching, transport planning

Procedia PDF Downloads 149
20 Design and Implementation of an Affordable Electronic Medical Records in a Rural Healthcare Setting: A Qualitative Intrinsic Phenomenon Case Study

Authors: Nitika Sharma, Yogesh Jain

Abstract:

Introduction: An efficient information system helps improve service delivery and provides the foundation for policy and regulation of the other building blocks of a health system. Health care organizations require the integrated working of their various sub-systems, and efficient EMR software boosts teamwork amongst these sub-systems, thereby improving service delivery. Although there has been a huge impetus for EMR under the Digital India initiative, it has still not been mandated in India and is generally implemented only in well-funded public or private healthcare organizations. Objective: The study was conducted to understand the factors that lead to the successful adoption of an affordable EMR in a low-level healthcare organization. It intended to understand the design of the EMR and address the solutions to the challenges faced in its adoption. Methodology: The study was conducted in a non-profit registered healthcare organization that has been providing healthcare facilities to more than 2500 villages, including certain areas that are difficult to access. The data was collected with the help of field notes, in-depth interviews and participant observation. A total of 16 participants using the EMR from different departments were enrolled via a purposive sampling technique. The participants included in the study had been working in the organization since before the implementation of the EMR system. The study was conducted over a one-month period from 25 June to 20 July 2018. Ethical approval was obtained from the institute, along with the prior consent of the participants. Data analysis: A Word document of more than 4000 words was obtained after transcribing and translating the answers of respondents. It was further analyzed by focused coding, a line-by-line review of the transcripts underlining words, phrases or sentences that might suggest themes, for thematic narrative analysis. Results: Based on the answers, the results were thematically grouped under four headings: 1. governance of the organization, 2. architecture and design of the software, 3. features of the software, 4. challenges faced in adoption and the solutions to address them. It was inferred that the successful implementation was attributable to the easy and comprehensive design of the system, which facilitated not only easy data storage and retrieval but also the construction of a decision support system for the staff. Portability has led to increased acceptance by physicians. The proper division of labor, increased efficiency of staff, incorporation of auto-correction features and facilitation of task shifting have led to increased acceptance amongst users of various departments. Geographical inhibitions, low computer literacy and high patient load were the major challenges faced during implementation. Despite the dual efforts made by both the architects and administrators to combat these challenges, the organization still faces certain ongoing ones. Conclusion: Whenever any new technology is adopted, there are certain innovators, early adopters, late adopters and laggards, and the same pattern was followed in the adoption of this software. The challenges were overcome through the joint efforts of organization administrators and users. This case study thereby provides a framework for implementing similar systems in the public sector of countries that are struggling to digitize healthcare amid a crunch of human and financial resources.

Keywords: EMR, healthcare technology, e-health, EHR

Procedia PDF Downloads 106
19 Extension of Moral Agency to Artificial Agents

Authors: Sofia Quaglia, Carmine Di Martino, Brendan Tierney

Abstract:

Artificial Intelligence (A.I.) pervades many aspects of modern life, from the machine learning algorithms predicting stocks on Wall Street to the killing of belligerents and innocents alike on the battlefield. Moreover, the end goal is to create autonomous A.I., meaning that humans will be absent from the decision-making process. The question comes naturally: when an A.I. does something wrong, when its behavior is harmful to the community and its actions go against the law, who is to be held responsible? This research's subject matter, within A.I. and robot ethics, focuses mainly on robot rights, and its ultimate objective is to answer the questions: (i) What is the function of rights? (ii) Who is a right holder, what is personhood and what are the requirements needed to be a moral agent (and therefore accountable)? (iii) Can an A.I. be a moral agent (ontological requirements)? And finally, (iv) ought it to be one (ethical implications)? With the aim of answering these questions, this research project was carried out as a collaboration between the School of Computer Science at the Technical University of Dublin, which oversaw the technical aspects of the work, and the Department of Philosophy at the University of Milan, which supervised the philosophical framework and argumentation of the project. Firstly, it was found that all rights are positive and based on consensus; they change with time based on circumstances. Their function is to protect the social fabric and avoid dangerous situations. The same goes for the requirements considered necessary to be a moral agent: they are not absolute; in fact, they are constantly redesigned. Hence, the next logical step was to identify which requirements are regarded as fundamental in real-world judicial systems, comparing them to those used in philosophy. Autonomy, free will, intentionality, consciousness and responsibility were identified as the requirements for being considered a moral agent. The work went on to build a symmetrical system between personhood and A.I. to enable the ontological differences between the two to emerge. Each requirement is introduced, explained through the most relevant theories of contemporary philosophy, and observed in its manifestation in A.I. Finally, after completing the philosophical and technical analysis, conclusions were drawn. As underlined in the research questions, there are two issues regarding the assignment of moral agency to an artificial agent: first, whether all the ontological requirements are present; and second, present or not, whether an A.I. ought to be considered an artificial moral agent. From an ontological point of view, it is very hard to prove that an A.I. could be autonomous, free, intentional, conscious, and responsible. The philosophical accounts are often very theoretical and inconclusive, making it difficult to fully detect these requirements at an experimental level of demonstration. However, from an ethical point of view, it makes sense to consider some A.I. as artificial moral agents, hence responsible for their own actions. When considering artificial agents as responsible, already existing norms in our judicial system can be applied, such as removing them from society and re-educating them in order to re-introduce them to society; this is in line with how the highest-profile correctional facilities ought to work. Notably, this is a provisional conclusion, and research must continue further. Nevertheless, the strength of the presented argument lies in its immediate applicability to real-world scenarios. To refer to the aforementioned incidents involving the killing of innocents: when this thesis is applied, it is possible to hold an A.I. accountable and responsible for its actions. This entails removing it from society by virtue of its unusability, re-programming it and, only when it is properly functioning, successfully re-introducing it.

Keywords: artificial agency, correctional system, ethics, natural agency, responsibility

Procedia PDF Downloads 190