Search results for: Galileo new advanced features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5719

229 Developing Creative and Critically Reflective Digital Learning Communities

Authors: W. S. Barber, S. L. King

Abstract:

This paper is a qualitative case study analysis of the development of a fully online learning community of graduate students through arts-based community building activities. With increasing numbers and types of online learning spaces, it is incumbent upon educators to continue to push the edge of what best practices look like in digital learning environments. In digital learning spaces, instructors can no longer be seen as purveyors of content knowledge to be examined at the end of a set course by a final test or exam. The rapid and fluid dissemination of information via Web 3.0 demands that we reshape our approach to teaching and learning, from one that is content-focused to one that is process-driven. Rather than having instructors as formal leaders, today’s digital learning environments require us to share expertise, as it is the collective experiences and knowledge of all students together with the instructors that help to create a very different kind of learning community. This paper focuses on innovations pursued in a 36-hour, 12-week graduate course in higher education entitled “Critical and Reflective Practice”. The authors chronicle their journey to developing a fully online learning community (FOLC) by emphasizing the elements of social, cognitive, emotional and digital spaces that form a moving interplay through the community. In this way, students embrace anywhere, anytime learning and often take the learning, as well as the relationships they build and skills they acquire, beyond the digital class into real-world situations. We argue that in order to increase student online engagement, pedagogical approaches need to stem from two primary elements, creativity and critical reflection, which are essential pillars upon which instructors can co-design learning environments with students. The theoretical framework for the paper is based on the interaction and interdependence of Creativity, Intuition, Critical Reflection, Social Constructivism and FOLCs. By leveraging students’ embedded familiarity with a wide variety of technologies, this case study of a graduate-level course on critical reflection in education examines how relationships, quality of work produced, and student engagement can improve by using creative and imaginative pedagogical strategies. The authors examine their professional pedagogical strategies through the lens that the teacher acts as facilitator, guide and co-designer. In a world where students can easily search for and organize information as self-directed processes, creativity and connection can at times be lost in the digitized course environment. The paper concludes by posing further questions as to how institutions of higher education may be challenged to restructure their credit-granting courses into more flexible modules, and how students need to be considered an important part of assessment and evaluation strategies. By introducing creativity and critical reflection as central features of digital learning spaces, notions of best practices in digital teaching and learning emerge.

Keywords: online, pedagogy, learning, communities

Procedia PDF Downloads 378
228 Mapping Vulnerabilities: A Social and Political Study of Disasters in Eastern Himalayas, Region of Darjeeling

Authors: Shailendra M. Pradhan, Upendra M. Pradhan

Abstract:

Disasters are perennial features of human civilization. The recurring earthquakes, floods and cyclones, among others, that result in massive loss of life and devastation are a grim reminder of the fact that, despite all our success stories of development and progress in science and technology, human society is perennially at risk of disasters. The apparent threat of climate change and global warming only heightens our disaster risks. Darjeeling hills, situated along the Eastern Himalayan region of India, and famous for its three Ts – tea, tourism and toy-train – is equally notorious for its disasters. The recurring landslides and earthquakes, the cyclone Aila, and the Ambootia landslides, considered the largest landslide in Asia, are strong evidence of the vulnerability of Darjeeling hills to natural disasters. Given its geographical location along the Hindu-Kush Himalayas, the region is marked by rugged topography, a geo-physically unstable structure, high seismicity, and a fragile landscape, making it prone to disasters of different kinds and magnitudes. Most of the studies on disasters in Darjeeling hills are, however, scientific and geographical in orientation, focusing on the underlying geological and physical processes to the neglect of social and political conditions. This has created a tendency among researchers and policy-makers to endorse and promote a particular type of discourse that does not consider the social and political aspects of disasters in Darjeeling hills. Disaster, this paper argues, is a complex phenomenon, and a result of diverse factors, both physical and human. The hazards caused by physical and geological agents, and the vulnerabilities produced and rooted in the political, economic, social and cultural structures of a society, together result in disasters. In this sense, disasters are as much a result of political and economic conditions as they are of the physical environment. The human aspect of disasters, therefore, compels us to address the intricate social and political challenges that ultimately determine our resilience and vulnerability to disasters. Set within the above milieu, the aims of the paper are twofold: a) to provide a political and sociological account of disasters in Darjeeling hills; and b) to identify and address the root causes of its vulnerabilities to disasters. In situating disasters in Darjeeling hills, the paper adopts the Pressure and Release (PAR) model, which provides theoretical insight into the study of the social and political aspects of disasters and into a myriad of other related issues therein. The PAR model conceptualises risk as a complex combination of vulnerabilities, on the one hand, and hazards, on the other. Disasters, within the PAR framework, occur when hazards interact with vulnerabilities. The root causes of vulnerability, in turn, can be traced to social and political structures such as legal definitions of rights, gender relations, and other ideological structures and processes. In this way, the PAR model helps the present study identify and unpack the root causes of vulnerabilities and disasters in Darjeeling hills that have largely remained neglected in dominant discourses, thereby providing a more nuanced and sociologically sensitive understanding of disasters.

Keywords: Darjeeling, disasters, PAR, vulnerabilities

Procedia PDF Downloads 251
227 Improving Fingerprinting-Based Localization (FPL) System Using Generative Artificial Intelligence (GAI)

Authors: Getaneh Berie Tarekegn, Li-Chia Tai

Abstract:

With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many surrounding objects; (2) reflection within a building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals cannot reach indoor environments, as GNSS signals lack the power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart healthcare. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
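
As a hedged illustration of the t-SNE feature-extraction step mentioned above (the S-DCGAN radio-map construction itself is not sketched here), the following minimal Python example projects hypothetical hybrid WLAN/LTE fingerprints into a low-dimensional embedding. All array shapes, signal ranges, and variable names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: t-SNE dimensionality reduction of hybrid radio fingerprints.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical fingerprint matrix: one row per reference point, columns are
# WLAN RSSI values concatenated with LTE RSRP values (made-up ranges).
wlan_rssi = rng.uniform(-95, -40, size=(500, 20))   # 20 WLAN access points
lte_rsrp = rng.uniform(-120, -70, size=(500, 8))    # 8 LTE cells
fingerprints = np.hstack([wlan_rssi, lte_rsrp])

# t-SNE maps the noisy high-dimensional fingerprints to a low-dimensional
# embedding that preserves local neighborhood structure, so physically nearby
# reference points tend to cluster together.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (500, 2)
```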

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 13
226 Classification Using Worldview-2 Imagery of Giant Panda Habitat in Wolong, Sichuan Province, China

Authors: Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Xiuxia Li, Qi Yan, Haifeng Ding

Abstract:

The giant panda (Ailuropoda melanoleuca) is an endangered species that mainly lives in central China, where bamboos are the main food source of wild giant pandas. Knowledge of the spatial distribution of bamboos therefore becomes important for identifying the habitat of giant pandas. There have been ongoing studies on mapping bamboos and other tree species using remote sensing. WorldView-2 (WV-2) is the first high-resolution commercial satellite with eight Multi-Spectral (MS) bands. Recent studies have demonstrated that WV-2 imagery has high potential in the classification of tree species. Advanced classification techniques are important for utilising high spatial resolution imagery. It is generally agreed that object-based image analysis is a more desirable method than pixel-based analysis in processing high spatial resolution remotely sensed data. Classifiers that use spatial information combined with spectral information are known as contextual classifiers. It is suggested that contextual classifiers can achieve greater accuracy than non-contextual classifiers; thus, spatial correlation can be incorporated into classifiers to improve classification results. The study area is located in the Wuyipeng area in Wolong, Sichuan Province. The complex environment makes information extraction difficult, since bamboos are sparsely distributed, mixed with brushes, and covered by other trees. Extensive fieldwork in Wuyipeng was carried out twice. The first visit, on 11th June 2014, aimed at sampling feature locations for geometric correction and collecting training samples for classification. The second, on 11th September 2014, served to test the classification results. In this study, spectral separability analysis was first performed to select appropriate MS bands for classification. The reflectance analysis also provided information for expanding sample points when only a few were known. Then, a spatially weighted object-based k-nearest neighbour (k-NN) classifier was applied to the selected MS bands to identify seven land cover types (bamboo, conifer, broadleaf, mixed forest, brush, bare land, and shadow), accounting for spatial correlation within classes using geostatistical modelling. The spatially weighted k-NN method was compared with three alternatives: the traditional k-NN classifier, the Support Vector Machine (SVM) method and the Classification and Regression Tree (CART). Through field validation, it was shown that the classification result obtained using the spatially weighted k-NN method has the highest overall classification accuracy (77.61%) and Kappa coefficient (0.729); the producer’s accuracy and user’s accuracy reach 81.25% and 95.12% for the bamboo class, respectively, both higher than those of the other methods. Photos of tree crowns were taken at sample locations using a fisheye camera, so the canopy density could be estimated. It is found that it is difficult to identify bamboo in areas with a large canopy density (over 0.70); it is possible to extract bamboos in areas with a medium canopy density (from 0.2 to 0.7) and in sparse forest (canopy density less than 0.2). In summary, this study explores the ability of WV-2 imagery for bamboo extraction in a mountainous region in Sichuan. The study successfully identified the bamboo distribution, providing supporting knowledge for assessing the habitats of giant pandas.
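
The spatially weighted k-NN idea can be illustrated with a short sketch: spectral neighbours are down-weighted by their spatial separation from the query object. The exponential decay below merely stands in for the paper's geostatistical (variogram-based) weighting, whose exact form is not given here; the data layout and parameter values are assumptions.

```python
# Sketch of a spatially weighted k-NN vote over image objects.
import numpy as np

def spatially_weighted_knn(train_spec, train_xy, train_lab, q_spec, q_xy,
                           k=5, spatial_range=30.0):
    d_spec = np.linalg.norm(train_spec - q_spec, axis=1)   # spectral distance
    idx = np.argsort(d_spec)[:k]                           # k spectral neighbours
    d_xy = np.linalg.norm(train_xy[idx] - q_xy, axis=1)    # spatial distance (m)
    w = np.exp(-d_xy / spatial_range)                      # nearer objects vote more
    votes = {}
    for lab, wi in zip(train_lab[idx], w):
        votes[lab] = votes.get(lab, 0.0) + wi
    return max(votes, key=votes.get)                       # weighted majority class
```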

Keywords: bamboo mapping, classification, geostatistics, k-NN, worldview-2

Procedia PDF Downloads 292
225 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch-resistant or high-gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are, however, both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high-pressure forming of an orthotropic polymer film is finally performed using an orthotropic formulation of the hyperelastic model.
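
As one concrete instance of such a reduced-invariant compressible model (a standard neo-Hookean form, not necessarily the authors' specific formulation), the strain energy can be written as:

```latex
% Standard compressible neo-Hookean energy with a reduced invariant set,
% given as an illustrative instance of the model class described above.
W(\bar{I}_1, J) = c_1\,(\bar{I}_1 - 3) + \frac{K}{2}\,(J - 1)^2,
\qquad \bar{I}_1 = J^{-2/3}\,\operatorname{tr}(\mathbf{C}),
\qquad J = \det\mathbf{F}
```

Here c_1 can be fitted from a single uniaxial tensile test, as the abstract describes for the isotropic case; adding a term such as c_4 (I_4 - 1)^2 with I_4 = a_0 · C a_0 is one common way to introduce transverse isotropy along a preferred direction a_0.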

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 597
224 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surfaces rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was performed on the actual APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the error in the FCOM prediction of the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
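
A minimal sketch of the adaptive lookup-table mechanism described above: a fuel-flow table indexed by altitude and Mach is interpolated bilinearly for prediction, and each in-flight measurement error is spread back over the surrounding table nodes. The grid ranges, seed value, and learning rate below are illustrative assumptions, not values from the study.

```python
# Sketch of an adaptive (altitude, Mach) -> fuel-flow lookup table.
import numpy as np

alt_grid = np.linspace(30000, 45000, 16)     # ft (assumed cruise envelope)
mach_grid = np.linspace(0.60, 0.90, 13)
fuel_flow = np.full((16, 13), 1200.0)        # lb/h, FCOM-style seed value

def _cell(alt, mach):
    i = int(np.clip(np.searchsorted(alt_grid, alt) - 1, 0, 14))
    j = int(np.clip(np.searchsorted(mach_grid, mach) - 1, 0, 11))
    ta = (alt - alt_grid[i]) / (alt_grid[i+1] - alt_grid[i])
    tm = (mach - mach_grid[j]) / (mach_grid[j+1] - mach_grid[j])
    return i, j, ta, tm

def predict(alt, mach):
    # Bilinear interpolation over the four surrounding nodes.
    i, j, ta, tm = _cell(alt, mach)
    return ((1-ta)*(1-tm)*fuel_flow[i, j] + ta*(1-tm)*fuel_flow[i+1, j]
            + (1-ta)*tm*fuel_flow[i, j+1] + ta*tm*fuel_flow[i+1, j+1])

def update(alt, mach, measured, lr=0.2):
    # Spread the prediction error over the four nodes, proportionally to
    # their interpolation weights; repeated flights shrink the error.
    i, j, ta, tm = _cell(alt, mach)
    err = measured - predict(alt, mach)
    for di, dj, w in ((0, 0, (1-ta)*(1-tm)), (1, 0, ta*(1-tm)),
                      (0, 1, (1-ta)*tm), (1, 1, ta*tm)):
        fuel_flow[i+di, j+dj] += lr * w * err
```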

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 235
223 TeleEmergency Medicine: Transforming Acute Care through Virtual Technology

Authors: Ashley L. Freeman, Jessica D. Watkins

Abstract:

TeleEmergency Medicine (TeleEM) is an innovative approach leveraging virtual technology to deliver specialized emergency medical care across diverse healthcare settings, including internal acute care and critical access hospitals, remote patient monitoring, and nurse triage escalation, in addition to external emergency departments, skilled nursing facilities, and community health centers. TeleEM represents a significant advancement in the delivery of emergency medical care, providing healthcare professionals the capability to deliver expertise that closely mirrors in-person emergency medicine, transcending geographical boundaries. Through qualitative research, the extension of timely, high-quality care has been shown to address the critical needs of patients in remote and underserved areas. TeleEM’s service design allows for the expansion of existing services and the establishment of new ones in diverse geographic locations. This ensures that healthcare institutions can readily scale and adapt services to evolving community requirements by leveraging on-demand (non-scheduled) telemedicine visits through the deployment of multiple video solutions. In terms of financial management, TeleEM currently employs billing suppression and subscription models to enhance accessibility for a wide range of healthcare facilities. Plans are in motion to transition to a billing system routing charges through a third-party vendor, further enhancing financial management flexibility. To address state licensure concerns, a patient location verification process has been integrated under the guidance of legal counsel and compliance authorities. The TeleEM workflow is designed to terminate if the patient is not physically located within licensed regions at the time of the virtual connection, alleviating legal uncertainties. A distinctive and pivotal feature of TeleEM is the introduction of the TeleEmergency Medicine Care Team Assistant (TeleCTA) role. TeleCTAs collaborate closely with TeleEM physicians, leading to enhanced service activation, streamlined coordination, and workflow and data efficiencies. In the last year, more than 800 TeleEM sessions have been conducted, of which 680 were initiated by internal acute care and critical access hospitals, as evidenced by quantitative research. Without this service, many of these cases would have necessitated patient transfers. Barriers to success were examined through thorough medical record review and data analysis, which identified inaccuracies in documentation leading to activation delays, limitations in billing capabilities, and data distortion, as well as the intricacies of managing varying workflows and device setups. TeleEM represents a transformative advancement in emergency medical care that nurtures collaboration and innovation. Not only has it advanced the delivery of emergency medical care through virtual technology, focus group participation with key stakeholders, rigorous attention to legal and financial considerations, and the implementation of robust documentation tools and the TeleCTA role, but it has also set the stage for overcoming geographic limitations. TeleEM assumes a notable position in the field of telemedicine by enhancing patient outcomes and expanding access to emergency medical care while mitigating licensure risks and ensuring compliant billing.

Keywords: emergency medicine, TeleEM, rural healthcare, telemedicine

Procedia PDF Downloads 51
222 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas

Authors: Weijing Zhou, Yuting Lei, Francis Nolan

Abstract:

In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to the negative transfer of the native language or local dialect. To what extent do Mandarin and local dialects affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little evidence, however, has been found via empirical research in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of local dialects and Mandarin in Chinese EFL learners’ acquisition of L2 English consonants. Besides Mandarin, the sole national language in China, Suzhou Dialect was selected as the target local dialect because of its distinct phonology from Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University, who were born and raised in Suzhou, had acquired Suzhou Dialect in early childhood, and were able to communicate freely and fluently with each other in Suzhou Dialect, Mandarin and English. The consonantal target segments were all the consonants of English, Mandarin and Suzhou Dialect in typical carrier words embedded in the carrier sentence Say again. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetic laboratories at Yangzhou University, Soochow University and Cambridge University, respectively, then transcribed, segmented and analyzed acoustically via Praat software, and finally analyzed statistically via Excel and SPSS software. The main findings are as follows. First, in terms of correct acquisition rates (CARs) of all the consonants, Mandarin ranked top (92.83%), English second (74.81%) and Suzhou Dialect last (70.35%), and significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects’ multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, plus the voice onset times (VOTs) of plosives, fricatives, and affricates, in the three languages were much longer than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; the formants of English nasals and approximants were significantly different from those of Mandarin and Suzhou Dialect, illustrating the inconsistent acoustic variations among the three languages. Third, in terms of typical pronunciation variations or errors, there were significant interlingual interactions among the three consonant systems, in which Mandarin consonants were absolutely dominant, accounting for the strong transfer from L1 Mandarin to L2 English instead of from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin since nursery school and were strictly required to speak Mandarin throughout all the formal education periods from primary school to university.

Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas

Procedia PDF Downloads 69
221 Identification of Hub Genes in the Development of Atherosclerosis

Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia

Abstract:

Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies have revealed the dysfunction of endothelial cells, the recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphologies), a total of 2509 key genes (gene significance > 0.2, module membership > 0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and the 112 DEGs with lipid-related genes from the Genecard database. The discriminative power of the six hub genes was estimated by a robust classifier with an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated a clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
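
The reported ROC check can be illustrated with a hedged sketch: a simple classifier over six hub-gene expression values, scored by AUC on held-out samples. The synthetic expression matrix below is a stand-in; the study's actual data, model, and the 0.873 figure are not reproduced by it.

```python
# Sketch: score a six-gene classifier by ROC AUC (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
genes = ["CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL"]

# Synthetic expression: 64 plaque samples vs 64 intact-tissue controls,
# with the case group's mean expression shifted upward.
X_ctrl = rng.normal(0.0, 1.0, size=(64, len(genes)))
X_case = rng.normal(0.8, 1.0, size=(64, len(genes)))
X = np.vstack([X_ctrl, X_case])
y = np.array([0] * 64 + [1] * 64)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC on held-out samples: {auc:.3f}")
```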

Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics

Procedia PDF Downloads 36
220 Ensuring Safety in Fire Evacuation by Facilitating Way-Finding in Complex Buildings

Authors: Atefeh Omidkhah, Mohammadreza Bemanian

Abstract:

The issue of way-finding occupies a wide range of literature in architecture, and despite the 50-year background of way-finding studies, it still lacks a comprehensive theory for indoor settings. Way-finding has a notable role in emergency evacuation as well. People in the panic situation of a fire emergency need to find the safe egress route correctly and in as little time as possible. In this regard, the parameters of appropriate way-finding are mentioned, albeit in scattered form, in evacuation-related research. This study reviews the fire safety literature to extract a way-finding framework for the architectural design of a safe evacuation route. To this end, a review of research trends and of applied methodological approaches is conducted. Then, by analyzing eight original studies related to way-finding parameters in fire evacuation, the main parameters that affect way-finding in the emergency situation of a fire incident are extracted, and a framework is developed based on them. Results show that the issues related to exit routes and emergency evacuation can be traced in task-oriented studies of way-finding. This research trend aims to reach a high-level framework and, in the best case, a theory with the explanatory capability to define differences in way-finding in indoor/outdoor settings, complex/simple buildings, and different building types or transitional spaces. The methodological advances place evacuation way-finding research in line with three approaches, of which the last is the most up-to-date and precise method for researching this subject: real actors and hypothetical stimuli, as in evacuation experiments; hypothetical actors and stimuli, as in agent-based simulations; and real actors and semi-real stimuli, as in virtual reality environments augmented with multi-sensory simulation. Findings from data-mining the eight sampled studies indicate that the emergency way-finding design of a building should consider two levels of spatial cognition problems at the time of emergency, together with their performance consequences in the built environment. Four major classes of way-finding problems, namely visual information deficiency, confusing layout configuration, improper navigational signage, and demographic issues, are thus defined and discussed as the main parameters that the design and interior of a building should resolve. In the design phase of complex buildings, which face more reported way-finding problems, it is important to consider the interior components with regard to the building's occupancy type and the behavior of its occupants, determine the components that tend to become landmarks, and set the architectural features of the egress route in line with the directions in which they navigate people. Research on topological cognition of the environment and its effect on way-finding tasks in emergency evacuation is proposed for the future.

Keywords: architectural design, egress route, way-finding, fire safety, evacuation

Procedia PDF Downloads 152
219 Augmented and Virtual Reality Experiences in Plant and Agriculture Science Education

Authors: Sandra Arango-Caro, Kristine Callis-Duehl

Abstract:

The Education Research and Outreach Lab at the Donald Danforth Plant Science Center established the Plant and Agriculture Augmented and Virtual Reality Learning Laboratory (PAVRLL) to promote science education through professional development, school programs, internships, and outreach events. Professional development is offered to high school and college science and agriculture educators on the use and applications of the zSpace and Oculus platforms. Educators learn to use, edit, or create lesson plans in the zSpace platform that are aligned with the Next Generation Science Standards. They also learn to use virtual reality experiences created by the PAVRLL available in Oculus (e.g. The Soybean Saga). Using a cost-free loan rotation system, educators can bring the AVR units to the classroom and offer AVR activities to their students. Each activity has user guides and activity protocols for both teachers and students. The PAVRLL also offers activities in 3D plant modeling. High school students work in teams of art-, science-, and technology-oriented students to design and create 3D models of plant species that are under research at the Danforth Center and present their projects at scientific events. Those 3D models are open access through the zSpace platform and are used by the PAVRLL for professional development and the creation of VR activities. Both teachers and students acquire knowledge of plant and agriculture content and real-world problems, gain skills in AVR technology, 3D modeling, and science communication, and become more aware of and interested in plant science. Students who participate in the PAVRLL activities complete pre- and post-surveys and reflection questions that evaluate interest in STEM and STEM careers, students’ perceptions of three design features of biology lab courses (collaboration, discovery/relevance, and iteration/productive failure), plant awareness, and engagement and learning in AVR environments. The PAVRLL was established in the fall of 2019, and since then it has trained 15 educators, three of whom will implement the AVR programs in the fall of 2021. Seven students have worked on the 3D plant modeling activity through a virtual internship. Due to the COVID-19 pandemic, the number of teachers trained and of classroom implementations has been very limited. It is expected that in the fall of 2021 students will come back to the schools in person, and by the spring of 2022 the PAVRLL activities will be fully implemented. This will allow the collection of enough data on student assessments to provide insights into the benefits and best practices of using AVR technologies in classrooms. The PAVRLL uses cutting-edge educational technologies to promote science education, assesses their benefits, and will continue its expansion. Currently, the PAVRLL is applying for grants to create its own virtual labs where students can engage in authentic research experiences using real Danforth research data, based on programs the Education Lab has already used in classrooms.

Keywords: assessment, augmented reality, education, plant science, virtual reality

Procedia PDF Downloads 148
218 Optimizing Usability Testing with Collaborative Method in an E-Commerce Ecosystem

Authors: Markandeya Kunchi

Abstract:

Usability testing (UT) is one of the vital steps in the user-centred design (UCD) process when designing a product. In an e-commerce ecosystem, UT becomes paramount, as new products, features, and services are launched very frequently, and the company incurs losses if an unusable and inefficient product is put on the market and rejected by customers. This paper tries to answer why UT is important in the product life-cycle of an e-commerce ecosystem. Secondary user research was conducted to find out the work patterns, development methods, types of stakeholders, technology constraints, etc. of a typical e-commerce company. Qualitative user interviews were conducted with product managers and designers to find out the structure, project planning, product management method and role of the design team in a mid-level company. The paper addresses the usual apprehensions a company has about inculcating UT within the team, and stresses factors like monetary resources, the lack of a usability expert, narrow timelines, and the lack of understanding of higher management as some primary reasons. Outsourcing UT to vendors is also very prevalent among mid-level e-commerce companies, but it has its own severe repercussions, like very little team involvement, huge cost, misinterpretation of the findings, elongated timelines, and lack of empathy towards the customer. The shortfalls of having no UT process in place within the team, or of conducting UT through vendors, are bad user experiences for customers while interacting with the product and badly designed products which are neither useful nor utilitarian. As a result, companies see dipping conversion rates in apps and websites, huge bounce rates and increased uninstall rates. Thus, there was a need for a leaner UT system in place that could solve all these issues for the company. This paper highlights optimizing the UT process with a collaborative method. The degree of optimization and the structure of the collaborative method are the highlights of this paper. The collaborative method of UT is one in which the centralised design team of the company takes charge of conducting and analysing the UT. The UT is usually of a formative kind, where designers take the findings into account and use them in the ideation process. The success of the collaborative method of UT is due to its ability to sync with the product management method employed by the company or team. The collaborative method focuses on engaging various teams (design, marketing, product, administration, IT, etc.), each with its own defined roles and responsibilities, in conducting a smooth UT with users in-house. The paper finally highlights the positive results of the collaborative UT method after conducting more than 100 in-lab interviews with users across the different lines of business. Some of these are improved interaction between stakeholders and the design team, empathy towards users, improved design iteration, better sanity checks of design solutions, optimization of time and money, and effective and efficient design solutions. The future scope of collaborative UT is to make this method leaner by reducing the number of days needed to complete the entire project, from planning between teams to publishing the UT report.

Keywords: collaborative method, e-commerce, product management method, usability testing

Procedia PDF Downloads 97
217 3D Design of Orthotic Braces and Casts in Medical Applications Using Microsoft Kinect Sensor

Authors: Sanjana S. Mallya, Roshan Arvind Sivakumar

Abstract:

Orthotics is the branch of medicine that deals with the provision and use of artificial casts or braces to alter the biomechanical structure of the limb and provide support for the limb. Custom-made orthoses provide more comfort and can correct issues better than those available over the counter. However, they are expensive and require intricate modelling of the limb. Traditional methods of modelling involve creating a plaster of Paris mould of the limb. Lately, CAD/CAM and 3D printing processes have improved the accuracy and reduced the production time. Ordinarily, digital cameras are used to capture the features of the limb from different views to create a 3D model. We propose a system to model the limb using the Microsoft Kinect2 sensor. The Kinect can capture RGB and depth frames simultaneously at up to 30 fps with sufficient accuracy. The region of interest is captured from three views, each shifted by 90 degrees. The RGB and depth data are fused into a single RGB-D frame. The resolution of the RGB frame is 1920px x 1080px, while the resolution of the depth frame is 512px x 424px. As the resolutions of the frames are not equal, RGB pixels are mapped onto the depth pixels to make sure data is not lost even though the resolution is lower. The resulting RGB-D frames are collected, and using the depth coordinates, a three-dimensional point cloud is generated for each view of the Kinect sensor. A common reference system was developed to merge the individual point clouds from the Kinect sensors. The reference system consisted of 8 coloured cubes, connected by rods to form a skeleton-cube with the coloured cubes at the corners. For each Kinect, the region of interest is the square formed by the centres of the four cubes facing the Kinect. The point clouds are merged by considering one of the cubes as the origin of a reference system. Depending on the relative distance from each cube, the three-dimensional coordinate points from each point cloud are aligned to the reference frame to give a complete point cloud. The RGB data is used to correct any errors in the depth data for the point cloud. A triangular mesh is generated from the point cloud by applying Delaunay triangulation, which generates the rough surface of the limb. This technique forms an approximation of the surface of the limb. The mesh is smoothened to obtain a smooth outer layer that gives an accurate model of the limb. The model of the limb is used as a base for designing the custom orthotic brace or cast. It is transferred to a CAD/CAM design file for the design of the brace above the surface of the limb. The proposed system would be more cost-effective than current systems that use MRI or CT scans for generating 3D models, and would be quicker than traditional plaster of Paris cast modelling; the overall setup time is also low. Preliminary results indicate that the accuracy of the Kinect2 is satisfactory for performing the modelling.
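
The depth-to-point-cloud step can be sketched as a pinhole back-projection of each Kinect v2 depth pixel. The intrinsics below are typical approximate values for the 512x424 depth camera and are assumptions, not calibrated parameters from the study.

```python
# Sketch: back-project a Kinect v2 depth frame into a 3D point cloud.
import numpy as np

FX, FY = 365.0, 365.0      # focal lengths in pixels (approximate)
CX, CY = 256.0, 212.0      # principal point for the 512x424 depth frame

def depth_to_point_cloud(depth_mm):
    """depth_mm: (424, 512) array of depth values in millimetres."""
    v, u = np.indices(depth_mm.shape)      # v = row index, u = column index
    z = depth_mm / 1000.0                  # convert to metres
    x = (u - CX) * z / FX                  # pinhole model back-projection
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]              # drop invalid (zero-depth) pixels

# The three per-view clouds can then be aligned to the common cube-based
# reference frame with one rigid transform per sensor before meshing.
```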

Keywords: 3d scanning, mesh generation, Microsoft kinect, orthotics, registration

Procedia PDF Downloads 165
216 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change

Authors: Damian Islas

Abstract:

Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that, since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that, since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and, thus, should be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of SR: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). Under ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist, but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtly different versions of the ESR and OSR stances, which show a progression, through eliminativist OSR to moderate OSR, of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that these relations between unobserved entities exist.

Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism

Procedia PDF Downloads 95
215 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming

Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero

Abstract:

Tumor growth from a transformed cancer cell up to a clinically apparent mass spans a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made, without making patients undergo annoying medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies applicable to clinical actions in Medicine. Establishing sound computer-based models of cellular behavior certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the specific literature shows recent proposals based on the CA approach that include advanced techniques, such as the clever use of supporting efficient data structures when modeling with deterministic or stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, is of much interest to be explored up to its computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improved performance of these models by guaranteeing efficient memory accesses, or by considering the dynamic evolution of the memory space (grids, trees, ...) that holds crucial data in simulations. In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework to develop new programming techniques to speed up the computation time of simulations has only started to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up provided by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new tumor growth parallel model is validated using implementations in the Java and C++ languages on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Poleszczuk and Enderling model (normally used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and the overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, or the growth inhibition induced by chemotaxis, as well as the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
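
As an illustration of the row-partitioned, generation-synchronized update scheme that the paper implements with Java executors, here is a Python analogue using concurrent.futures. The local transition rule is a toy stand-in for the Poleszczuk-Enderling dynamics, and in CPython a true CPU speed-up would require processes or native code because of the GIL.

```python
# Sketch: each worker updates a disjoint strip of rows; a per-generation
# barrier (the completed map) precedes the grid swap.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def step_rows(args):
    grid, new, lo, hi = args                 # strip [lo, hi) of a square grid
    n = grid.shape[0]
    for i in range(lo, hi):
        for j in range(n):
            # Count occupied neighbours with toroidal boundary conditions.
            s = sum(grid[(i + di) % n, (j + dj) % n]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
            # Toy local rule standing in for the tumor-growth transition rule.
            new[i, j] = 1 if (grid[i, j] and 2 <= s <= 3) or (not grid[i, j] and s == 3) else 0

def evolve(grid, generations=10, workers=4):
    n = grid.shape[0]
    bounds = np.linspace(0, n, workers + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            new = np.zeros_like(grid)
            # Synchronization point: all strips finish before the swap.
            list(pool.map(step_rows, [(grid, new, bounds[k], bounds[k + 1])
                                      for k in range(workers)]))
            grid = new
    return grid

# Example: evolve(np.random.default_rng(0).integers(0, 2, (128, 128)), 5)
```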

Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up

Procedia PDF Downloads 215
214 Short and Long Crack Growth Behavior in Ferrite Bainite Dual Phase Steels

Authors: Ashok Kumar, Shiv Brat Singh, Kalyan Kumar Ray

Abstract:

There is growing awareness of the need to design steels against fatigue damage. Ferrite-martensite dual-phase steels are known to exhibit favourable mechanical properties like good strength, ductility, toughness, continuous yielding, and a high work hardening rate. However, dual-phase steels containing bainite as the second phase are potential alternatives to ferrite-martensite steels for certain applications where good fatigue properties are required. Fatigue properties of dual-phase steels are popularly assessed by the nature of the variation of crack growth rate (da/dN) with stress intensity factor range (∆K), and by the magnitude of the fatigue threshold (∆Kth) for long cracks. There is an increased emphasis on understanding not only the long crack fatigue behavior but also the short crack growth behavior of ferrite-bainite dual-phase steels. The major objective of this report is to examine the influence of microstructures on the short and long crack growth behavior of a series of developed dual-phase steels with varying amounts of bainite. Three low-carbon steels containing Nb, Cr and Mo as microalloying elements were selected for making ferrite-bainite dual-phase microstructures by suitable heat treatments. The heat treatment consisted of austenitizing the steel at 1100°C for 20 min, cooling at different rates in air prior to soaking in a salt bath at 500°C for one hour, and finally quenching in water. Tensile tests were carried out on 25 mm gauge length specimens with 5 mm diameter using a nominal strain rate of 0.6×10⁻³ s⁻¹ at room temperature. Fatigue crack growth studies were made on a recently developed specimen configuration using a rotating bending machine. The crack growth was monitored by interrupting the test and observing the specimens under an optical microscope connected to an image analyzer. The estimated crack lengths (a) at varying numbers of cycles (N) in different fatigue experiments were analyzed to obtain log da/dN vs. log ∆K curves for determining ∆Kthsc. The microstructural features of these steels have been characterized and their influence on near-threshold crack growth has been examined. This investigation, in brief, involves (i) the estimation of ∆Kthsc and (ii) the examination of the influence of microstructure on the short and long crack fatigue thresholds. The maximum fatigue threshold values obtained from short crack growth experiments on various specimens of dual-phase steels containing different amounts of bainite are found to increase with increasing bainite content in all the investigated steels. The variations in fatigue behavior of the selected steel samples have been explained by considering the varying amounts of the constituent phases and their interactions with the generated microstructures during cyclic loading. Quantitative estimation of the different types of fatigue crack paths indicates that the propensity of a crack to pass through the interfaces depends on the relative amounts of the microstructural constituents. The fatigue crack path is found to be predominantly intra-granular, except for the steels containing > 70% bainite, in which it is predominantly inter-granular.
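
For readers unfamiliar with the da/dN versus ∆K analysis, a Paris-law description, da/dN = C·(∆K)^m, reduces to a straight line in log-log coordinates, so C and m follow from a linear fit; a minimal sketch with made-up data:

```python
# Sketch: Paris-law fit of crack growth rate data in log-log coordinates.
import numpy as np

dK = np.array([8.0, 10.0, 12.0, 15.0, 20.0, 25.0])              # MPa*sqrt(m)
dadN = np.array([2e-9, 5e-9, 1.1e-8, 2.6e-8, 7.8e-8, 1.9e-7])   # m/cycle

# Straight-line fit: log10(da/dN) = m * log10(dK) + log10(C).
m, logC = np.polyfit(np.log10(dK), np.log10(dadN), 1)
print(f"Paris exponent m = {m:.2f}, coefficient C = {10**logC:.2e}")
```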

Keywords: bainite, dual phase steel, fatigue crack growth rate, long crack fatigue threshold, short crack fatigue threshold

Procedia PDF Downloads 187
213 Internet of Assets: A Blockchain-Inspired Academic Program

Authors: Benjamin Arazi

Abstract:

Blockchain is the technology behind cryptocurrencies like Bitcoin. It redefines the meaning of trust by offering total reliability without relying on any central entity that controls or supervises the system. The Wall Street Journal states: “Blockchain Marks the Next Step in the Internet’s Evolution”. Blockchain was listed as #1 on the LinkedIn Learning Blog’s list of “most in-demand hard skills needed in 2020”. As stated there: “Blockchain’s novel way to store, validate, authorize, and move data across the internet has evolved to securely store and send any digital asset”. GSMA, a leading organization of mobile communications operators, declared that “Blockchain has the potential to be for value what the Internet has been for information”. Motivated by these seminal observations, this paper presents the foundations of a Blockchain-based “Internet of Assets” academic program that joins under one roof leading application areas characterized by the transfer of assets over communication lines. Two such areas, which are pillars of our economy, are Fintech (financial technology) and mobile communications services; the next application in line is healthcare. These challenges are met on the basis of extensive available professional literature. Blockchain-based asset communication extends the principle of Bitcoin, starting with the basic question: if digital money that travels across the universe can ‘prove its own validity’, can this principle be applied to digital content? A groundbreaking positive answer here led to the concept of the “smart contract” and consequently to DLT (Distributed Ledger Technology), where the word ‘distributed’ refers to the non-existence of reliable central entities or trusted third parties. The terms Blockchain and DLT are frequently used interchangeably in various application areas. The World Bank Group has compiled comprehensive reports analyzing the contribution of DLT/Blockchain to Fintech. The European Central Bank and Bank of Japan are engaged in Project Stella, “Balancing confidentiality and auditability in a distributed ledger environment”. Some 130 DLT/Blockchain-focused Fintech startups are now operating in Switzerland. The impact of Blockchain on mobile communications services is treated in detail by leading organizations. The TM Forum is a global industry association in the telecom industry, with over 850 member companies, mainly mobile operators, that generate US$2 trillion in revenue and serve five billion customers across 180 countries. From their perspective: “Blockchain is considered one of the digital economy’s most disruptive technologies”. Samples of Blockchain contributions to Fintech (taken from a World Bank document): decentralization and disintermediation; greater transparency and easier auditability; automation and programmability; immutability and verifiability; gains in speed and efficiency; cost reductions; enhanced cyber security resilience. Samples of Blockchain contributions to the Telco industry: establishing identity verification; records of transactions for easy cost settlement; automatic triggering of roaming contracts, enabling near-instantaneous charging and a reduction in roaming fraud; decentralized roaming agreements; settling accounts for costs incurred in accordance with agreed tariffs. All of this clearly points to an academic structure in which the fundamental technologies are studied in classes together with these two application areas, while advanced courses treating specific implementations follow separately, all under the roof of an “Internet of Assets”.
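To make the “prove its own validity” idea concrete, the following minimal sketch (an illustration added here, not part of the program described above) shows how a hash-chained ledger exposes any retroactive tampering; all names and values are hypothetical.

```python
# Minimal hash-chained ledger sketch: each block commits to its predecessor's
# hash, so any retroactive edit breaks the chain's self-validation.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over the block's contents (excluding its own hash)."""
    payload = json.dumps({k: block[k] for k in ("index", "prev_hash", "data")},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev_hash, "data": data}
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    """Tampering with earlier data invalidates every later prev_hash link."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, "transfer asset A from Alice to Bob")
append_block(chain, "transfer asset B from Bob to Carol")
print(chain_is_valid(chain))        # True
chain[0]["data"] = "tampered"       # a retroactive edit...
print(chain_is_valid(chain))        # False -- the chain exposes it
```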

Keywords: blockchain, education, financial technology, mobile telecommunications services

Procedia PDF Downloads 155
212 Coupled Field Formulation – A Unified Method for Formulating Structural Mechanics Problems

Authors: Ramprasad Srinivasan

Abstract:

Engineers create inventions and put their ideas in concrete terms to design new products. Design drivers must be established, which requires, among other things, a complete understanding of the product design, load paths, etc. For aerospace vehicles, the weight/strength ratio, strength, stiffness, and stability are the important design drivers. A complex built-up structure is made up of an assemblage of primitive structural forms of arbitrary shape, which include 1D structures like beams and frames, 2D structures like membranes, plates, and shells, and 3D solid structures. Justification through simulation involves a check of all the quantities of interest, namely stresses, deformation, frequencies, and buckling loads, and is normally achieved through the finite element (FE) method. Over the past few decades, fiber-reinforced composites have been fast replacing traditional metallic structures in the weight-sensitive aerospace and aircraft industries due to their high specific strength, high specific stiffness, anisotropic properties, design freedom for tailoring, etc. Composite panel constructions are used in aircraft to design primary structural components like wings, empennage, ailerons, etc., while thin-walled composite beams (TWCB) are used to model slender structures like stiffened panels and helicopter and wind turbine rotor blades. The TWCB exhibits many non-classical effects like torsional and constrained warping, transverse shear, coupling effects, heterogeneity, etc., which makes the analysis of composite structures far more complex. Conventional FE formulations for 1D structures suffer from many limitations, such as shear locking (particularly in slender beams), lower convergence rates due to material coupling in composites, and the inability to satisfy equilibrium in the domain and natural boundary conditions (NBC). For 2D structures, the limitations of conventional displacement-based FE formulations include the inability to satisfy NBC explicitly and many pathological problems such as shear and membrane locking, spurious modes, stress oscillations, and lower convergence due to mesh distortion. This mandates frequent re-meshing even to achieve an acceptable mesh (one satisfying stringent quality metrics) for analysis, leading to significant cycle time. Besides, separate (u/p) formulations are currently needed to model incompressible materials, and a single unified formulation is missing from the literature. Hence, the coupled field formulation (CFF) is proposed by the author as a unified formulation for the solution of complex 1D and 2D structures, addressing the gaps in the literature mentioned above. The salient features of CFF and its many advantages over conventional methods are presented in this paper.
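To illustrate the shear-locking pathology mentioned above, the following sketch assembles a 2-node Timoshenko cantilever beam with full versus reduced integration of the shear term; it is a minimal illustration under assumed geometry and material values, not the author's CFF.

```python
# Shear locking demo: a linear 2-node Timoshenko cantilever. Full (exact)
# integration of the shear energy over-stiffens slender beams; a 1-point
# reduced rule does not. Geometry, load, and material values are assumed.
import numpy as np

def element_k(EI, GA, L, reduced):
    # bending stiffness couples only the rotations theta1, theta2
    kb = (EI / L) * np.array([[0, 0, 0, 0],
                              [0, 1, 0, -1],
                              [0, 0, 0, 0],
                              [0, -1, 0, 1]], float)
    if reduced:  # 1-point rule: shear strain sampled at the element midpoint
        B = np.array([-1 / L, -0.5, 1 / L, -0.5])
        ks = GA * L * np.outer(B, B)
    else:        # exact integration of the linear-linear shear term -> locking
        ks = GA * np.array([[1 / L, 0.5, -1 / L, 0.5],
                            [0.5, L / 3, -0.5, L / 6],
                            [-1 / L, -0.5, 1 / L, -0.5],
                            [0.5, L / 6, -0.5, L / 3]])
    return kb + ks

def tip_deflection(n_el, L, EI, GA, P, reduced):
    ndof = 2 * (n_el + 1)                     # dofs per node: (w, theta)
    K = np.zeros((ndof, ndof))
    for e in range(n_el):
        dofs = slice(2 * e, 2 * e + 4)
        K[dofs, dofs] += element_k(EI, GA, L / n_el, reduced)
    f = np.zeros(ndof); f[-2] = P             # transverse tip load
    free = slice(2, None)                     # clamp w = theta = 0 at node 0
    u = np.linalg.solve(K[free, free], f[free])
    return u[-2]

E, nu, b, P, L = 210e9, 0.3, 0.01, 1.0, 1.0
for h in (0.1, 0.01, 0.001):                  # increasingly slender beam
    EI = E * b * h**3 / 12
    GA = (5 / 6) * (E / (2 * (1 + nu))) * b * h   # includes shear factor 5/6
    exact = P * L**3 / (3 * EI) + P * L / GA      # Timoshenko reference value
    full = tip_deflection(10, L, EI, GA, P, reduced=False)
    red = tip_deflection(10, L, EI, GA, P, reduced=True)
    print(f"h={h}: full/exact={full/exact:.3f}, reduced/exact={red/exact:.3f}")
```

As the thickness h shrinks, the full-integration ratio collapses toward zero (the locked, spuriously stiff response) while the reduced-integration element stays close to the reference solution.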

Keywords: coupled field formulation, kinematic and material coupling, natural boundary condition, locking free formulation

Procedia PDF Downloads 49
211 An Agent-Based Approach to Examine Interactions of Firms for Investment Revival

Authors: Ichiro Takahashi

Abstract:

One conundrum that macroeconomic theory faces is to explain how an economy can revive from a depression in which aggregate demand has fallen substantially below productive capacity. This paper examines an autonomous stabilizing mechanism using an agent-based Wicksell-Keynes macroeconomic model. It focuses on the effects of the number of firms and the length of the gestation period for investment, both of which are often assumed to be one in mainstream macroeconomic models. The simulations found the virtual economy to be highly unstable, or more precisely, collapsing, when these parameters are fixed at one. This finding may even lead us to question the legitimacy of these common assumptions. A perpetual decline in capital stock will eventually encourage investment if the capital stock is short-lived, because inactive investment results in insufficient productive capacity. However, for an economy characterized by a roundabout production method, a gradually declining productive capacity may never fall below an aggregate demand that is also shrinking. Naturally, one would then ask: if our economy cannot rely on external stimuli such as population growth and technological progress to revive investment, what factors would provide such buoyancy for stimulating investment? The current paper attempts to answer this question by employing the artificial macroeconomic model mentioned above. The baseline model has the following three features: (1) multi-period gestation for investment, (2) a large number of heterogeneous firms, and (3) demand-constrained firms. The instability is a consequence of the following dynamic interactions. (a) Multi-period gestation means that once a firm starts a new investment, it continues to invest over several subsequent periods. During these gestation periods, the excess demand created by the investing firm spills over to ignite new investment by other firms that are supplying investment goods: the presence of multi-period gestation for investment provides a field for investment interactions. Conversely, the excess demand for investment goods tends to fade away before it develops into a full-fledged boom if the gestation period is short. (b) Strong demand in the goods market tends to raise the price level, thereby lowering real wages. This reduction of real wages creates two opposing effects on aggregate demand through the following two channels: (1) a reduction in real labor income, and (2) an increase in labor demand due to the principle of equality between marginal labor productivity and the real wage (referred to as the Walrasian labor demand). If there is only a single firm, a lower real wage will increase its Walrasian labor demand, but actual labor demand tends to be determined by the derived labor demand, so the second, positive effect would not work effectively. In contrast, in an economy with a large number of firms, Walrasian firms will increase employment. This interaction among heterogeneous firms is a key to stability: a single firm cannot expect the benefit of such an increased aggregate demand from other firms.
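As a rough illustration of the two parameters stressed above (the number of firms N and the gestation length G), the following toy simulation sketches a demand-spillover loop; its behavioral rules and constants are simplified assumptions made for exposition and are not the authors' Wicksell-Keynes model.

```python
# Toy sketch of multi-period gestation and investment spillovers among N firms.
# NOT the authors' model: rules and constants are illustrative assumptions.
import random

def simulate(N=50, G=4, T=200, seed=1):
    random.seed(seed)
    capital = [10.0] * N        # each firm's productive capacity
    pipeline = [0] * N          # periods remaining on each firm's project
    path = []
    for _ in range(T):
        # firms mid-gestation keep spending, creating demand spillovers
        spillover = sum(1 for p in pipeline if p > 0)
        base_demand = 0.8 * sum(capital) / N + 2.0 * spillover / N
        for i in range(N):
            capital[i] *= 0.97  # depreciation of short-lived capital
            demand_i = base_demand * random.uniform(0.8, 1.2)
            if pipeline[i] > 0:
                pipeline[i] -= 1
                if pipeline[i] == 0:
                    capital[i] += 3.0   # capacity arrives only at completion
            elif capital[i] < demand_i:
                pipeline[i] = G         # start a new G-period project
        path.append(sum(capital) / N)
    return path

for N, G in [(1, 1), (50, 4)]:
    print(f"N={N:2d}, G={G}: mean capacity after 200 periods = "
          f"{simulate(N=N, G=G)[-1]:.2f}")
```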

Keywords: agent-based macroeconomic model, business cycle, demand constraint, gestation period, representative agent model, stability

Procedia PDF Downloads 137
210 Comparison of Incidence and Risk Factors of Early Onset and Late Onset Preeclampsia: A Population Based Cohort Study

Authors: Sadia Munir, Diana White, Aya Albahri, Pratiwi Hastania, Eltahir Mohamed, Mahmood Khan, Fathima Mohamed, Ayat Kadhi, Haila Saleem

Abstract:

Preeclampsia is a major complication of pregnancy, and its prediction and management are a challenge for obstetricians. To our knowledge, no major progress has been achieved in the prevention and early detection of preeclampsia, and little is known about a clear treatment path for this disorder. Preeclampsia puts both mother and baby at risk of several short-term and long-term health problems later in life, and it and its complications impose a huge cost burden on the health care system. Preeclampsia is divided into two different types: early onset preeclampsia develops before 34 weeks of gestation, and late onset develops at or after 34 weeks of gestation. Different genetic and environmental factors, prognoses, heritability, and biochemical and clinical features are associated with early and late onset preeclampsia. The prevalence of preeclampsia varies greatly all over the world and depends on the ethnicity of the population and the geographic region. To the authors' best knowledge, no published data on preeclampsia exist for Qatar. This study reports the incidence of preeclampsia in Qatar, and its purpose is to compare the incidence and risk factors of early onset and late onset preeclampsia there. This retrospective longitudinal cohort study was conducted using data from the hospital records of the Women's Hospital, Hamad Medical Corporation (HMC), from May 2014 to May 2016. The data collection tool, approved by HMC, was a researcher-made extraction sheet that included information such as blood pressure at admission, sociodemographic characteristics, delivery mode, and newborn details. A total of 1,929 patient files were identified by the hospital information management by applying preeclampsia codes. Of the 1,929 files, 878 had significant gestational hypertension without proteinuria, 365 had preeclampsia, 364 had severe preeclampsia, and 188 had preexisting hypertension with superimposed proteinuria. In this study, 78% of the data was obtained from the hospital electronic system (Cerner) and the remaining 22% from patients' paper records. We performed detailed data extraction from 560 files. Initial data analysis revealed that 15.02% of pregnancies were complicated by preeclampsia from May 2014 to May 2016. We analyzed differences between the two disease entities in ethnicity, maternal age, severity of hypertension, mode of delivery, and infant birth weight, and identified promising differences in the risk factors of early onset and late onset preeclampsia. The clinical findings will contribute to increased knowledge about the two disease entities, their etiology, and their similarities and differences. The findings of this study can also be used in predicting health challenges, improving the health care system, setting up guidelines, and providing the best care for women suffering from preeclampsia.
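As a minimal illustration of the onset classification and incidence calculation described above, the following sketch runs on mock records; the column names and numbers are hypothetical, not the study's data.

```python
# Mock illustration of the early/late-onset split (cutoff: 34 weeks) and an
# incidence calculation; the dataframe below is fabricated for illustration.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5],
    "diagnosis": ["preeclampsia", "gestational_htn", "severe_preeclampsia",
                  "preeclampsia", "preexisting_htn_proteinuria"],
    "onset_week": [31, 36, 33, 38, 35],   # gestational age at diagnosis
})

pe = records[records["diagnosis"].str.contains("preeclampsia")].copy()
# Early onset: before 34 weeks of gestation; late onset: at or after 34 weeks
pe["onset_type"] = pe["onset_week"].apply(lambda w: "early" if w < 34 else "late")

incidence = len(pe) / len(records) * 100
print(pe[["patient_id", "onset_week", "onset_type"]])
print(f"preeclampsia incidence in this mock cohort: {incidence:.1f}%")
```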

Keywords: preeclampsia, incidence, risk factors, maternal

Procedia PDF Downloads 116
209 IEEE 802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With advances in wireless technology, the wireless sensor network (WSN) has become one of the most promising candidates to implement the wireless industrial internet of things (IIoT) architecture. However, legacy IEEE 802.15.4-based WSN technologies such as the Zigbee system cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIoT environment. Recently, the IEEE society developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose. Furthermore, the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e with the IPv6 protocol smoothly, forming a complete protocol stack for the IIoT. In this work, we develop key network technologies for the IEEE 802.15.4e-based wireless IIoT architecture, focusing on practical design and system implementation. We realize an OpenWSN-based wireless IIoT system whose architecture is divided into three main parts: web server, network manager, and sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and instruct them to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information: it executes the centralized scheduling algorithm, sends the scheduling table to each node, and manages the sensing tasks of each device. Sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure proper operation. To enhance the performance of such a system, we further propose dynamic bandwidth allocation and distributed scheduling mechanisms. The developed distributed scheduling mechanism enables each individual sensor node to build, maintain, and manage the dedicated link bandwidth with its parent and child nodes based on locally observed information, by exchanging Add/Delete commands via two processes. The first process, termed the schedule initialization process, allows each sensor node pair to identify the available idle slots and allocate the basic dedicated transmission bandwidth. The second process, termed the schedule adjustment process, enables each sensor node pair to adjust their allocated bandwidth dynamically according to the measured traffic loading. Such technology can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low: a multi-frame retransmission mechanism that allows every network node to resend each packet at least a predefined number of times, with the multi-frame architecture built according to the number of layers of the network topology. Performance results via simulation reveal that this retransmission scheme provides sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of the IIoT can be achieved.
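The following toy sketch illustrates the two-process cell negotiation described above (6TiSCH-style Add/Delete between a node pair); the slotframe size and thresholds are illustrative assumptions, and this is not the authors' OpenWSN implementation.

```python
# Toy sketch of distributed TSCH cell negotiation for one parent-child link:
# process 1 allocates basic bandwidth from idle slots, process 2 grows/shrinks
# it from measured load. Slotframe size and thresholds are assumptions.
SLOTFRAME = 16  # slots per slotframe

class NodePair:
    def __init__(self, busy_slots):
        self.busy = set(busy_slots)   # slots already taken by other links
        self.cells = set()            # dedicated cells of this link

    def schedule_init(self, n_cells):
        """Process 1: allocate basic bandwidth from mutually idle slots."""
        idle = [s for s in range(SLOTFRAME) if s not in self.busy]
        for slot in idle[:n_cells]:
            self.add(slot)

    def schedule_adjust(self, queue_len):
        """Process 2: adapt bandwidth to the measured traffic loading."""
        if queue_len > len(self.cells):            # overloaded -> ADD a cell
            idle = [s for s in range(SLOTFRAME)
                    if s not in self.busy and s not in self.cells]
            if idle:
                self.add(idle[0])
        elif queue_len < len(self.cells) - 1:      # underused -> DELETE a cell
            self.delete(next(iter(self.cells)))

    def add(self, slot):      # "Add" command exchanged by the pair
        self.cells.add(slot)

    def delete(self, slot):   # "Delete" command exchanged by the pair
        self.cells.discard(slot)

link = NodePair(busy_slots={0, 1, 5})
link.schedule_init(n_cells=2)
print("after init:", sorted(link.cells))
for load in (4, 4, 1):        # measured queue lengths over time
    link.schedule_adjust(load)
    print(f"load={load} ->", sorted(link.cells))
```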

Keywords: IEEE 802.15.4e, industrial internet of things (IIoT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 134
208 Loss Quantification of Archaeological Sites in a Watershed Due to the Use and Occupation of Land

Authors: Elissandro Voigt Beier, Cristiano Poleto

Abstract:

The main objective of this research is to assess loss through the quantification of material culture (archaeological fragments) in rural areas, at sites economically exploited by mechanized farming of seasonal and permanent crops, in a hydrographic subsystem of the Camaquã River in the state of Rio Grande do Sul, Brazil. The study area consists of different micro-basins that differ in area, ranging between 1,000 m² and 10,000 m², all with a large number of occurrences and outcrop locations of archaeological material, and with a high density of such material in an intensely farmed environment. The first stage of the research aimed to identify the dispersion of archaeological material through a field survey, plotting points with the Global Positioning System (GPS) within each river basin; a concise bibliography on the topic in the region was also used, helping to understand the ancient landscape and the occupation preferences of historical peoples, through their settlements, in relation to the practices observed in the field. The mapping was followed by cartographic development for the region, producing cartographic products of land elevation; these products contributed to understanding the distribution of the materials, the definition and scope of the dispersed material, and the turnover of material in situ caused by mechanization as a result of human activities. It was also necessary to prepare density maps of the materials found, linking natural environments conducive to ancient historical occupation with current human occupation. The third stage of the project is the systematic collection of archaeological material without alteration of, or interference in, the subsurface of the indigenous settlements; the material was then prepared and treated in the laboratory to remove excess soil, cleaned following previously reported methodology, measured, and quantified. Approximately 15,000 archaeological fragments were identified, belonging to different periods of the ancient history of the region, all collected outside of their environmental and historical context, which has itself been heavily changed and modified. The material was identified and cataloged considering features such as object weight, size, and type of material (lithic, ceramic, bone, historical porcelain, and their association with ancient history), disregarding attributes such as the individual lithology and functionality of each object. As preliminary results, we can point out the displacement of materials by heavy mechanization and the consequent soil disturbance processes, which transport archaeological materials. Therefore, as a next step, an estimate of potential losses will be sought through a mathematical model. This process is expected to yield a reliable, high-accuracy model that can be applied to archaeological sites of lower density without significant error.
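As a minimal illustration of the density-mapping step described above, the following sketch bins GPS-plotted fragment finds into a regular grid; the coordinates and cell size are hypothetical.

```python
# Fragment-density sketch: surface finds binned into a regular grid, as in the
# density maps mentioned above. All coordinates and the cell size are invented.
import numpy as np

# (x, y) positions of surface finds within one micro-basin, in meters
finds = np.array([[12.1, 30.4], [13.0, 31.2], [13.4, 29.8],
                  [55.6, 72.3], [56.1, 71.0], [80.0, 10.5]])

cell = 10.0  # grid cell size in meters
counts, xedges, yedges = np.histogram2d(
    finds[:, 0], finds[:, 1],
    bins=[np.arange(0, 101, cell), np.arange(0, 101, cell)])

i, j = np.unravel_index(np.argmax(counts), counts.shape)
print(f"densest cell: x in [{xedges[i]:.0f}, {xedges[i+1]:.0f}) m, "
      f"y in [{yedges[j]:.0f}, {yedges[j+1]:.0f}) m, "
      f"{int(counts[i, j])} fragments")
```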

Keywords: degradation of heritage, quantification in archaeology, watershed, use and occupation of land

Procedia PDF Downloads 247
207 Fabrication of High Energy Hybrid Capacitors from Biomass Waste-Derived Activated Carbon

Authors: Makhan Maharjan, Mani Ulaganathan, Vanchiappan Aravindan, Srinivasan Madhavi, Jing-Yuan Wang, Tuti Mariana Lim

Abstract:

There is great interest in exploiting sustainable, low-cost, renewable resources as carbon precursors for energy storage applications, and research on the development of energy storage devices has been growing rapidly due to the mismatch between power supply and demand from renewable energy sources. This paper reports the synthesis of porous activated carbon from biomass waste and evaluates its performance in supercapacitors. In this work, we employed orange peel (a waste material) as the starting material and synthesized activated carbon by pyrolysis of KOH-impregnated orange peel char at 800 °C in an argon atmosphere. The resultant orange peel-derived activated carbon (OP-AC) exhibited a high BET surface area of 1,901 m2 g-1, which is the highest surface area so far reported for orange peel. The pore size distribution (PSD) curve exhibits pores centered at a width of 11.26 Å, suggesting dominant microporosity. The OP-AC was studied as a positive electrode in combination with different negative electrode materials, such as pre-lithiated graphite (LiC6) and Li4Ti5O12, to make different hybrid capacitors. The lithium-ion capacitor (LIC) fabricated using OP-AC with pre-lithiated graphite delivered a high energy density of ~106 Wh kg–1, while the energy density of the OP-AC||Li4Ti5O12 capacitor was ~35 Wh kg–1. For comparison purposes, OP-AC||OP-AC capacitors were studied in both aqueous (1 M H2SO4) and organic (1 M LiPF6 in EC-DMC) electrolytes, delivering energy densities of 6.6 Wh kg-1 and 16.3 Wh kg-1, respectively. The cycling retentions obtained at a current density of 1 A g–1 were ~85.8, ~87.0, ~82.2, and ~58.8% after 2,500 cycles for the OP-AC||OP-AC (aqueous), OP-AC||OP-AC (organic), OP-AC||Li4Ti5O12, and OP-AC||LiC6 configurations, respectively. In addition, characterization studies were performed by elemental and proximate composition, thermogravimetry, field emission scanning electron microscopy (FE-SEM), Raman spectroscopy, X-ray diffraction (XRD), Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy (XPS), and N2 sorption isotherms. The morphological features from FE-SEM exhibited well-developed porous structures. Two typical broad peaks observed in the XRD pattern of the synthesized carbon imply an amorphous graphitic structure, and the ID/IG ratio of 0.86 in the Raman spectra indicates a high degree of graphitization in the sample. The C 1s band spectra in XPS display well-resolved peaks related to carbon atoms in various chemical environments; for instance, the characteristic binding energies appeared at ~283.83, ~284.83, ~286.13, ~288.56, and ~290.70 eV, corresponding to sp2-graphitic C, sp3-graphitic C, C-O, C=O, and π-π*, respectively. The characterization studies revealed the synthesized carbon to be a promising electrode material for energy storage devices. These findings open up the possibility of developing high-energy LICs from abundant, low-cost, renewable biomass waste.
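For the symmetric OP-AC||OP-AC cells, gravimetric energy density is conventionally estimated from the standard relation E = (1/2)CV²; the sketch below uses assumed cell values (not the measured data of this study) to show why the wider voltage window of the organic electrolyte raises the energy density quadratically.

```python
# Standard energy-density relation for a symmetric capacitor cell; the cell
# capacitance and mass below are illustrative assumptions, not measured data.
def energy_density_wh_per_kg(capacitance_f, voltage_v, mass_kg):
    """Gravimetric energy: E = 0.5*C*V^2 in joules, converted to Wh/kg."""
    return 0.5 * capacitance_f * voltage_v**2 / 3600.0 / mass_kg

# Same assumed cell, aqueous vs organic electrolyte: energy grows with V^2.
c_f, m_kg = 10.0, 0.001            # 10 F cell capacitance, 1 g of electrodes
for name, v in [("aqueous (~1.0 V window)", 1.0), ("organic (~2.7 V window)", 2.7)]:
    print(f"{name}: {energy_density_wh_per_kg(c_f, v, m_kg):.1f} Wh/kg")
```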

Keywords: lithium-ion capacitors, orange peel, pre-lithiated graphite, supercapacitors

Procedia PDF Downloads 207
206 Nutritional Genomics Profile Based Personalized Sport Nutrition

Authors: Eszter Repasi, Akos Koller

Abstract:

Our genetic information determines our appearance, physiology, sports performance, and many other features. Efforts to maximize the performance of athletes have adopted a science-based approach to nutritional support. Genetic studies have now blended with nutritional science, and a dynamically evolving new research field has appeared, one that nutritional experts need to use. Nutritional genomics is a recent field of nutritional science that can provide a path to the best sport performance by using correlations between the athlete's genome, nutrition, and molecules, including the human microbiome (the links between food, the microbiome, and epigenetics), nutrigenomics, and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technology to get information about athletes, such as a nutritional genomics profile (including determination of the oral and gut microbiome and DNA-coded reactions to food components), which can modify the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics; the heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in the state of health and well-being, and there are many links between food or nutrition and the composition of the human microbiome, which can lead to diseases and epigenetic changes as well. A nutritional genomics-based profile of athletes can be the best technique for a dietitian to construct a unique sports nutrition diet plan, since using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best responses to the effects of nutrients on health, through the way metabolites promoted by an altered genome result in changes in physiology. Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of that nutrient. Nutritional deficiencies and failures that are controlled, changes in health state that are prevented, or a newly discovered food intolerance that is observed by a proper medical team can all support better sports performance. It is important that the dietetics profession be informed on gene-diet interactions, which may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for documenting and monitoring health-state data and risk factors can support and warn the medical team, enabling early action and proper health service in time. This model can provide personalized nutrition advice from status assessment, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in the case of athletes.

Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan

Procedia PDF Downloads 148
205 Accessible Facilities in Home Environment for Elderly Family Members in Sri Lanka

Authors: M. A. N. Rasanjalee Perera

Abstract:

The world is facing several problems due to its increasing elderly population. In Sri Lanka, along with the complexity of modern society and the structural and functional changes of the family, “caring for elders” is emerging as a social problem, and this situation may intensify as the country moves into middle-income status. Seeking higher education and related career opportunities, and urban living in modern housing, are new trends through which several problems are generated. Among the many issues related to elders, a lack of accessible and appropriate facilities in their houses, as well as in public buildings, can be identified as a major problem. This study argues that the welfare facilities provided for elderly people in the country, particularly in the home environment, are not adequate. It is questionable whether modern housing features such as bathrooms, pantries, lobbies, and leisure areas match elders' physical and mental needs; consequently, elders face domestic accidents and many other difficulties within their living environments, a fact also supported by the records of hospitals in the country. This study therefore examines how far modern houses are suited to elders' needs, and further questions whether “aging” is a real consideration when people buy, plan, and renovate houses. A randomly selected sample of 50 houses was observed and 50 persons were interviewed around the Maharagama urban area in Colombo district to obtain primary data, while relevant secondary data and information were used for an in-depth analysis. The study clearly found that none of the houses in the sample considered elders' needs in the planning, renovating, or arranging of the home. Instead, most families gave priority to a rich and elegant appearance and to the modern facilities of the houses; in particular, bathrooms, pantries, large sitting areas, balconies, parking slots for two vehicles, and parapet walls with roller gates were the main concerns. A significant finding is that even though many children of the aged are middle-aged and approaching their own older years, they do not plan their future living around a safe and comfortable home, despite hoping to spend the latter part of their lives in their current homes. This highlights that not only the other responsible parts of society, but also those who are themselves reaching old age, are ignoring the problems of the aged. At the same time, it was found that more than 80% of old parents do not like to stay at their children's homes, as the living environments in such modern homes are not familiar or convenient for them. In this context, the aged in Sri Lanka may have to live alone in their own homes, given the current societal trend of migrating to urban living in modern houses, while current urban families living in modern houses may face the task of adding accessible facilities to their home environments, as modern housing facilities may not be appropriate for a better life in their latter years.

Keywords: aging population, elderly care, home environment, housing facilities

Procedia PDF Downloads 105
204 Row Detection and Graph-Based Localization in Tree Nurseries Using a 3D LiDAR

Authors: Ionut Vintu, Stefan Laible, Ruth Schulz

Abstract:

Agricultural robotics has been developing steadily over recent years, with the goals of reducing and even eliminating the pesticides used on crops and of increasing productivity by taking over human labor. The majority of crops are arranged in rows. The first step towards autonomous robots capable of driving in fields and performing crop-handling tasks is for the robots to robustly detect the rows of plants. Recent work on autonomous driving between plant rows offers big robotic platforms equipped with various expensive sensors as a solution to this problem. These platforms need to be driven over the rows of plants, an approach that lacks flexibility and scalability when it comes to the height of plants or the distance between rows. This paper proposes instead an algorithm that makes use of cheaper sensors and offers greater versatility. The main application is in tree nurseries, where plant height can range from a few centimeters to a few meters and where trees are often removed, leading to gaps within the plant rows. The core idea is to combine row detection algorithms with graph-based localization methods as they are used in SLAM. Nodes in the graph represent the estimated poses of the robot, and the edges embed constraints between these poses or between the robot and certain landmarks. This setup aims to improve individual plant detection and to deal with exception handling, such as row gaps being falsely detected as row ends. Four methods were developed for detecting row structures in the fields, all using a point cloud acquired with a 3D LiDAR as input. Comparing field coverage and the number of damaged plants, the method that uses a local map around the robot proved to perform best, with 68% covered rows and 25% damaged plants. This method is further combined with a graph-based localization algorithm, which uses the local map features to estimate the robot's position within the greater field. Testing the upgraded algorithm in a variety of simulated fields shows that the additional information obtained from localization provides a boost in performance over methods that rely purely on perception to navigate. The final algorithm achieved a row coverage of 80% with 27% damaged plants. Future work will focus on achieving a perfect score of 100% covered rows and 0% damaged plants. The main challenges the algorithm needs to overcome are fields where the plants are too small to be detected and fields where it is hard to distinguish between individual plants that overlap. The method was also tested on a real robot in a small field with artificial plants, using a small robot platform equipped with wheel encoders, an IMU, and an FX10 3D LiDAR. Over ten runs, the system achieved 100% coverage and 0% damaged plants. The framework built within the scope of this work can be further used to integrate data from additional sensors, with the goal of achieving even better results.
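As a toy illustration of the row-detection idea (not one of the four methods evaluated in the paper), the following sketch recovers a dominant row line from ground-plane-projected points with a simple RANSAC fit on synthetic data.

```python
# RANSAC row-line sketch: LiDAR points projected to the ground plane, with one
# dominant row line recovered despite clutter and gaps. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
# synthetic row along y = 0.5*x + 1.0, plus clutter (weeds, gaps, noise)
xs = rng.uniform(0, 10, 60)
row = np.column_stack([xs, 0.5 * xs + 1.0 + rng.normal(0, 0.05, 60)])
clutter = rng.uniform([0.0, -2.0], [10.0, 8.0], (30, 2))
pts = np.vstack([row, clutter])

def ransac_line(pts, iters=200, tol=0.15):
    best_count, best_inliers = -1, None
    for _ in range(iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(d[0] * (pts[:, 1] - p[1]) - d[1] * (pts[:, 0] - p[0])) / norm
        inliers = dist < tol
        if inliers.sum() > best_count:
            best_count, best_inliers = inliers.sum(), inliers
    # least-squares refit (y = a*x + b) on the consensus set
    a, b = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return a, b, best_count

a, b, n = ransac_line(pts)
print(f"detected row line: y = {a:.2f}*x + {b:.2f} ({n} inlier points)")
```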

Keywords: 3D LiDAR, agricultural robots, graph-based localization, row detection

Procedia PDF Downloads 106
203 A Multipurpose Inertial Electrostatic Magnetic Confinement Fusion for Medical Isotopes Production

Authors: Yasser R. Shaban

Abstract:

A practical multipurpose device for medical isotope production is much wanted by clinical centers and researchers. Unfortunately, the major supply of these radioisotopes currently comes from aging sources, and there is a great deal of uneasiness in the domestic market. There are also many cases where the cost of certain radioisotopes is too high for their introduction on a commercial scale, even though the isotopes might have great benefits for society. Medical isotopes such as PET (Positron Emission Tomography) radiotracers, Technetium-99m, Iodine-131, and Lutetium-177 can feasibly be generated by a single unit named the IEMC (Inertial Electrostatic Magnetic Confinement). The IEMC fusion vessel is an upgrade of the Inertial Electrostatic Confinement (IEC) fusion vessel, on which comprehensive experimental work was carried out earlier with promising results. The principle of inertial electrostatic magnetic confinement fusion is based on forcing the binary fuel ions to interact in opposite directions in ion cyclotron orbits, with different kinetic energies in order to have equal compression (forces) and with different ion cyclotron frequencies ω in order to increase the rate of intersection. The IEMC features a fusion volume greater than that of the IEC by several orders of magnitude. The particle rates from the IEMC approach are projected to be 8.5 x 10¹¹ (p/s), or ~0.2 microamperes of protons, for the D/He-3 fusion reaction and 4.2 x 10¹² (n/s) for the D/T fusion reaction. These projected yields of particles (neutrons and protons) are suitable for on-site medical isotope production by a single unit, with no change in the fusion vessel but only in the fuel gas. PET radiotracers are usually produced on-site by a medical ion accelerator, whereas Technetium-99m (Tc-99m) is usually produced off-site at the irradiation facilities of nuclear power plants: typically, hospitals receive a molybdenum-99 isotope container, and the isotope decays to Tc-99m with a half-life of 2.75 days. Even though the projected current from the IEMC is smaller than the proton current from a medical ion accelerator, the IEMC vessel is simpler and reduced in components and power consumption, which adds the new value of making PET radiotracers available in most clinical centers. On the other hand, the projected neutron flux from the IEMC is lower than the thermal neutron flux at the irradiation facilities of nuclear power plants, but in the IEMC case the production of Technetium-99m is suggested to take place in the resonance region, where the resonance integral cross section is two orders of magnitude higher than its thermal counterpart; thus, it can be said that the net activity from both evens out. Besides, a particle accelerator cannot be considered a multipurpose particle production unit unless a significant change is made to the accelerator to switch from neutron mode to proton mode or vice versa. In conclusion, the projected fusion yield from the IEMC is straightforward to obtain, since only slight changes to the earlier IEC and its ion source are required.
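The "different ion cyclotron frequency ω" above follows from the standard relation ω = qB/m; the worked example below evaluates it for deuterons, tritons, and He-3 ions under an assumed, illustrative field strength.

```python
# Worked example of the ion cyclotron relation omega = qB/m: ions of different
# charge-to-mass ratio gyrate at different frequencies in the same field.
# The magnetic field value B is an illustrative assumption.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

ions = {
    "D+  (deuteron)": (1, 2.0141),   # (charge number, mass in amu)
    "T+  (triton)":   (1, 3.0160),
    "He-3++":         (2, 3.0160),
}

B = 1.0  # tesla; an assumed, illustrative field strength
for name, (z, m_amu) in ions.items():
    omega = z * E_CHARGE * B / (m_amu * AMU)        # rad/s
    print(f"{name}: f_c = {omega / (2 * math.pi) / 1e6:5.2f} MHz at B = {B} T")
```

The printout shows that the D/T and D/He-3 fuel pairs gyrate at distinct frequencies, which is the property the confinement scheme exploits to increase the rate of intersection.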

Keywords: electrostatic versus magnetic confinement fusion vessel, ion source, medical isotopes productions, neutron activation

Procedia PDF Downloads 326
202 The Publishing Process and Results of the Chinese Annotated Edition of John Dewey’s “Experience and Education: The 60th Anniversary Edition”

Authors: Wen-jing Shan

Abstract:

The Chinese annotated edition of “Experience and education: The 60th anniversary edition,” originally written in English by John Dewey (1859-1952), was published in 2015 by this author. The purpose of this paper is to report the process and results of the translation and annotation of the book. It is worth mentioning that the original 1938 edition was considered the best concise statement on education by John Dewey, one of the most important educational theorists of the twentieth century. One of the features of the 60th anniversary edition is that the original publisher, the Kappa Delta Pi International Honor Society, invited four contemporary Deweyan scholars, each of whom had been awarded the Society's Laureate Scholar honor, to write a review of the book published by Dewey, who had been the first to receive that honor. The four scholars are Maxine Greene (1917-2014), Philip W. Jackson (1928-2015), Linda Darling-Hammond (1951-), and O. L. Davis, Jr. (1928-). The original 1938 edition was translated into Chinese five times after its publication in the U.S.A.: three times in the 1940s, once in the 1990s, and once in the 2010s. Nonetheless, the five translations have few or no annotations, contain some misinterpretations, and lack information. The author retranslated and annotated the book to make the interpretation more faithful, expressive, and elegant, providing readers with better understanding and more correct information. The author started the project of translation and annotation, sponsored by the Taiwan Ministry of Science and Technology, in August 2011 and finished and published the work by July 2015. The work was divided into three stages. First, in the preparatory stage of the project, the summary of each chapter, the rationale of the book, the textual commentary, the development of the original and Chinese editions, and reviews and criticisms, as well as Dewey's biography and bibliography, were investigated. Secondly, on the basis of this preliminary work, the annotated translation of Experience and Education, an epitome of Dewey's biography and bibliography, a chronology, and a critical introduction to Experience and Education were written. In the critical introduction, Dewey's philosophy of experience and educational ideas are examined along the timeline of human thought, and the vast literature about Dewey and his work is used to reveal the historical significance of Experience and Education for the modern age and to make the critical introduction more knowledgeable. Third, the final stage took another two years to review and revise the draft of the work and send it for publication. There are two parts to the book. The first part is a scholarly introduction including Dewey's chronicle (in short form); Dewey's mind, people, and life; the importance of “Experience and education”; and the necessity of its re-translation and re-annotation into Chinese. The second part is the re-translated and re-annotated version, including Dewey's “Experience and education” and the four papers written by contemporary scholars.

Keywords: John Dewey, experience and education: the 60th anniversary edition, translation, annotation

Procedia PDF Downloads 128
201 Real-Time Neuroimaging for Rehabilitation of Stroke Patients

Authors: Gerhard Gritsch, Ana Skupch, Manfred Hartmann, Wolfgang Frühwirt, Hannes Perko, Dieter Grossegger, Tilmann Kluge

Abstract:

The rehabilitation of stroke patients is dominated by classical physiotherapy. An active field of research is the application of neurofeedback techniques to help stroke patients overcome their motor impairments; especially if a certain limb is completely paralyzed, neurofeedback is often the last option to treat the patient. Certain exercises, like imagining the impaired motor function, have to be performed to stimulate the neuroplasticity of the brain, such that the corresponding activity takes place in the parts of the cortex neighboring the injured region. During the exercises, it is very important to keep the patient's motivation at a high level. For this reason, the natural feedback that is missing due to the absence of movement of the affected limb may be replaced by a synthetic feedback based on motor-related brain function. To generate such a synthetic feedback, a system is needed which measures, detects, localizes, and visualizes the motor-related µ-rhythm. Fast therapeutic success can only be achieved if the feedback has high specificity and comes in real time without large delay. We describe such an approach that offers a 3D visualization of µ-rhythms in real time with a delay of 500 ms. This is accomplished by combining smart EEG preprocessing in the frequency domain with source localization techniques. The algorithm first selects the EEG channel featuring the most prominent rhythm in the alpha frequency band from a so-called motor channel set (C4, CZ, C3; CP6, CP4, CP2, CP1, CP3, CP5). If the amplitude in the alpha frequency band of this electrode exceeds a threshold, a µ-rhythm is detected. To prevent detection of a mixture of posterior alpha activity and µ-activity, the amplitudes in the alpha band outside the motor channel set are not allowed to be in the same range as those of the main channel. The EEG signal of the main channel is used as a template for calculating the spatial distribution of the µ-rhythm over all electrodes. This spatial distribution is the input for an inverse method which provides the 3D distribution of the µ-activity within the brain, visualized as a color-coded 3D activity map. This approach mitigates the influence of eyelid artifacts on localization performance. First results from several healthy subjects show that the system is capable of detecting and localizing the rarely appearing µ-rhythm, and in most cases the results match findings from visual EEG analysis. Frequent eyelid artifacts have no influence on system performance. Furthermore, the system will be able to run in real time; due to the design of the frequency transformation, the processing delay is 500 ms. These first results are promising, and we plan to extend the test data set to further evaluate the performance of the system. The relevance of the system with respect to the therapy of stroke patients has to be shown in studies with real patients after CE certification of the system. This work was performed within the project ‘LiveSolo’ funded by the Austrian Research Promotion Agency (FFG) (project number: 853263).
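As a simplified illustration of the channel-selection and threshold step described above (not the certified system's code), the following sketch estimates alpha-band amplitudes with a Welch spectrum and applies the motor-set selection plus the posterior-alpha guard; all thresholds and the synthetic data are assumptions.

```python
# Simplified mu-rhythm detection sketch: pick the strongest alpha-band motor
# channel, threshold it, and reject if non-motor channels are comparably strong
# (likely posterior alpha). Thresholds and test data are assumptions.
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate in Hz
MOTOR = ["C4", "CZ", "C3", "CP6", "CP4", "CP2", "CP1", "CP3", "CP5"]

def alpha_amplitude(x, fs=FS, band=(8.0, 13.0)):
    """RMS amplitude in the alpha band from a Welch power estimate."""
    f, pxx = welch(x, fs=fs, nperseg=fs)
    mask = (f >= band[0]) & (f <= band[1])
    return np.sqrt(np.sum(pxx[mask]) * (f[1] - f[0]))

def detect_mu(eeg, threshold=1.0, guard_ratio=0.8):
    """eeg: dict of channel name -> 1-D signal; returns the mu channel or None."""
    amp = {ch: alpha_amplitude(x) for ch, x in eeg.items()}
    main = max((ch for ch in amp if ch in MOTOR), key=amp.get)
    if amp[main] < threshold:
        return None                     # no prominent rhythm in the motor set
    # guard: channels outside the motor set must stay clearly below the main one
    others = [a for ch, a in amp.items() if ch not in MOTOR]
    if others and max(others) > guard_ratio * amp[main]:
        return None                     # likely posterior alpha, not mu
    return main

# synthetic 2-second test: a 10 Hz rhythm on C3, broadband noise elsewhere
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
eeg = {ch: rng.normal(0.0, 0.2, t.size) for ch in MOTOR + ["O1", "O2"]}
eeg["C3"] = eeg["C3"] + 2.0 * np.sin(2 * np.pi * 10.0 * t)
print("detected mu channel:", detect_mu(eeg))
```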

Keywords: real-time EEG neuroimaging, neurofeedback, stroke, EEG–signal processing, rehabilitation

Procedia PDF Downloads 362
200 Examining the Effects of Ticket Bundling Strategies and Team Identification on Purchase of Hedonic and Utilitarian Options

Authors: Young Ik Suh, Tywan G. Martin

Abstract:

Bundling is a common marketing practice today, and over the past decades both academics and practitioners have increasingly emphasized its strategic importance in today's markets. The reason for this increased interest is the belief that bundling can significantly increase profits on an organization's sales over time while being convenient for the customer. However, little effort has been made to study ticket bundling and purchase considerations for hedonic and utilitarian options in the sport consumer behavior context. Consumers often face choices between utilitarian and hedonic alternatives in decision making: when consumers purchase certain products, some are interested only in the functional dimensions, called utilitarian dimensions, while others focus more on hedonic features such as fun, excitement, and pleasure. Thus, the current research examines how utilitarian and hedonic consumption can vary in the typical ticket purchasing process. The purpose of this research is to address the following two themes: (1) the differential effect of discount framing on ticket bundling with utilitarian versus hedonic options, and (2) the moderating effect of team identification on ticket bundling. To test the research hypotheses, an experimental study will be conducted using a two-way ANOVA in a 3 (team identification: low, medium, high) x 2 (discount frame: ticket bundled with a utilitarian product vs. a hedonic product) mixed factorial design, to determine whether purchase intentions differ significantly between the two discount frames of ticket bundle sales across team identification levels. To compare mean differences between the two settings, we will create two ticket bundle conditions: (1) offering a discount on a ticket ($5 off) if it is purchased along with a utilitarian product (e.g., iPhone8 case, t-shirt, cap), and (2) offering a discount on a ticket ($5 off) if it is purchased along with a hedonic product (e.g., pizza, drink, fans featured on the big screen). The findings of the current ticket bundling study are expected to make many theoretical and practical contributions by extending the research and literature on the relationship between team identification and sport consumer behavior. Specifically, this study can provide a reliable and valid framework for understanding the role of team identification as a moderator of behavioral intentions such as purchase intentions. From an academic perspective, the study will be the first known attempt to understand consumer reactions toward different discount frames related to ticket bundling. Even though the game ticket itself is the major commodity of sport event attendance and is significantly related to teams' revenue streams, most recent ticket pricing research has been done in terms of economic or cost-oriented pricing and not from a consumer psychology perspective. For sport practitioners, this study will also provide significant implications: the results may imply that sport marketers need to develop different ticketing promotions for loyal and non-loyal fans. Since loyal fans care more about the ticket price than about tie-in products when they see ticket bundle sales, advertising campaigns should focus more on discounting the ticket price.
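As a sketch of the planned 3 x 2 analysis (simplified here to a between-subjects two-way ANOVA on simulated responses), the following code illustrates the test setup; the effect sizes and data are fabricated purely for illustration and do not anticipate the study's results.

```python
# Simulated 3 (identification) x 2 (frame) ANOVA setup; all data are fabricated.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
rows = []
for ident in ("low", "medium", "high"):
    for frame in ("utilitarian", "hedonic"):
        base = {"low": 3.0, "medium": 4.0, "high": 5.0}[ident]
        # an assumed interaction: high-identification fans respond a bit more
        # to the utilitarian frame (purely illustrative)
        bump = 0.5 if (ident == "high" and frame == "utilitarian") else 0.0
        for _ in range(30):  # 30 simulated participants per cell
            rows.append({"identification": ident, "frame": frame,
                         "intent": base + bump + rng.normal(0, 1)})
df = pd.DataFrame(rows)

model = ols("intent ~ C(identification) * C(frame)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and the interaction term
```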

Keywords: ticket bundling, hedonic, utilitarian, team identification

Procedia PDF Downloads 138