Search results for: mobile grid computing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3500

350 Digital and Technological Transformation of Cappadocia Valleys: Kızılçukur, Meskendir, Güllüdere 1, Güllüdere 2

Authors: Şenay Güngör, Emre Elbaşi, Beyda Sadikğlu, Utku Eren Bağci, Ömer Uzunel

Abstract:

One of the first places that comes to mind when it comes to tourism in Turkey is the Cappadocia Region. Due to its rich geological and geomorphological heritage, Cappadocia is one of the most visited destinations in the world. In fact, in the first half of 2023, the number of international tourists visiting Cappadocia exceeded 2 million. Considering that the economy of the Cappadocia region is largely based on tourism, the quality and technology integration levels of the touristic services offered in the region are clearly of great importance. In this context, observations made in the Kızılçukur, Meskendir, Güllüdere 1 and Güllüdere 2 valleys, which host the important hiking routes of the Cappadocia Region, showed that the level of digitalization of the routes is insufficient. Mobile network signal in the area is very weak or lost entirely. In addition, the materials such as maps and brochures used by tourism agencies to introduce the valleys were found to be simple and incomplete. This situation is thought to negatively affect tourists' orientation and touristic experience in the field. Eliminating these deficiencies, improving the digital level of the above-mentioned hiking routes and increasing the added value of the destinations are among the main objectives of our study. Within the scope of the study, a mobile application that works both online and offline on the hiking routes was prepared. 3D modeling of the Kızılçukur, Meskendir, Güllüdere 1 and Güllüdere 2 valleys was carried out using Geographical Information Systems (GIS). In addition, a website was created to enable tourists to easily access all the above-mentioned information, visuals and technological applications related to the routes. As is known, the effective use of information and communication technologies in touristic regions not only increases the satisfaction levels of tourists but also helps attract qualified tourists to the region. When the tangible and intangible outputs of this study are evaluated, it is expected to serve the social and economic development of the region and to set an example for the digital transformation of other routes in the region.

Keywords: nevşehir, cappadocia, cappadocia valleys, tourism route

Procedia PDF Downloads 56
349 CO₂ Recovery from Biogas and Successful Upgrading to Food-Grade Quality: A Case Study

Authors: Elisa Esposito, Johannes C. Jansen, Loredana Dellamuzia, Ugo Moretti, Lidietta Giorno

Abstract:

The reduction of CO₂ emission into the atmosphere as a result of human activity is one of the most important environmental challenges to face in the next decades. Emission of CO₂, related to the use of fossil fuels, is believed to be one of the main causes of global warming and climate change. In this scenario, the production of biomethane from organic waste, as a renewable energy source, is one of the most promising strategies to reduce fossil fuel consumption and greenhouse gas emission. Unfortunately, biogas upgrading still produces the greenhouse gas CO₂ as a waste product. Therefore, this work presents a case study on biogas upgrading, aimed at the simultaneous purification of methane and CO₂ via different steps, including CO₂/methane separation by polymeric membranes. The original objective of the project was the upgrading of biogas to distribution-grid-quality methane, but the innovative aspect of this case study is the further purification of the captured CO₂, transforming it from a useless by-product into a pure gas of food-grade quality, suitable for commercial application in the food and beverage industry. The study was performed on a pilot plant constructed by Tecno Project Industriale Srl (TPI), Italy. This is a model of one of the largest biogas production and purification plants. The full-scale anaerobic digestion plant (Montello Spa, Northern Italy) has a digestion capacity of 400,000 tons of biomass/year and can treat 6,250 m³/hour of biogas from FORSU (organic fraction of solid urban waste). The entire upgrading process consists of a number of purification steps: 1. Dehydration of the raw biogas by condensation. 2. Removal of trace impurities such as H₂S via absorption. 3. Separation of CO₂ and methane via a membrane separation process. 4. Removal of trace impurities from CO₂. The gas separation with polymeric membranes guarantees complete simultaneous removal of microorganisms. The chemical purity of the different process streams was analysed by a certified laboratory and was compared with the guidelines of the European Industrial Gases Association and the International Society of Beverage Technologists (EIGA/ISBT) for CO₂ used in the food industry. The microbiological purity was compared with the limit values defined in the European Collaborative Action. With a purity of 96-99 vol%, the purified methane meets the legal requirements for the household network. At the same time, the CO₂ reaches a purity of > 98.1% before, and 99.9% after, the final distillation process. According to the EIGA/ISBT guidelines, the CO₂ proves to be chemically and microbiologically sufficiently pure to be suitable for food-grade applications.
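For context, gas transport through the dense polymeric membranes used in step 3 is conventionally described by the solution-diffusion model. The following relations are the standard textbook forms (not quoted from the paper itself): the permeate flux of each component is driven by its partial-pressure difference, and the CO₂/CH₄ selectivity sets how sharply the two gases can be separated:

$$J_i = \frac{P_i}{l}\left(p_{i,\mathrm{feed}} - p_{i,\mathrm{perm}}\right), \qquad \alpha_{\mathrm{CO_2/CH_4}} = \frac{P_{\mathrm{CO_2}}}{P_{\mathrm{CH_4}}}$$

where $J_i$ is the flux of component $i$, $P_i$ its permeability, $l$ the selective-layer thickness, and $p_{i,\mathrm{feed}}$ and $p_{i,\mathrm{perm}}$ its partial pressures on the feed and permeate sides. Multi-stage membrane cascades of this kind are what allow the process to push both the methane (96-99 vol%) and CO₂ (> 98.1%) streams to high purity simultaneously.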

Keywords: biogas, CO₂ separation, CO₂ utilization, CO₂ food grade

Procedia PDF Downloads 212
348 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System Under Uncertainty

Authors: Ben Khayut, Lina Fabri, Maya Avikhana

Abstract:

Models of modern Artificial Narrow Intelligence (ANI) cannot: a) function independently and continuously without human intelligence, which is needed to retrain and reprogram the ANI models, or b) think, understand, be conscious, cognize, infer, and more under uncertainty and amid changes in situations and environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System (CPCSFSCBMSUU). The system uses a neural network as its computational memory, operates under uncertainty, and activates its functions through perception and identification of real objects, fuzzy situational control, and the forming of images of these objects, modeling the psychological, linguistic, cognitive, and neural values of their properties and features. The meanings of these values are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, knowledge, and images accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems for: fuzzy situational control of all processes; computational perception; identification of reactions and actions; psycholinguistic cognitive fuzzy logical inference; decision making; reasoning; systems thinking; planning; awareness; consciousness; cognition; intuition; wisdom; analysis and processing of psycholinguistic, subject, visual, signal, sound and other objects; accumulation and use of data, information and knowledge in the Memory; and communication and interaction with other computing systems, robots and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of Situational Control, Fuzzy Logic, Psycholinguistics, Informatics, and modern possibilities of Data Science were applied. The proposed self-controlled Brain and Mind System is intended for use as a plug-in in multilingual subject applications.
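As an illustration of the fuzzy situational control ingredient named above, the sketch below implements a minimal Mamdani-style fuzzy inference step in Python. The triangular membership functions, the two situational inputs (signal strength, novelty), the rule base, and the centroid values are all illustrative assumptions; the abstract does not disclose the system's actual rules:

```python
# Minimal sketch of the fuzzy-inference ingredient of situational control,
# under the assumption (not stated in the abstract) that triangular
# membership functions and Mamdani-style min/max composition are used.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def situation_risk(signal_strength, novelty):
    # Fuzzify two hypothetical situational inputs on a 0..1 scale.
    weak = tri(signal_strength, -0.5, 0.0, 0.6)
    strong = tri(signal_strength, 0.4, 1.0, 1.5)
    familiar = tri(novelty, -0.5, 0.0, 0.6)
    novel = tri(novelty, 0.4, 1.0, 1.5)
    # Rules: weak signal AND novel situation -> high uncertainty, etc.
    high = min(weak, novel)
    low = min(strong, familiar)
    medium = max(min(weak, familiar), min(strong, novel))
    # Defuzzify as a weighted average of rule centroids (0.1, 0.5, 0.9).
    total = low + medium + high
    return (0.1 * low + 0.5 * medium + 0.9 * high) / total if total else 0.5

print(situation_risk(0.2, 0.8))  # weak signal, novel situation -> 0.9
```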

Keywords: computational brain, mind, psycholinguistic, system, under uncertainty

Procedia PDF Downloads 177
347 Transforming Data Science Curriculum Through Design Thinking

Authors: Samar Swaid

Abstract:

Today, corporations are moving toward the adoption of Design-Thinking techniques to develop products and services, putting the consumer at the heart of the development process. One of the leading companies in Design-Thinking, IDEO, defines Design-Thinking as an approach to problem-solving that relies on a set of multi-layered skills, processes, and mindsets that help people generate novel solutions to problems. Design thinking may result in new ideas, narratives, objects or systems. It is about redesigning systems, organizations, infrastructures, processes, and solutions in an innovative fashion based on users' feedback. Tim Brown, president and CEO of IDEO, sees design thinking as a human-centered approach that draws from the designer's toolkit to integrate people's needs, innovative technologies, and business requirements. The application of design thinking has proven to be the road to developing innovative applications, interactive systems, scientific software, and healthcare applications, and even to rethinking business operations, as in the case of Airbnb. Recently, there has been a movement to apply design thinking to machine learning and artificial intelligence to ensure the "wow" effect on consumers. The Association for Computing Machinery task force on Data Science programs states that "Data scientists should be able to implement and understand algorithms for data collection and analysis. They should understand the time and space considerations of algorithms. They should follow good design principles developing software, understanding the importance of those principles for testability and maintainability." However, this definition hides the user behind the machine who works on data preparation, algorithm selection and model interpretation. Thus, the proposed Data Science program includes design thinking to ensure meeting user demands, generating more usable machine learning tools, and developing ways of framing computational thinking. Here, we describe the fundamentals of Design-Thinking and present teaching modules for data science programs.

Keywords: data science, design thinking, AI, curriculum, transformation

Procedia PDF Downloads 81
346 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling's T² chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to the T² chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated based on the data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart, which is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined the updating region for an efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
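A minimal sketch of the time-adaptive idea follows. SVDD with an RBF kernel is known to be equivalent to the one-class SVM, which is used here as a stand-in via scikit-learn; the exponential forgetting factor is an assumption that mirrors the paper's time-varying weighting factor, not its exact formulation:

```python
# Sketch of a time-adaptive SVDD-style control chart. SVDD with an RBF
# kernel is equivalent to the one-class SVM, used here as a stand-in;
# the forgetting factor `lam` is an illustrative assumption.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(200, 4))      # in-control Phase I data

lam = 0.99                                          # forgetting factor
ages = np.arange(len(X_train))[::-1]                # 0 = newest sample
weights = lam ** ages                               # newer samples weigh more

chart = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
chart.fit(X_train, sample_weight=weights)

# Monitor new observations: negative decision values fall outside the
# data description and are flagged as out-of-control signals.
X_new = rng.normal(0.5, 1.0, size=(5, 4))           # drifted process
scores = chart.decision_function(X_new)
print(np.where(scores < 0, "out-of-control", "in-control"))
```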

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 299
345 A Measurement and Motor Control System for Free Throw Shots in Basketball Using Gyroscope Sensor

Authors: Niloofar Zebarjad

Abstract:

This research aims at providing basketball players with a tool for real-time audio feedback on their shooting form in free throw shots. Free throws play a pivotal role in taking the lead in fierce competitions. The major problem in performing an accurate free throw seems to be improper training. Since the arm movement during the free throw shot is complex, the coach or the athlete might miss the movement details during practice. Hence, there is a need for a system that measures the critical characteristics of arm movement and controls for improper kinematics. The setup proposed in this study quantifies arm kinematics using a gyroscope sensor and provides real-time feedback as an audio signal. Spatial shoulder angle data are transmitted to a mobile application in real time and can be saved and processed for statistical analysis. The proposed system is easy to use, inexpensive, portable, and applicable in real time. Objectives: This research aims to modify and control the free throw using audio feedback, determine if and to what extent the new setup reduces errors in arm formation during throws, and finally assess the successful throw rate. Methods: One group of elite basketball athletes and two groups of novice athletes (control and study groups) participated in this study. Each group contained 5 participants, studied in three separate sessions over a week. Results: Empirical results showed enhancements in the free throw shooting style in the shot pocket (SP) and locked position (LP). The mean values of the shoulder angle were controlled at 25° and 45° for SP and LP, respectively, as recommended by valid FIBA references. Conclusion: Throughout the experiments, the system helped correct and control the shoulder angles toward the targeted pattern of shot pocket (SP) and locked position (LP). Given the desired results for arm motion, adding another sensor to measure and control the elbow angle is recommended.
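A minimal sketch of the feedback rule implied by the abstract follows: compare the gyroscope-derived shoulder angle with the target for the current phase (25° shot pocket, 45° locked position) and emit an audio cue when the error leaves a tolerance band. The ±5° tolerance, the simple phase detection, and the cue names are illustrative assumptions:

```python
# Sketch of the real-time audio-feedback rule: beep when the measured
# shoulder angle leaves a tolerance band around the phase target.
# The +/-5 deg band and the phase-detection heuristic are assumptions.
TARGETS = {"shot_pocket": 25.0, "locked_position": 45.0}
TOLERANCE_DEG = 5.0

def feedback(phase: str, shoulder_angle_deg: float) -> str:
    error = shoulder_angle_deg - TARGETS[phase]
    if abs(error) <= TOLERANCE_DEG:
        return "ok"                      # silent: form is within band
    return "beep_high" if error > 0 else "beep_low"

# Hypothetical stream of gyroscope-derived shoulder angles (degrees).
for angle in [18.0, 24.0, 26.5, 39.0, 47.2, 52.8]:
    phase = "shot_pocket" if angle < 35 else "locked_position"
    print(f"{phase:>15}: {angle:5.1f} deg -> {feedback(phase, angle)}")
```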

Keywords: audio-feedback, basketball, free-throw, locked-position, motor-control, shot-pocket

Procedia PDF Downloads 294
344 Lineament Analysis as a Method of Mineral Deposit Exploration

Authors: Dmitry Kukushkin

Abstract:

Lineaments form complex grids on the Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of maps of lineament density in an attempt to locate various geological structures. But lineament grids are made up of global, regional and local components, and this superimposition of lineament grids of various scales renders the method less effective. Besides, erosion processes and the erosional resistance of rocks lying on the surface play a significant role in the formation of lineament grids. Therefore, the specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation to local geological structures. Our method allows us to confidently determine the location and boundaries of local geological structures that are likely to contain mineral deposits. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upward of 200%, and a stable correlation to local geological structures containing mineral deposits. Our method considers a lineament grid as a general lineament field - the surface manifestation of the stress and strain fields of the Earth associated with geological structures of global, regional and local scales. Each of these structures has its own field of brittle dislocations that appears on the surface as its lineament field. Our method allows singling out local components by suppressing the global and regional components of the general lineament field. The remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - a direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by seismic survey; 3. Active gold mine of Kadara (Chita Region) - confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) - determined the boundaries of the granite massif that controls mineralization; 5. An object promising for hydrocarbon exploration in the north of Algeria - correlated with the results of geological, geochemical and geophysical surveys. For both Kadara and Davenda, the method demonstrated that the intense anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate gold content at commercial levels. Our method of suppressing the global and regional components isolates the local lineament field. In the early stages of geological exploration for oil and gas, this allows determining the boundaries of various geological structures with very high reliability. Therefore, our method allows optimization of the placement of seismic profiles and exploratory drilling equipment, which leads to a reduction in the costs of prospecting and exploration of deposits, as well as an acceleration of their commissioning.
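The suppression of global and regional components can be read as a scale-separation filter on the lineament density grid. The sketch below illustrates that reading: a broad, smooth component is estimated and subtracted, and the residual is treated as the local field. Gaussian smoothing as the regional estimator, the sigma values, and the 2-sigma anomaly threshold are assumptions for illustration; the paper does not disclose its actual operator:

```python
# Sketch of "suppress global/regional components" as scale separation:
# treat the broad, smooth part of the lineament density grid as the
# regional field and keep the residual as the local field.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
density = rng.random((256, 256))            # stand-in lineament density map
density += gaussian_filter(rng.random((256, 256)), sigma=40) * 3  # regional trend

regional = gaussian_filter(density, sigma=25)   # broad component (assumed)
local = density - regional                      # residual local field

# Anomalies well above the mean mark candidate local structures.
threshold = local.mean() + 2 * local.std()
print("candidate cells:", int((local > threshold).sum()))
```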

Keywords: lineaments, mineral exploration, oil and gas, remote sensing

Procedia PDF Downloads 304
343 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
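A minimal sketch of the mixed PEM-FORM idea follows, using Rosenblueth's classic 2^n-point scheme for uncorrelated variables with symmetric distributions and a normal fit for the LSF moments. The simple resistance-minus-load limit state is purely illustrative; in the paper, g is evaluated through an FEM model of the bridge:

```python
# Sketch of Rosenblueth's point estimate method (PEM) mixed with a
# FORM-style reliability index, for an implicit limit state function g(X).
from itertools import product
from math import erf, sqrt

def pem_moments(g, means, stds):
    # 2^n evaluation points at mu_i +/- sigma_i, equal weights 1/2^n
    # (valid for uncorrelated variables with symmetric distributions).
    pts = list(product(*[(m - s, m + s) for m, s in zip(means, stds)]))
    vals = [g(*p) for p in pts]
    m1 = sum(vals) / len(vals)
    m2 = sum(v * v for v in vals) / len(vals)
    return m1, sqrt(max(m2 - m1 * m1, 0.0))

# Hypothetical limit state: resistance R minus load effect S.
g = lambda R, S: R - S
mu_g, sigma_g = pem_moments(g, means=[300.0, 150.0], stds=[30.0, 25.0])

beta = mu_g / sigma_g                       # FORM-style reliability index
pf = 0.5 * (1 - erf(beta / sqrt(2)))        # Phi(-beta): normal-fit step
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```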

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, monte carlo simulation

Procedia PDF Downloads 346
342 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks

Authors: Salman Naseer, Usman Zafar, Iqra Zafar

Abstract:

This study explores the implications of Vehicular Adhoc Networks (VANETs) - one domain of Mobile Adhoc Networks (MANETs) - in rural and urban scenarios. VANETs provide wireless communication between vehicles and also with roadside units. The Federal Communications Commission of the United States has allocated 75 MHz of spectrum in the 5.9 GHz band for dedicated short-range communications (DSRC), specifically designed to enhance road safety and entertainment/information applications. Several vehicular projects, viz. California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety and traffic management. After a critical literature review, the routing protocols were selected, and their performance was considered in urban and rural scenarios. Numerous routing protocols for VANETs were applied to carry out the current research. Their evaluation was conducted via simulation using the performance metrics of throughput and packet drop. Excel and the Google graph API were used for plotting graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the sum of the outputs from each scenario was computed to clearly present the divergence in results. The findings of the current study show that DSR gives enhanced performance (low packet drop and high throughput) compared to AODV and DSDV in congested urban areas and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The worth of the current study may be judged by the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment. Furthermore, communication system performance depends on how routing is done in the network, and in particular on the routing protocols implemented in the network. The above-presented results lead to policy implications and develop our understanding of the broader spectrum of VANETs.
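For reference, the two metrics used in the evaluation reduce to simple trace arithmetic. The sketch below shows the conventional definitions applied to hypothetical simulation counts (packet size, counts and times are illustrative, not the study's data):

```python
# Sketch of the two evaluation metrics named in the study, computed from
# hypothetical simulation trace counts.
def throughput_kbps(received_packets: int, packet_size_bytes: int,
                    sim_time_s: float) -> float:
    return received_packets * packet_size_bytes * 8 / sim_time_s / 1000

def packet_drop_ratio(sent: int, received: int) -> float:
    return (sent - received) / sent

sent, received = 10_000, 8_600          # illustrative urban AODV run
print(f"throughput: {throughput_kbps(received, 512, 200.0):.1f} kbps")
print(f"drop ratio: {packet_drop_ratio(sent, received):.1%}")
```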

Keywords: AODV, DSDV, DSR, Adhoc network

Procedia PDF Downloads 286
341 Determining Disparities in the Distribution of the Energy Efficiency Resource through the History of Michigan Policy

Authors: M. Benjamin Stacey

Abstract:

Energy efficiency has been increasingly recognized as a high-value resource through state policies that require utility companies to implement efficiency programs. While policymakers have recognized the statewide economic, environmental, and health-related value to residents who rely on this grid-supplied resource, varying interests in energy efficiency between socioeconomic groups stand undifferentiated in most state legislation. Instead, the benefits are oftentimes assumed to be distributed equitably across these groups. Despite this fact, these policies are frequently cited by advocacy groups, regulatory bodies and utility companies for their ability to address the negative financial, health and other social impacts of energy poverty in low income communities. Yet, while most states like Michigan require programs that target low income consumers, oftentimes no requirements exist for equitable investment and energy savings for low income consumers, nor do they stipulate minimum spending levels on low income programs. To further understand the impact of the absence of these factors in legislation, this study examines the distribution of program funds and energy efficiency savings to answer a fundamental energy justice concern: are there disparities in the investment and benefits of energy efficiency programs between socioeconomic groups? This study compiles data covering the history of Michigan's energy efficiency policy implementation from 2010-2016, analyzing the energy efficiency portfolios of Michigan's two main energy providers. To make accurate comparisons between these two energy providers' investments and energy savings in low income and non-low income programs, the socioeconomic variation for each utility coverage area was captured and accounted for using GIS and US Census data. Interestingly, this study found that both providers invested more equitably in natural gas efficiency programs; however, together these providers invested roughly three times less per household in low income electricity efficiency programs, which resulted in ten times less electricity savings per household. This study also compares variation between commission-approved utility plans and actual spending and savings results, with varying patterns pointing to differing portfolio management strategies between companies. This study reveals that over the history of the implementation of Michigan's energy efficiency policy, the 35% of Michigan's population who qualify as low income have received substantially disproportionate funding and energy savings under the policy. This study provides an overview of the results from a social perspective, raises concerns about the impact on energy poverty and equity between consumer groups, and is an applicable tool for lawmakers, regulatory agencies, utility portfolio managers, and advocacy groups concerned with addressing issues related to energy poverty.

Keywords: energy efficiency, energy justice, low income, state policy

Procedia PDF Downloads 187
340 Cross-Sectional Study Investigating the Prevalence of Uncorrected Refractive Error and Visual Acuity through Mobile Vision Screening in the Homeless in Wales

Authors: Pakinee Pooprasert, Wanxin Wang, Tina Parmar, Dana Ahnood, Tafadzwa Young-Zvandasara, James Morgan

Abstract:

Homelessness has been shown to be correlated with poor health outcomes, including increased visual health morbidity. Despite this, there are relatively few studies regarding visual health in the homeless population, especially in the UK. This research aims to investigate the visual disability and access barriers prevalent in the homeless population of Cardiff, South Wales. Data were collected from 100 homeless participants in three different shelters. Visual outcomes included near and distance visual acuity as well as non-cycloplegic refraction. Qualitative data were collected via a questionnaire covering socio-demographic profile, ocular history, subjective visual acuity and level of access to healthcare facilities. Based on the participants' presenting visual acuity, the total prevalence of myopia and hyperopia was 17.0% and 19.0%, respectively, based on the spherical equivalent of the eye with the greatest absolute value. The prevalence of astigmatism was 8.0%. The mean absolute spherical equivalent was 0.841 D and 0.853 D for the right and left eye, respectively. The proportion of participants with sight loss (defined as VA = 6/12-6/60 in the better-seeing eye) was 27.0%, in comparison to 0.89% and 1.1% in the general Cardiff and Wales populations, respectively (p < 0.05). Additionally, 1.0% of the homeless subjects were registered blind (VA less than 3/60), in comparison to 0.17% for the national census after age standardization. Most participants had good knowledge regarding access to prescription glasses and eye examination services. Despite this, 85.0% had never had their eyes examined by a doctor, and 73.0% had their last optometrist appointment more than 5 years ago. These findings suggest that there is a significant disparity in ocular health, including visual acuity and refractive error, between the homeless and the general population. Further, the homeless were less likely to receive the same level of support and continued care in the community due to access barriers. These included a number of socio-economic factors, such as travel expenses and the regional availability of services, as well as administrative shortcomings. In conclusion, this research demonstrated unmet visual health needs within the homeless population, and inclusive policy changes may need to be implemented for better healthcare outcomes within this marginalized community.
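For reference, the spherical equivalent on which the myopia/hyperopia classification rests is the standard optometric quantity (textbook definition, with sphere $S$ and cylinder $C$ in diopters):

$$SE = S + \frac{C}{2}$$

Negative and positive $SE$ beyond the study's chosen cutoffs then correspond to myopia and hyperopia, respectively.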

Keywords: homelessness, refractive error, visual disability, Wales

Procedia PDF Downloads 172
339 Impact of Preksha Meditation on Academic Anxiety of Female Teenagers

Authors: Neelam Vats, Madhvi Pathak Pillai, Rajender Lal, Indu Dabas

Abstract:

The pressure of scoring higher marks to gain admission to a higher-ranked institution has become a social stigma for school students. It leads to various social and academic pressures on them, causing psychological anxiety. This undue stress on students may sometimes even lead to aggressive behavior or suicidal tendencies. The human mind is always surrounded by desires, emotions and passions, which usually disturb our mental peace. In such a scenario, we look for a solution that helps remove all the obstacles of the mind and makes us mentally peaceful and strong enough to deal with all kinds of pressure. Preksha meditation is one such technique, which aims at bringing about positive changes for an overall transformation of personality. Hence, the present study was undertaken to assess the impact of Preksha Meditation on the academic anxiety of female teenagers. The study was conducted on 120 high school students from the capital city of India. All students were in the age group of 13-15 years and belonged to similar social and economic backgrounds. The sample was equally divided into two groups, i.e., an experimental group (N = 60) and a control group (N = 60). Subjects of the experimental group were given the intervention of Preksha Meditation practice by a trained instructor for one hour per day, six days a week, for three months in the first experimental stage and another three months in the second experimental stage. The subjects of the control group were not assigned any specific activity and continued their normal activities as usual. The Academic Anxiety Scale was used to collect data at multiple stages, i.e., the pre-experimental stage, post-experimental stage phase I, and post-experimental stage phase II. The data were statistically analyzed by computing the two-tailed t-test for inter-group comparisons and Sandler's A test for intra-group comparisons, at a significance level of p < 0.05. The study concluded that longer-duration practice of Preksha Meditation brings about very significant and beneficial changes in the pattern of academic anxiety.
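For reference, Sandler's A statistic used for the intra-group comparisons is computed from the paired differences $d_i$, $i = 1, \dots, N$, and is algebraically equivalent to the paired t-test (standard definitions, not specific to this study):

$$A = \frac{\sum_{i=1}^{N} d_i^{2}}{\left(\sum_{i=1}^{N} d_i\right)^{2}}, \qquad A = \frac{N-1}{N\,t^{2}} + \frac{1}{N}$$

so smaller values of A correspond to larger values of t and hence stronger evidence of change.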

Keywords: academic anxiety, academic pressure, Preksha, meditation

Procedia PDF Downloads 131
338 Performance and Specific Emissions of an SI Engine Using Anhydrous Ethanol–Gasoline Blends in the City of Bogota

Authors: Alexander García Mariaca, Rodrigo Morillo Castaño, Juan Rolón Ríos

Abstract:

The government of Colombia has promoted the use of biofuels over the last 20 years through laws and resolutions that regulate their use, with the objective of improving atmospheric air quality and promoting the Colombian agricultural industry. However, despite the use of biofuel blends with fossil fuels, the air quality in large cities has not improved. This deterioration of the air is mainly caused by mobile sources working with spark ignition internal combustion engines (SI-ICE) operating on a blend of 90% gasoline and 10% ethanol by volume, called E10, which in the case of Bogota represents 84% of the fleet. Another problem is that Colombia has big cities located above 2200 masl, and there are no accurate studies on the impact that the E10 mixture could have on the emissions and performance of SI-ICE at such altitudes. This study aims to establish the optimal gasoline-ethanol blend at which an SI engine operates most efficiently in urban centres located at 2600 masl. The tests were performed on a four-stroke, single-cylinder, naturally aspirated SI engine with carburettor fuel supply, using blends of gasoline and anhydrous ethanol in the ratios E10, E15, E20, E40, E60, E85 and E100. These tests were conducted in the city of Bogota, which is located at 2600 masl, with the engine operating at 3600 rpm and at 25, 50, 75 and 100% of load. The results show that engine performance variables such as brake torque, brake power and brake thermal efficiency decrease, while brake specific fuel consumption increases, as the percentage of ethanol in the mixture rises. On the other hand, the specific emissions of CO₂ and NOx increase, while the specific emissions of CO and HC decrease, compared to those produced by gasoline. From the tests, it is concluded that the SI-ICE worked most efficiently with the E40 mixture, which yielded an increase in brake power of 8.81% and a reduction in brake specific fuel consumption of 2.5%, coupled with reductions in the specific emissions of CO₂, HC and CO of 9.72, 52.88 and 76.66%, respectively, compared to the results obtained with the E10 blend. This behaviour occurs because the E40 mixture provides the appropriate amount of oxygen for the combustion process, which leads to better utilization of the available energy, thus generating a power output comparable to the E10 blend while producing lower CO and HC emissions than the other test blends. Nevertheless, the emission of NOx increased by 106.25%.
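For reference, the two performance measures are linked by standard definitions (textbook forms, not specific to this paper), with brake power $P_b$ in kW, fuel mass flow $\dot{m}_f$ in kg/h, and lower heating value $\mathrm{LHV}$ in MJ/kg:

$$\mathrm{BSFC} = \frac{\dot{m}_f}{P_b}, \qquad \eta_{th} = \frac{3.6\,P_b}{\dot{m}_f\,\mathrm{LHV}} = \frac{3.6}{\mathrm{BSFC}\cdot\mathrm{LHV}}$$

Since ethanol's heating value is roughly two-thirds that of gasoline, more fuel mass must be burned per unit of brake work as the ethanol fraction rises, which is consistent with the increasing BSFC the authors report.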

Keywords: emissions, ethanol, gasoline, engine, performance

Procedia PDF Downloads 323
337 Applying Integrated QFD-MCDM Approach to Strengthen Supply Chain Agility for Mitigating Sustainable Risks

Authors: Enes Caliskan, Hatice Camgoz Akdag

Abstract:

There is no doubt that humanity needs to recognize the sustainability problems in the world and take serious action regarding them. In 2015, all members of the United Nations adopted the 2030 Agenda for Sustainable Development, the most comprehensive international study on sustainability. The study is summarized in 17 Sustainable Development Goals, covering everything about sustainability, such as environment, society and governance. The use of Information and Communication Technology (ICT), such as the Internet, mobile phones, and satellites, is essential for tackling the main issues facing sustainable development. Hence, the contributions of three major ICT companies to the sustainable development goals are assessed in this study. Quality Function Deployment (QFD) is utilized as the methodology. Since QFD is an excellent instrument for comparing businesses on relevant subjects, a House of Quality must be established to complete the QFD application. In order to develop a House of Quality, the demanded qualities (voice of the customer) and quality characteristics (technical requirements) must first be determined. The UN SDGs are used as the demanded qualities. The quality characteristics are derived from the annual sustainability and corporate social responsibility reports of the ICT companies. As indicated by the QFD results, the companies' efforts are concentrated on the use of recycled raw materials and recycling; reducing GHG emissions through energy saving and improved connectivity; decarbonizing the value chain; protecting the environment and water resources by collaborating with businesses that have completed CDP water assessments and reducing water consumption; ethical business practices; and reducing inequality. The evaluations of the three businesses are found to be very similar when compared. The small differences between the companies usually concern the region they serve. The companies' efforts mostly concentrate on the responsible consumption and production, life below water, climate action, and sustainable cities and communities goals. These efforts include improving connectivity in areas of need to provide access to information, education and healthcare.
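A minimal sketch of the House of Quality arithmetic behind such a comparison follows: each technical requirement's importance is the weight of each demanded quality (here, an SDG) multiplied by the strength of its relationship to that requirement, summed over all demanded qualities. The SDG subset, the weights, and the 9/3/1 relationship scores below are illustrative assumptions, not the paper's actual matrix:

```python
# Sketch of the House of Quality arithmetic: technical-requirement
# importance = sum over demanded qualities of (SDG weight x relationship
# strength). All names, weights and scores are illustrative.
import numpy as np

demanded = ["climate action", "life below water", "responsible consumption"]
weights = np.array([0.4, 0.3, 0.3])          # assumed SDG priorities

tech_reqs = ["recycled materials", "GHG reduction", "water stewardship"]
# Rows: demanded qualities; columns: technical requirements (9/3/1 scale).
R = np.array([[3, 9, 1],
              [1, 3, 9],
              [9, 3, 3]])

importance = weights @ R                      # absolute importance scores
for req, score in zip(tech_reqs, importance / importance.sum()):
    print(f"{req:>20}: {score:.1%}")
```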

Keywords: multi-criteria decision-making, sustainable supply chain risk, supply chain agility, quality function deployment, Sustainable development goals

Procedia PDF Downloads 47
336 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid phase extraction (SPE) and elution from a weak cation exchanger cartridge. The SPE C18 method yields a final product of high purity, while elution from the weak cation exchanger cartridge is pH dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds for cases in which the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of pH problems or colloidal impurities. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE C18 cartridge. The cartridge was washed with 10 mL of saline to remove impurities into the waste vial. The product trapped in the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial through a 0.22 µm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by HPLC (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: The UV and radioactivity detector results of the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which the weak cation exchange purification technique is used as the last step. The purification of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are two major advantages.

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 160
335 An Unusual Cause of Electrocardiographic Artefact: Patient's Warming Blanket

Authors: Sanjay Dhiraaj, Puneet Goyal, Aditya Kapoor, Gaurav Misra

Abstract:

In electrocardiography, an artefact denotes a signal that is not generated by the heart. Although technological advancements have produced monitors capable of providing accurate information and reliable heart rate alarms, interference in the displayed electrocardiogram still occurs. This interference can come from the various electrical gadgets present in the operating room or from electrical signals from other parts of the body. Artefacts may also occur due to poor electrode contact with the body or machine malfunction. Knowing these artefacts is of utmost importance so as to avoid unnecessary and unwarranted diagnostic as well as interventional procedures. We report a case of ECG artefacts caused by a patient warming blanket, and its consequences. A 20-year-old male with a preoperative diagnosis of exstrophy-epispadias complex was scheduled for surgery under epidural and general anaesthesia. Just after endotracheal intubation, we observed nonspecific ECG changes on the monitor. At first glance, the monitor strip revealed broad QRS complexes suggesting a ventricular bigeminal rhythm. Closer analysis revealed these to be artefacts: although the complexes looked broad at first glance, there was a clear presence of normal sinus complexes, each immediately followed by a 'broad complex', or artefact, produced by some device or connection. These broad complexes were labeled as artefacts because they originated in the absolute refractory period of the previous normal sinus beat; it would be physiologically impossible for the myocardium to depolarize rapidly enough to produce a second QRS complex at that point. A search for the possible cause of the artefacts was made. After deepening the plane of anaesthesia, ruling out possible electrolyte abnormalities, checking the ECG leads and their connections, changing monitors, checking all other monitoring connections, and checking the grounding of the anaesthesia machine and OT table, we found that after switching off the patient's warming apparatus the rhythm returned to normal sinus and the 'broad complexes', or artefacts, disappeared. As misdiagnosis of ECG artefacts may subject patients to unnecessary diagnostic and therapeutic interventions, thorough knowledge of the patient and monitors allows a quick interpretation and resolution of the problem.

Keywords: ECG artefacts, patient warming blanket, peri-operative arrhythmias, mobile messaging services

Procedia PDF Downloads 272
334 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than alphanumeric passwords. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different durations, namely 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, the Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication. For each password entered by a user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
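A minimal sketch of the best-performing configuration reported (an SVM with a Gaussian RBF kernel on the four normalized features) follows; the synthetic genuine/imposter feature distributions stand in for the study's real password entries:

```python
# Sketch of the reported best configuration: RBF-kernel SVM on four
# normalized gesture features. The synthetic data is illustrative only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Columns: password score, length, speed, size (arbitrary synthetic units).
genuine = rng.normal([0.8, 12, 1.0, 5.0], 0.15, size=(200, 4))
imposter = rng.normal([0.6, 10, 1.4, 4.2], 0.35, size=(200, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * 200 + [0] * 200)           # 1 = genuine user

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.1%}")
```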

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 106
333 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models

Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen

Abstract:

Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, contributing to economic and social well-being. The success, growth and competitiveness of SMEs are therefore critical. However, many factors undermine this, such as resource constraints, poor information and communication technology (ICT) infrastructure, skills shortages and poor management. The Fourth Industrial Revolution offers the SME sector new tools and opportunities, such as digital transformation and business model innovation (BMI), to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data and analytics can significantly improve business efficiencies, value propositions and customer experiences. Digital transformation can thus contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in participation in digital transformation, and extant research lacks conceptual and empirical work on how digital transformation drives BMI and the impact it has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine whether SMEs are achieving BMI through digital transformation and how this impacts growth, competitiveness and overall business performance. An empirical study is being conducted on 300 SMEs, consisting of 150 South African and 150 Dutch SMEs, to achieve this purpose. Structural equation modeling is used, since it is a multivariate statistical technique for analysing structural relationships and is thus a suitable method for testing the hypotheses in the model. Empirical research is needed to gather more insight into how and whether SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers and employees at all levels. The findings will indicate whether digital transformation can indeed impact the growth, competitiveness and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will exhibit how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant and important topic in management studies by analysing the impact of digital transformation on BMI in a large number of SMEs that differ distinctly in economic and cultural factors.

Keywords: business models, business model innovation, digital transformation, SMEs

Procedia PDF Downloads 240
332 Promoting Public Participation in the Digital Memory Project: Experience from My Peking Memory Project (MPMP)

Authors: Xiaoshuang Jia, Huiling Feng, Li Niu, Wei Hai

Abstract:

Led by the Humanistic Beijing Studies Center at Renmin University of China, My Peking Memory Project (MPMP) is a long-term digital memory project, built on public participation, that enables the cultural and intellectual memory of Beijing to be collected, organized, preserved and promoted for discovery and research. Taking digital memory as a new approach, MPMP is an important part of the Peking Memory Project (PMP), which is aimed at using digital technologies to protect and (re)present the cultural heritage of Beijing. The key outcome of MPMP is the co-building of a total digital collection of knowledge assets about Beijing. Institutional memories are central to Beijing's collection and consist of the officially published documentary content of Beijing; these already fall under the archival collection purview. Advances in information and communication technology, together with social memory theory, have allowed us to collect more comprehensively beyond institutional collections. It is now possible to engage citizens on a large scale to collect private memories in digital formats through crowdsourcing. Private memories go beyond officially published content to include personal narratives, some of which exist only in people's minds until they are captured by MPMP. One aim of MPMP is to engage individuals, communities, groups and institutions who have formed memories and content about Beijing and would like to contribute them. The project hopes to build a culture of remembering, and it believes 'Every Memory Matters'. Digital memory contribution was achieved through the development of MPMP. To reduce barriers to digital contribution and promote broad public participation, MPMP has explored transcription services for digital ingestion, a mobile platform, and offline activities such as social forums. MPMP has also cooperated with the 'Implementation Plan of the Support Plan for Growth of Talents in Renmin University of China' to obtain manpower and intellectual support. After six months of operation, MPMP now has more than 2,000 memories added and 7 special memory collections online. The work of MPMP has ultimately helped to highlight its important role in safeguarding the documentary heritage and intellectual memory of Beijing.

Keywords: digital memory, public participation, MPMP, cultural heritage, collection

Procedia PDF Downloads 169
331 Development of Peaceful Wellbeing in Executive Practitioners through Mindfulness-Based Practices

Authors: Narumon Jiwattanasuk, Phrakrupalad Pannavoravat, Pataraporn Sirikanchana

Abstract:

Mindfulness has become a widely adopted perspective for addressing positive wellbeing. The aims of this paper are to analyze the problems of executive meditation practitioners at the Buddhamahametta Foundation in Thailand and to provide recommendations on a process for developing peaceful wellbeing in executive meditation practitioners by applying the principles of the four foundations of mindfulness. This study focuses on executives in particular because there is little research on the wellbeing development of executives, and the researcher recognizes that executives can be an example within their organizations, significantly influencing their employees and families to take an interest in practicing mindfulness. This improvement would then grow from the individual to the surrounding community, including family, workplace, society, and the nation, leading to happiness at the national level, which is the expectation of this research. The paper highlights mindfulness practices that can be performed on a daily basis. This is a qualitative study with 10 key participants who are executives from various sectors such as hospitality, healthcare, retail, power and energy, and so on. Three mindfulness-based courses were conducted over a period of 8 months, and in-depth interviews were done before the first course as well as at the end of every course; in total, four in-depth interviews were conducted. The information collected from the interviews was analyzed in order to create the process for developing peaceful wellbeing. Focus group discussions with mindfulness specialists were also conducted to help develop the mindfulness program. This research found that the executives faced the following problems: stress, negative thinking loops, losing their temper, seeking acceptance, worry about uncontrollable external factors, inability to control their words, and weight gain. The cultivation of the four foundations of mindfulness can develop peaceful wellbeing. The results showed that after the key informant executives attended the mindfulness courses and practiced mindfulness regularly, they developed peaceful wellbeing in all aspects (physical, psychological, behavioral, and intellectual) by applying 12 mindfulness-based activities. The development of wellbeing, in the conclusion of this study, also includes various tools to support continued practice, including a handout of guided mindfulness practice, video clips about mindfulness practice, an online dhamma channel, and mobile applications to support regular mindfulness-based practice.

Keywords: executive, mindfulness activities, stress, wellbeing

Procedia PDF Downloads 120
330 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the fields of sensors, electronics and computing results in increasingly frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion used for the evacuation of people from heights during a fire. For both systems, it is possible to ensure adaptive performance, but the realization of the system's adaptation is completely different. The reason for this lies in the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided using a prediction of the entire impact mitigation process. Similarities between both approaches, including the applied models, algorithms and equipment, are discussed. The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.
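A minimal sketch contrasting the two control regimes follows: a real-time loop that nudges the piezo valve opening every millisecond toward a force setpoint, versus a semi-passive rule that fixes the release area once from a prediction of the whole impact. The proportional update, the gains, and all numbers are illustrative assumptions, not the paper's algorithms:

```python
# Sketch contrasting the two control regimes described in the abstract.
# Plant model, gains and parameters are illustrative assumptions.
def realtime_valve_update(measured_force, target_force, opening, k=1e-4):
    # Called every ~1 ms: nudge the piezo valve toward the force setpoint.
    opening += k * (measured_force - target_force)
    return min(max(opening, 0.0), 1.0)      # clamp to physical range

def semi_passive_opening(impact_energy_J, stroke_m, max_force_N):
    # Decided once, from a prediction of the entire impact: pick the
    # single release setting whose mean force dissipates the energy.
    mean_force = impact_energy_J / stroke_m
    return min(mean_force / max_force_N, 1.0)   # normalized opening

opening = 0.5
for f in [900.0, 1150.0, 1300.0, 1180.0]:    # hypothetical force samples (N)
    opening = realtime_valve_update(f, target_force=1200.0, opening=opening)
    print(f"valve opening: {opening:.3f}")

print(f"semi-passive opening: {semi_passive_opening(5000.0, 1.5, 4000.0):.2f}")
```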

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 90
329 Digital Twins: Towards an Overarching Framework for the Built Environment

Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie

Abstract:

Digital Twins (DTs) have entered the built environment from more established industries such as aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been treated in the literature mainly from an operational standpoint, i.e., monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach that translates these requirements into actionable DT dimensions. This paper presents a foundation for an overarching framework specific to the built environment. For the purposes of this research, the Royal Institute of British Architects (RIBA) Plan of Work 2020, widely used in the UK, is taken as the basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are used in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, no single mainstream software resource fully leverages DT capabilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, a defined starting point is needed. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. The paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector that follows a conventional project timeline; it therefore plays a pivotal role in providing practical insights and a tangible foundation for a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research reviews the relevant literature, while acknowledging the inherent constraint of the limited sources available. Second, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings that highlights the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, in which AEC stakeholders have a fundamental role to play from the earliest stages of a project.

Keywords: digital twins, decision-making, design, net-zero, built environment

Procedia PDF Downloads 122
328 The Relationship between Central Bank Independence and Inflation: Evidence from Africa

Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit

Abstract:

The past decades have witnessed a considerable institutional shift towards central bank independence (CBI) across the economies of the world. The motivation behind this change is the acceptance that increased central bank autonomy can alleviate inflation bias. It is therefore pertinent to study whether CBI acts as a significant factor behind price stability in African economies, or whether price stability in these countries results from other economic, political, or social factors. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. To this end, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether the decisions made by African central banks are affected by external forces. The study investigates empirically the association between CBI and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the empirical literature, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics. Instead, a dynamic panel data model that integrates several control variables is used. First, the analysis includes dynamic terms to capture the persistence of inflation; given the likelihood of inflation inertia in African countries, lagged inflation must be included in the empirical model. Second, because of the known reverse causality between CBI and inflation, the system generalized method of moments (GMM) is employed; GMM estimators admit unknown forms of heteroskedasticity as well as autocorrelation in the error term. Third, control variables are used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation, even after including the control variables.
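
For readers unfamiliar with the turnover-rate proxy, the following minimal Python sketch shows how a governor turnover rate (TOR) can be computed from appointment histories, with the general shape of the dynamic panel specification given in a comment. The country histories below are fictional, and the exact variable set is an assumption; the paper's actual data and system GMM estimation are not reproduced here.

```python
# Illustrative sketch (not the authors' code). A higher TOR is read as lower
# de facto independence. The estimated model then has the dynamic panel form
#   inflation[i,t] = a*inflation[i,t-1] + b*TOR[i,t] + c'X[i,t] + u[i] + e[i,t]
# fitted by system GMM.

def turnover_rate(appointment_years, start=1995, end=2012):
    """Governor changes per year over the sample window."""
    changes = [y for y in appointment_years if start <= y <= end]
    return len(changes) / (end - start + 1)

sample = {
    "country_a": [1997, 2001, 2003, 2006, 2010],  # frequent turnover -> low CBI
    "country_b": [1998, 2008],                    # long tenures -> high CBI
}

for country, years in sample.items():
    print(country, round(turnover_rate(years), 3))
```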

Keywords: central bank independence, inflation, macroeconomic variables, price stability

Procedia PDF Downloads 364
327 The Study of Cost Accounting in S Company Based on TDABC

Authors: Heng Ma

Abstract:

Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a young industry and its accounting system has not yet been established: the current financial accounting of third-party warehousing logistics mostly follows the traditional way of thinking and is only able to provide the total cost of the entire enterprise during the accounting period, without reflecting indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research with case analysis: it builds a third-party logistics costing model using Time-Driven Activity-Based Costing (TDABC) to reflect cost allocation, and takes S company as an example to account for and control its warehousing logistics cost. Based on the idea that products consume activities and activities consume resources, TDABC treats time as the main cost driver and uses time equations to assign resources to cost objects. In S company, the analysis focuses on three warehouses engaged in warehousing and transportation services (the second warehouse serves as a transport point). Each warehouse comprises five departments, Business Unit, Production Unit, Settlement Center, Security Department, and Equipment Division, and the activities in these departments are classified into in-out storage forecasting, in-out storage or transit, and safekeeping work. By computing the capacity cost rate and building the time equations, the paper calculates the final operating cost so as to reveal the real cost. The numerical results show that TDABC can accurately reflect the cost allocation across service customers and reveal the spare capacity cost of each resource center, verifying the feasibility and validity of TDABC for cost accounting in the third-party logistics industry. The findings encourage enterprises to focus on customer relationship management and to reduce idle cost, strengthening the cost management of third-party logistics enterprises.
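
A minimal sketch of the two TDABC building blocks named above, the capacity cost rate and the time equation, may help fix ideas. All figures are illustrative and are not S company's data.

```python
# Minimal TDABC sketch: cost = capacity cost rate * time consumed, where time
# comes from a time equation t = beta0 + sum(beta_i * x_i). Numbers invented.

def capacity_cost_rate(total_resource_cost, practical_capacity_minutes):
    """Cost of one minute of capacity supplied by a resource center."""
    return total_resource_cost / practical_capacity_minutes

def time_equation(base_minutes, drivers):
    """Time consumed by one transaction; drivers is a list of
    (unit_time_minutes, quantity) pairs, one per time driver."""
    return base_minutes + sum(beta * x for beta, x in drivers)

rate = capacity_cost_rate(60_000, 12_000)        # 5 cost units per minute

# one inbound order: 8 min fixed handling, 0.5 min per pallet (20 pallets),
# 2 extra min per item requiring safekeeping checks (3 items) = 24 minutes
minutes = time_equation(8, [(0.5, 20), (2, 3)])
print("order cost:", rate * minutes)             # 120.0

# spare capacity cost revealed by TDABC: rate * unused minutes
used_minutes = 9_500
print("idle cost:", rate * (12_000 - used_minutes))   # 12500.0
```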

Keywords: third-party logistics enterprises, TDABC, cost management, S company

Procedia PDF Downloads 358
326 Shedding Light on the Black Box: Explaining Deep Neural Network Prediction of Clinical Outcome

Authors: Yijun Shao, Yan Cheng, Rashmee U. Shah, Charlene R. Weir, Bruce E. Bray, Qing Zeng-Treitler

Abstract:

Deep neural network (DNN) models are being explored in the clinical domain, following their recent success in other domains such as image recognition. For clinical adoption, outcome prediction models require explanation, but due to their multiple non-linear inner transformations, DNN models are viewed by many as a black box. In this study, we developed a deep neural network model for predicting the 1-year mortality of patients who underwent major cardiovascular procedures (MCVPs), using a temporal image representation of past medical history as input. The dataset was obtained from the electronic medical data warehouse administered by the Veterans Affairs Informatics and Computing Infrastructure (VINCI). We identified 21,355 veterans who had their first MCVP in 2014. Features for prediction included demographics, diagnoses, procedures, medication orders, hospitalizations, and frailty measures extracted from clinical notes. Temporal variables were created from the patient history data in the 2-year window prior to the index MCVP, and a temporal image was created from these variables for each individual patient. To generate an explanation of the DNN model, we defined a new concept called the impact score, which quantifies the impact of a clinical condition's presence or value on the predicted outcome. Like the (log) odds ratios reported by logistic regression (LR) models, impact scores are continuous variables intended to shed light on the black box model. For comparison, a logistic regression model was fitted on the same dataset. In our cohort, about 6.8% of patients died within one year. The DNN model achieved an area under the curve (AUC) of 78.5%, while the LR model achieved an AUC of 74.6%. A strong but not perfect correlation was found between the aggregated impact scores and the log odds ratios (Spearman's rho = 0.74), which helped validate our explanation.
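
The abstract does not give the exact formula for the impact score, so the following Python sketch shows one plausible reading: ablate a condition from a patient's input, re-run the model, and average the resulting change in predicted log-odds across the cohort. The toy model and feature names are hypothetical stand-ins for the DNN and its temporal-image input.

```python
# Sketch of one plausible ablation-style impact score (our reading of the
# abstract, not the authors' exact definition).
import math

def impact_score(predict, patients, condition):
    """predict: callable mapping a feature dict to P(death within 1 year).
    patients: list of feature dicts; condition: feature name to ablate."""
    logit = lambda q: math.log(q / (1 - q))
    deltas = []
    for p in patients:
        if not p.get(condition):
            continue                           # score patients with the condition
        delta = logit(predict(p)) - logit(predict({**p, condition: 0}))
        deltas.append(delta)                   # change on the log-odds scale
    return sum(deltas) / len(deltas) if deltas else 0.0

def toy_predict(p):
    """Toy stand-in for the DNN: fixed logistic score over two conditions."""
    z = -3.0 + 1.2 * p.get("heart_failure", 0) + 0.4 * p.get("diabetes", 0)
    return 1 / (1 + math.exp(-z))

cohort = [{"heart_failure": 1, "diabetes": 0}, {"heart_failure": 1, "diabetes": 1}]
print(round(impact_score(toy_predict, cohort, "heart_failure"), 3))  # 1.2
```

On this toy logistic model the score recovers the coefficient exactly, which is why the abstract's comparison of aggregated impact scores against LR log odds ratios is a natural validation.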

Keywords: deep neural network, temporal data, prediction, frailty, logistic regression model

Procedia PDF Downloads 153
325 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been documented in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis remain subject to speculation. This work provides a screening of those factors against changes in cone angle and foam rheology. The CFD simulations were performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) solver interFoam was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) or water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by measuring the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. These simulations shed new light on the cone's behavior within the foam and allow the computation of the shearing, pressure, and velocity fields of the fluid, enabling a better evaluation of the efficiency of cones as foam breakers. This study contributes to clarifying the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
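
As background on the rheology choice, the sketch below evaluates a Herschel-Bulkley apparent viscosity with an assumed low-shear cap, qualitatively similar in form to OpenFOAM's HerschelBulkley viscosity model. The parameter values are illustrative, not those used in the study.

```python
# Herschel-Bulkley apparent viscosity sketch: tau = tau0 + k * gdot^n, so the
# apparent viscosity nu(gdot) = tau0/gdot + k * gdot^(n-1), capped at nu0 to
# keep it finite at very low shear. All parameter values are invented.

def herschel_bulkley_nu(gamma_dot, tau0=10.0, k=2.0, n=0.4, nu0=1e3):
    """Apparent viscosity for a yield-stress, shear-thinning (n < 1) fluid."""
    g = max(gamma_dot, 1e-9)              # avoid division by zero at rest
    return min(nu0, tau0 / g + k * g ** (n - 1.0))

# viscosity drops sharply with shear: near the rotating cone wall the foam
# flows easily, while far from it the foam behaves almost like a solid
for g in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"gamma_dot={g:7.2f} 1/s  ->  nu={herschel_bulkley_nu(g):10.2f}")
```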

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 202
324 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to maglev, the world is slowly shifting to sustainable transportation. Developing a useful public transit system requires implementing a sustainable network of buses, yet as of now only a handful of cities have adopted a detailed plan to deploy a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications are taken into consideration to develop a well-rounded 10-year plan for New York City, and the same financial model is then applied to two other cities, LA and Chicago. We chose NYC, Chicago, and LA for the comparative NPB analysis because they are all large metropolitan cities with complex transportation systems, all three have started action plans to achieve a full e-bus fleet in the coming decades, and their energy carbon footprints and energy prices differ widely, which are key factors in the benefits of electric buses. Using a TCO (Total Cost of Ownership) analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). The model considers all essential aspects: initial investment, including the cost of the bus, charger, and installation; government funding (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We find about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funding offsets 50% of the EBS purchase cost. With the government subsidy, an EBS generates positive cash flow from the fifth year and pays back its investment within 5 years. Note that the model counts environmental and health benefits of $50,000 per bus per year. Beyond the health benefits, the most significant gains come from energy cost savings and maintenance savings, about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression under given budget limitations, we then designed an optimal three-phase process to electrify NYC's entire diesel bus fleet in 10 years, i.e., by 2033; the regression minimizes the total cost over the years while keeping the environmental cost lowest. The overall benefit of replacing all DBS with EBS in NYC is over $2.1 billion by 2033; for LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033, respectively. All NPB analyses and the algorithm optimizing the phased electrification are implemented in Python and can be shared.
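
A condensed sketch of the NPB calculation is given below, with parameters loosely anchored to the figures quoted in the abstract; the capex split, discount rate, and yearly benefit breakdown are our assumptions, and the authors' full Python model is considerably richer.

```python
# Condensed NPB sketch: discounted per-bus cash-flow difference, EBS vs DBS.
# All dollar figures below are illustrative assumptions, not the paper's data.

def npv(cashflows, rate=0.03):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def net_present_benefit(years=12, subsidy=0.5):
    ebs_capex = 750_000 * (1 - subsidy) + 50_000  # bus + charger, 50% subsidised
    dbs_capex = 450_000                           # diesel bus purchase (assumed)
    # yearly EBS advantage over DBS: energy savings, maintenance savings,
    # health/environment benefit ($50k per the abstract), V2G benefit
    yearly_benefit = 50_000 + 17_000 + 50_000 + 5_000
    flows = [-(ebs_capex - dbs_capex)] + [yearly_benefit] * years
    return npv(flows)

print(f"NPB over 12 years: ${net_present_benefit():,.0f}")  # ~ $1.2-1.4M range
```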

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 50
323 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, such as location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where the data as well as the queries are continuously moving, generating a tremendous amount of real-time spatial data every day. The growth of the data volume seems to outpace the advance of our computing infrastructure. In real-time spatial Big Data, for instance, users expect to receive the results of each query within a short time period regardless of the system load; but with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase system performance and simplify data management, but they remain insufficient for real-time spatial Big Data, as they cannot deal with real-time and stream queries efficiently. In this paper, we therefore propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning: we first find the optimal attribute sequence using the Matching algorithm, and then propose a new cost model for database partitioning that keeps the data volume of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD obtains a real-time partitioning scheme and deals with stream data. It improves query execution performance by maximizing the degree of parallel execution, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms.
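
To make the Matching step concrete, the sketch below orders attributes by greedily chaining usage-matrix columns with minimum Hamming distance, which is our simplified reading of that step; the schema, queries, and the final partition cut are illustrative, and the paper's full cost model is not reproduced.

```python
# Simplified Matching-style attribute ordering: columns of the
# query/attribute usage matrix with small Hamming distance end up adjacent,
# grouping attributes accessed by the same queries into one vertical partition.

def hamming(col_a, col_b):
    """Number of queries that use one attribute but not the other."""
    return sum(a != b for a, b in zip(col_a, col_b))

def match_order(usage):
    """Greedy chain: repeatedly append the unplaced attribute whose usage
    column is closest to that of the last placed attribute."""
    attrs = list(usage)
    order, remaining = [attrs[0]], attrs[1:]
    while remaining:
        nxt = min(remaining, key=lambda a: hamming(usage[a], usage[order[-1]]))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# rows = queries Q1..Q4; usage[attr][i] = 1 if query i accesses attr
usage = {
    "obj_id":    [1, 1, 1, 1],
    "location":  [1, 1, 0, 0],
    "speed":     [1, 1, 0, 0],
    "owner":     [0, 0, 1, 1],
    "timestamp": [0, 0, 1, 1],
}
print(match_order(usage))
# cutting the chain at the largest Hamming gap then yields balanced vertical
# partitions that the most frequent queries can scan in parallel
```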

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 157
322 Application of Neutron-Gamma Technologies for Soil Elemental Content Determination and Mapping

Authors: G. Yakubova, A. Kavetskiy, S. A. Prior, H. A. Torbert

Abstract:

In-situ soil carbon determination over large soil surface areas (several hectares) is required with regard to carbon sequestration and carbon credit issues. This capability is important for optimizing modern agricultural practices and enhancing soil science knowledge. Collecting and processing representative field soil cores for traditional laboratory chemical analysis is labor-intensive and time-consuming. The neutron-stimulated gamma analysis method can instead be used for in-situ measurements of the primary elements in agricultural soils (e.g., Si, Al, O, C, Fe, and H). This non-destructive method can assess several elements in large soil volumes with no need for sample preparation. Neutron-gamma soil elemental analysis utilizes the gamma rays emitted by different neutron-nuclei interactions. This has become practical due to the availability of commercial portable pulsed neutron generators, high-efficiency gamma detectors, reliable electronics, and measurement/data processing software, complemented by advances in state-of-the-art nuclear physics methods. In Pulsed Fast Thermal Neutron Analysis (PFTNA), the soil is irradiated with a pulsed neutron flux, and gamma spectra are acquired both during and between pulses. This allows the inelastic neutron scattering (INS) gamma spectrum to be separated from the thermal neutron capture (TNC) spectrum. Based on PFTNA, a mobile system for field-scale soil elemental determination (primarily of carbon) was developed and constructed. Our scanning methodology acquires data that can be used directly for creating soil elemental distribution maps (based on ArcGIS software) in a reasonable timeframe (~20-30 hectares per working day). The resulting maps are suitable for both agricultural purposes and carbon sequestration estimates. The measurement system design, the spectra acquisition process, the strategy for acquiring field-scale carbon content data, and the mapping of agricultural fields will be discussed.
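
The pulse-gated separation can be illustrated with a minimal sketch: counts recorded during the pulse mix INS and TNC events, counts recorded between pulses are TNC-only, so a scaled between-pulse spectrum is subtracted channel by channel. The spectra and scaling factor below are invented for illustration and are not acquisition data from the system described.

```python
# Toy PFTNA-style spectrum separation: isolate the INS component by
# subtracting a scaled TNC-only (between-pulse) spectrum from the mixed
# (during-pulse) spectrum. Arrays and the scale factor are illustrative.

def separate_ins(during_pulse, between_pulse, scale):
    """scale: ratio of the thermal-capture contribution inside vs. outside
    the pulse window (instrument-specific calibration, assumed known)."""
    return [max(0.0, d - scale * b) for d, b in zip(during_pulse, between_pulse)]

# toy 5-channel spectra around a hypothetical carbon peak channel (index 2)
during  = [120.0, 150.0, 900.0, 160.0, 110.0]   # INS + TNC counts
between = [100.0, 130.0, 150.0, 140.0, 100.0]   # TNC-only counts
print(separate_ins(during, between, scale=0.8))  # carbon peak survives at index 2
```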

Keywords: neutron gamma analysis, soil elemental content, carbon sequestration, carbon credit, soil gamma spectroscopy, portable neutron generators, ArcMap mapping

Procedia PDF Downloads 90
321 A Comparative Study of Motion Events Encoding in English and Italian

Authors: Alfonsina Buoniconto

Abstract:

The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, two typologically different languages that both show signs of disobedience to their respective types. The traditional typological classification of ME encoding distributes languages into two macro-types based on the preferred locus for the expression of Path, the main ME component (the others being Figure, Ground, and Manner), which is characterized by conceptual and structural prominence. According to this model, Satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g., preverbs and verb particles), with the main verb encoding Manner of motion, whereas Verb-framed (VF) languages tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is broadly valid, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to its rich inventory of postverbal particles and phrasal verbs used to express spatial relations (e.g., the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which are typical of the VF type. Conversely, Italian is usually described as VF (cf. Paolo uscì di corsa 'Paolo went out running'), yet SF constructions like corse via in lacrime 'She ran away in tears' are also frequent. This paper tries to demonstrate that such typological overlap arises because the semantic units making up MEs are distributed across several loci of the sentence, not only verbs and satellites, producing a number of different constructions that stem from convergent factors. Indeed, the linguistic expression of motion events depends not only on the typological nature of a language in the traditional sense, but also on a range of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describing English and Italian in dichotomic terms, this study investigates cross-linguistic and intra-linguistic variation in the use of all the strategies each linguistic system makes available to express motion. Evidence for these assumptions is provided by parallel corpus analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were annotated according to the MODEG (Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.
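
The abstract does not detail MODEG's fields, so the sketch below only illustrates the kind of per-occurrence indexing such a grid enables: recording the locus of each semantic component so that combined morphosyntactic/semantic queries can be run over the annotated corpora. All field names, and all examples beyond those quoted above, are our assumptions.

```python
# Hypothetical MODEG-style indexing sketch (the grid's actual fields are not
# given in the abstract): each motion occurrence records where each semantic
# component is encoded, enabling counts of SF- vs VF-style constructions.

from dataclasses import dataclass

@dataclass
class MotionOccurrence:
    language: str      # "en" or "it"
    text: str
    path_locus: str    # e.g. "verb", "satellite", "adjunct"
    manner_locus: str  # e.g. "verb", "adjunct", "none"

corpus = [
    MotionOccurrence("en", "the cat climbed down the tree", "satellite", "verb"),
    MotionOccurrence("en", "the fog descended slowly", "verb", "adjunct"),
    MotionOccurrence("it", "Paolo uscì di corsa", "verb", "adjunct"),
    MotionOccurrence("it", "corse via in lacrime", "satellite", "verb"),
]

# intra-linguistic variation: share of VF-style (Path-in-verb) constructions
for lang in ("en", "it"):
    occ = [o for o in corpus if o.language == lang]
    vf = sum(o.path_locus == "verb" for o in occ)
    print(lang, f"{vf}/{len(occ)} Path-in-verb occurrences")
```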

Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type

Procedia PDF Downloads 260