Search results for: lead time variability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21570

20310 The Impact of Community Settlement on Leisure Time Use and Body Composition in Determining Physical Lifestyles among Women

Authors: Mawarni Mohamed, Sharifah Shahira A. Hamid

Abstract:

Leisure time is an important component in offsetting sedentary lifestyles. Women tend to benefit from leisure activities not only to reduce stress but also to find opportunities for well-being and self-satisfaction. This study was conducted to investigate body composition and leisure-time use among women in Selangor and the influences of community settlement. A total of 419 women aged 18-65 years were selected to participate in this study. Descriptive statistics, t-tests, and ANOVA were used to analyze physical activity levels, and the relationship between leisure-time use and body composition was examined to characterize physical lifestyles. The results showed that women with normal body composition seem to be involved in more passive activities than underweight and obese women. The study therefore recommends that the government and other health and recreational agencies develop more places and activities suited to women's leisure preferences in their community settlements, so that women become more interested in engaging in active recreational and physical activities.

Keywords: body composition, community settlement, leisure time, physical lifestyles

Procedia PDF Downloads 447
20309 Respiratory Indices and Sports Performance: A Comparison between Basketball Players of Different Levels

Authors: Ranjan Chakravarty, Satpal Yadav, Biswajit Basumatary, Arvind S. Sajwan

Abstract:

The purpose of this study is to compare basketball players of different levels on selected respiratory indices. Ninety male basketball players were drawn from different universities that participated in intercollegiate and inter-university championships. The selected respiratory indices were resting pulse rate, resting blood pressure, vital capacity, and resting respiratory rate. Means and standard deviations of the selected respiratory indices were calculated, and three levels, i.e. beginner, intermediate, and advanced, were compared using analysis of variance. To test the hypothesis, the level of significance was set at 0.05. It was concluded that variability does not exist among the basketball players of the different groups with respect to the selected respiratory indices, i.e. resting pulse rate, resting blood pressure, vital capacity, and resting respiratory rate.
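As a rough illustration of the comparison described above, a one-way ANOVA across the three player levels can be computed directly. The vital-capacity values below are invented for demonstration and are not the study's data.

```python
# Hypothetical sketch: one-way ANOVA comparing vital capacity (litres)
# across three player levels, as the abstract describes. The sample
# values are illustrative, not taken from the study.

def one_way_anova(*groups):
    """Return (F statistic, df_between, df_within) for a one-way ANOVA."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-group sum of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

beginners    = [3.1, 3.4, 3.0, 3.3, 3.2]
intermediate = [3.3, 3.5, 3.2, 3.4, 3.1]
advanced     = [3.4, 3.3, 3.5, 3.2, 3.6]

f_stat, dfb, dfw = one_way_anova(beginners, intermediate, advanced)
# Compare f_stat with the critical F(2, 12) value at alpha = 0.05 (about 3.89);
# an f_stat below the critical value means no significant group difference,
# matching the conclusion the abstract reports.
print(f"F({dfb},{dfw}) = {f_stat:.3f}")
```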

Keywords: respiratory indices, sports performance, basketball players, intervarsity level

Procedia PDF Downloads 326
20308 Improving the Residence Time of a Rectangular Contact Tank by Varying the Geometry Using Numerical Modeling

Authors: Yamileth P. Herrera, Ronald R. Gutierrez, Carlos Pacheco-Bustos

Abstract:

This research aims at the numerical modeling of a rectangular contact tank in order to improve its hydrodynamic behavior and the retention time of the water to be treated with the disinfecting agent. The methodology includes a hydraulic analysis of the tank to observe the fluid velocities, revealing low-velocity areas that may foster the incubation of pathogenic agents, as well as high-velocity areas that may decrease the optimal contact time between the disinfecting agent and the microorganisms to be eliminated. Based on the results of the numerical model, the efficiency of the tank under the geometric and hydraulic conditions considered is analyzed. This allows the performance of the tank to be improved before starting a construction process, thus avoiding unnecessary costs.

Keywords: contact tank, numerical models, hydrodynamic modeling, residence time

Procedia PDF Downloads 161
20307 Smart Growth Through Innovation Programs: Challenges and Opportunities

Authors: Hanadi Mubarak Al-Mubaraki, Michael Busler

Abstract:

Innovation is a powerful tool for economic growth and diversification, and it leads to smart growth. The objective of this paper is to identify the opportunities and challenges of innovation programs and to discuss and analyse their implementation in the United States (US) and the United Kingdom (UK). To achieve these objectives, the research used a mixed-methods approach, quantitative (survey) and qualitative (multi-case study), to examine innovation best practices in developed countries. In addition, four innovation organisations were selected as interview case studies on the basis of best practice and successful implementation worldwide. The findings indicated two challenges: 1) innovation requires business-ecosystem support to deliver outcomes such as new products and new services, and 2) a climate of innovation and entrepreneurship must be fostered for economic growth and diversification. Two opportunities were also identified: 1) sustaining innovation events that lead to smart growth, and 2) establishing artificial intelligence hubs for entrepreneurship networking at multiple levels. The research adds value for academicians and for practitioners such as governments, funding organizations, institutions, and policymakers. The authors aim to conduct future research as a comparative study of innovation case studies between developed and developing countries for policy implications worldwide. This study contributes to the current literature on innovation best practices in developed and developing countries.

Keywords: economic development, technology transfer, entrepreneurship, innovation program

Procedia PDF Downloads 137
20306 Exploration of Bullying Perceptions in Adolescents in Sekolah Menengah Kejuruan Negeri 1 Manado

Authors: Madjid Nancy, Rakinaung Natalia, Lumowa Fresy

Abstract:

Background: Bullying is one of the problems of concern in education, especially among adolescents, with negative impacts on learning achievement, psychological well-being, and physical health. The psychological impacts are shame, distress, fear, sadness, and anxiety, which, if prolonged, can lead to depression in the victim. The impacts on physical health include bruises at the site of impact, abrasions, and swelling; more severe cases can lead to death. Objectives: This study aims to explore the perception of bullying among adolescent students of Sekolah Menengah Kejuruan (SMK) Negeri 1 Manado and the people associated with them. Methods: This research uses a descriptive qualitative design with thematic analysis, supported by Urie Bronfenbrenner's ecological framework. Data were collected through in-depth interviews. Sampling used purposive and snowball techniques. The research was conducted at SMK Negeri 1 Manado. Results: The analysis yielded three themes with their categories: 1) the perception of bullying, with the categories Understanding of Bullying and The Impact of Bullying; 2) the originators of bullying, with the categories Fulfillment of Youth Development Tasks and Needs, Peer Influence, and Family Communication; and 3) efforts to handle bullying, with the categories Individual Coping and Teacher Role. Conclusion: This research identified three themes: the perception of bullying, the originators of bullying, and efforts to handle bullying.

Keywords: adolescent, students, bullying, perception

Procedia PDF Downloads 128
20305 New Variational Approach for Contrast Enhancement of Color Image

Authors: Wanhyun Cho, Seongchae Seo, Soonja Kang

Abstract:

In this work, we propose a variational technique for image contrast enhancement that utilizes global and local information around each pixel. The energy functional is defined as a weighted linear combination of three terms: a local contrast term, a global contrast term, and a dispersion term. The local contrast term improves the contrast of the input image by increasing the grey-level differences between each pixel and its neighbors, thereby utilizing the contextual information around each pixel. The global contrast term enhances contrast by minimizing the difference between the image's empirical distribution function and a target cumulative distribution function, so that the probability distribution of pixel values becomes symmetric about the median. The dispersion term controls the departure of each new pixel value from the original pixel value, preserving the characteristics of the original image as far as possible. Next, we derive the Euler-Lagrange equation for the true image that achieves the minimum of the proposed functional, using the fundamental lemma of the calculus of variations, and we solve this equation with a gradient descent method, one of the dynamic approximation techniques. Finally, through various experiments, we demonstrate that the proposed method can enhance the contrast of colour images better than existing techniques.
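The interplay between the local contrast term and the dispersion term can be sketched by gradient descent on a toy 1-D signal. The weights, step size, and the omission of the global (histogram-based) term are simplifying assumptions for illustration, not the paper's actual functional.

```python
# Toy 1-D sketch of the variational idea: gradient descent on an energy that
# rewards local contrast (neighbor differences) while a dispersion term keeps
# the result near the original signal. All parameter values are assumptions;
# the paper's functional also includes a global histogram-based term.

def enhance(u0, w_local=0.5, w_disp=4.0, step=0.05, iters=200):
    """Gradient descent on E(u) = -w_local * sum_edges (u_i - u_j)**2
                                +  w_disp  * sum_i (u_i - u0_i)**2.
    w_disp is chosen large enough to keep E convex, so the enhancement
    stays bounded instead of blowing up."""
    u = list(u0)
    n = len(u)
    for _ in range(iters):
        grad = []
        for i in range(n):
            g = 2.0 * w_disp * (u[i] - u0[i])      # dispersion term gradient
            for j in (i - 1, i + 1):               # local contrast term gradient
                if 0 <= j < n:
                    g -= 2.0 * w_local * (u[i] - u[j])
            grad.append(g)
        u = [ui - step * gi for ui, gi in zip(u, grad)]
    return u

original = [0.40, 0.45, 0.50, 0.55, 0.60]   # a low-contrast 1-D "image"
enhanced = enhance(original)
spread = lambda v: max(v) - min(v)
print(round(spread(original), 3), round(spread(enhanced), 3))
```

The grey-level spread of the enhanced signal grows relative to the original, while the dispersion term prevents runaway amplification.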

Keywords: color image, contrast enhancement technique, variational approach, Euler-Lagrange equation, dynamic approximation method, EME measure

Procedia PDF Downloads 442
20304 Driver Behavior Analysis and Inter-Vehicular Collision Simulation Approach

Authors: Lu Zhao, Nadir Farhi, Zoi Christoforou, Nadia Haddadou

Abstract:

Safety testing for the deployment of intelligent connected vehicles (ICVs) on the road network is a critical challenge. Road traffic network simulation can be used to test the functionality of ICVs; it is not only time-saving and less energy-consuming but can also create scenarios with car collisions. However, the relationship between different human driver behaviors and the occurrence of car collisions is not clearly understood, and the generation of car collisions is not fully integrated into numerical traffic simulators. In this paper, we propose an approach that identifies specific driver profiles from real driving data and then replicates them in numerical traffic simulations with the purpose of generating inter-vehicular collisions. We propose three profiles: (i) 'aggressive', with a short time-headway; (ii) 'inattentive', with a long reaction time; and (iii) 'normal', with intermediate values of reaction time and time-headway. These three driver profiles are extracted from the NGSIM dataset and simulated using the intelligent driver model (IDM), extended with a reaction time. Finally, inter-vehicular collisions are generated by varying the percentages of the different profiles.
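A minimal sketch of the standard IDM acceleration and the three profile parameterizations described above. All numeric values are illustrative assumptions, and the reaction-time extension is only indicated as a profile attribute, not implemented as a delay here.

```python
# Sketch of the intelligent driver model (IDM) acceleration with illustrative
# driver-profile parameters. Values are assumptions, not NGSIM-calibrated.
import math

def idm_acceleration(v, dv, gap, v0=30.0, T=1.5, a=1.0, b=1.5, s0=2.0, delta=4):
    """IDM acceleration (m/s^2): v = own speed, dv = approach rate to the
    leader (v - v_leader), gap = net bumper-to-bumper distance (m)."""
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# Illustrative profiles in the spirit of the abstract: the aggressive driver
# keeps a short time-headway T, the inattentive driver reacts late (a delay
# the extended model would apply to its inputs).
profiles = {
    "aggressive":  {"T": 0.8, "reaction_time": 0.5},
    "normal":      {"T": 1.5, "reaction_time": 1.0},
    "inattentive": {"T": 1.5, "reaction_time": 2.0},
}

# A shorter time-headway lowers the desired gap s*, so in the same closing
# situation the aggressive profile decelerates less than the normal one.
acc_aggr = idm_acceleration(v=25.0, dv=2.0, gap=20.0, T=profiles["aggressive"]["T"])
acc_norm = idm_acceleration(v=25.0, dv=2.0, gap=20.0, T=profiles["normal"]["T"])
print(round(acc_aggr, 2), round(acc_norm, 2))
```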

Keywords: vehicular collisions, human driving behavior, traffic modeling, car-following models, microscopic traffic simulation

Procedia PDF Downloads 165
20303 Virtual Team Performance: A Transactive Memory System Perspective

Authors: Belbaly Nassim

Abstract:

Virtual team (VT) initiatives, in which teams are geographically dispersed and communicate via modern computer-driven technologies, have attracted increasing attention from researchers and professionals. The growing need to examine how to balance and optimize VTs is particularly important given the globalization and decentralization pressures companies face when monitoring VT performance. Organizations are regularly limited by misalignment between the behavioral capabilities of a team's dispersed competences and knowledge capabilities, and by how trust issues interplay with and influence these VT dimensions. In fact, the future success of business depends on the extent to which VTs efficiently manage their dispersed expertise, skills, and knowledge to stimulate VT creativity. A transactive memory system (TMS) may enhance VT creativity through its three dimensions: knowledge specialization, credibility, and knowledge coordination. TMS can be understood as the combination of a structural component residing in individual knowledge and a set of communication processes among individuals. Individual knowledge is shared as it is retrieved and applied, and the learning is coordinated. TMS is built on the distinction between internal and external memory encoding: a VT learns something new and catalogs it in memory for future retrieval and use. TMS uses information technology to explain VT behaviors by offering VT members the possibility to encode, store, and retrieve information. TMS treats the members of a team as a processing system in which the location of expertise both enhances knowledge coordination and builds trust among members over time. We build on the TMS dimensions to hypothesize the effects of specialization, coordination, and credibility on VT creativity.
In fact, VTs consist of dispersed expertise, skills, and knowledge that can positively enhance coordination and collaboration. Ultimately, this team composition may lead to recognition of both who has expertise and where that expertise is located; it may also build trust among VT members, developing over time the ability to coordinate their knowledge, which can stimulate creativity. We also assess the reciprocal relationship between the TMS dimensions and VT creativity. We use TMS to provide researchers with a theoretically driven model that is empirically validated through survey evidence. We propose that TMS provides a new way to enhance and balance VT creativity. This study also gives researchers insight into the use of TMS to positively influence VT creativity. In addition to our research contributions, we provide several managerial insights into how TMS components can be used to increase performance within dispersed VTs.

Keywords: virtual team creativity, transactive memory systems, specialization, credibility, coordination

Procedia PDF Downloads 156
20302 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models

Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah

Abstract:

In the present work, we develop a technique for estimating the volatility of financial time series using a stochastic differential equation. Taking daily closing prices from developed and emerging stock markets as the basis, we argue that incorporating stochastic volatility into the time-varying parameter estimation significantly improves forecasting performance via maximum likelihood estimation. Using this technique, we observe the long-memory behavior of the data sets and obtain one-step-ahead predicted log-volatility with ±2 standard errors, even though the observed noise follows a Normal mixture distribution, because the financial data studied are not fully Gaussian. The Ornstein-Uhlenbeck process followed in this work also simulates the financial time series well, which makes our estimation algorithm suitable for large data sets thanks to its good convergence properties.
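The estimation idea can be sketched by simulating an Ornstein-Uhlenbeck (OU) log-volatility process and recovering its mean-reversion rate from the exact AR(1) discretization. The parameter values are assumptions for illustration, not values from the paper, and the least-squares fit stands in for the full maximum likelihood procedure.

```python
# Illustrative sketch: simulate an OU process for log-volatility and recover
# its mean-reversion rate. Parameter values are assumptions for demonstration.
import math
import random

random.seed(42)
theta, mu, sigma, dt, n = 2.0, -1.0, 0.5, 1 / 252, 50_000

# Exact discretization of dX = theta*(mu - X) dt + sigma dW:
phi = math.exp(-theta * dt)
sd = sigma * math.sqrt((1 - phi ** 2) / (2 * theta))
x = [mu]
for _ in range(n):
    x.append(mu + phi * (x[-1] - mu) + sd * random.gauss(0, 1))

# Least-squares AR(1) fit (equivalent to conditional MLE under Gaussian noise):
xm = sum(x[:-1]) / n
ym = sum(x[1:]) / n
num = sum((a - xm) * (b - ym) for a, b in zip(x[:-1], x[1:]))
den = sum((a - xm) ** 2 for a in x[:-1])
phi_hat = num / den
theta_hat = -math.log(phi_hat) / dt   # map AR(1) coefficient back to theta
print(f"true theta = {theta}, estimated theta = {theta_hat:.2f}")
```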

Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model

Procedia PDF Downloads 231
20301 Decentralized Peak-Shaving Strategies for Integrated Domestic Batteries

Authors: Corentin Jankowiak, Aggelos Zacharopoulos, Caterina Brandoni

Abstract:

In a context of increasing stress on the electricity network from the decarbonization of many sectors, energy storage is likely to be the key mitigating element, acting as a buffer between production and demand. In particular, the potential of storage is highest when it is connected close to the loads. Yet low-voltage storage struggles to penetrate the market at a large scale due to the novelty and complexity of the solution and the regulatory advantages of fossil-fuel-based technologies. Strong and reliable numerical simulations are required to show the benefits of storage located near loads and to promote its development. The present study excludes aggregated control of storage: it is assumed that the storage units operate independently of one another without exchanging information, as is currently mostly the case. A computationally light battery model is presented in detail and validated by direct comparison with a domestic battery operating in real conditions. This model is then used to develop peak-shaving (PS) control strategies, as PS is the decentralized service from which beneficial impacts are most likely to emerge. The aggregation of flatter, peak-shaved consumption profiles is likely to lead to flatter and arbitraged profiles at higher voltage levels. Furthermore, voltage fluctuations can be expected to decrease if spikes in individual consumption are reduced. The crucial part of achieving PS lies in the charging pattern: peaks depend on the switching on and off of appliances in the dwelling by the occupants and are therefore impossible to predict accurately. A performant PS strategy must therefore include a smart charge-recovery algorithm that ensures enough energy is present in the battery when needed, without generating new peaks while charging the unit. Three categories of PS algorithms are introduced in detail.
The first uses a constant threshold or power rate for charge recovery; the second uses the State of Charge (SOC) as a decision variable; and the third uses a load forecast, the impact of whose accuracy is discussed, to generate PS. Performance metrics were defined in order to evaluate the algorithms quantitatively with regard to peak reduction, total energy consumption, and self-consumption of domestic photovoltaic generation. The algorithms were tested on load profiles with a 1-minute granularity over a 1-year period, and their performance was assessed against these metrics. The results show that constant charging thresholds or powers are far from optimal: a single fixed value is unlikely to fit the variability of a residential profile. As could be expected, forecast-based algorithms show the highest performance; however, they depend on the accuracy of the forecast. On the other hand, SOC-based algorithms also present satisfying performance, making them a strong alternative when a reliable forecast is not available.
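A minimal sketch of a threshold-based peak-shaving step with a charge-recovery guard that only recharges while demand is low, so charging adds no new peak. The battery ratings and thresholds are hypothetical values for illustration, not the paper's.

```python
# Sketch of a constant-threshold peak-shaving rule with low-demand charge
# recovery, on 1-minute steps. All parameter values are illustrative.

def peak_shave_step(load_kw, soc_kwh, capacity_kwh=10.0, max_power_kw=3.0,
                    peak_threshold_kw=4.0, recover_below_kw=2.0, dt_h=1 / 60):
    """Return (grid_kw, new_soc_kwh) for one 1-minute time step."""
    if load_kw > peak_threshold_kw and soc_kwh > 0:
        # Discharge to clip the peak, limited by power rating and stored energy.
        discharge = min(load_kw - peak_threshold_kw, max_power_kw, soc_kwh / dt_h)
        return load_kw - discharge, soc_kwh - discharge * dt_h
    if load_kw < recover_below_kw and soc_kwh < capacity_kwh:
        # Recover charge only while demand is low and never above the threshold,
        # so recharging cannot create a new peak.
        charge = min(max_power_kw, (capacity_kwh - soc_kwh) / dt_h,
                     peak_threshold_kw - load_kw)
        return load_kw + charge, soc_kwh + charge * dt_h
    return load_kw, soc_kwh

# Toy profile (kW, 1-minute steps): a demand spike is clipped, then the
# battery recharges during the low-demand minutes.
soc = 5.0
grid = []
for load in [1.0, 6.5, 7.0, 1.5, 1.0]:
    g, soc = peak_shave_step(load, soc)
    grid.append(round(g, 3))
print(grid)
```

On this toy profile the grid draw is held at the threshold throughout, illustrating the flattened profile the abstract describes.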

Keywords: decentralised control, domestic integrated batteries, electricity network performance, peak-shaving algorithm

Procedia PDF Downloads 108
20300 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure-time process and time-variant predictors. A common assumption in the conventional joint models of the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are only indirectly observable and must be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach that deals with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random-coefficient trajectory model for describing the individual trajectories of the latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and of the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 132
20299 Time's Arrow and Entropy: Violations to the Second Law of Thermodynamics Disrupt Time Perception

Authors: Jason Clarke, Michaela Porubanova, Angela Mazzoli, Gulsah Kut

Abstract:

What accounts for our perception that time inexorably passes in one direction, from the past to the future, the so-called arrow of time, given that the laws of physics permit motion in one temporal direction to also happen in the reverse temporal direction? Modern physics says that the reason for time’s unidirectional physical arrow is the relationship between time and entropy, the degree of disorder in the universe, which is evolving from low entropy (high order; thermal disequilibrium) toward high entropy (high disorder; thermal equilibrium), the second law of thermodynamics. Accordingly, our perception of the direction of time, from past to future, is believed to emanate as a result of the natural evolution of entropy from low to high, with low entropy defining our notion of ‘before’ and high entropy defining our notion of ‘after’. Here we explored this proposed relationship between entropy and the perception of time’s arrow. We predicted that if the brain has some mechanism for detecting entropy, whose output feeds into processes involved in constructing our perception of the direction of time, presentation of violations to the expectation that low entropy defines ‘before’ and high entropy defines ‘after’ would alert this mechanism, leading to measurable behavioral effects, namely a disruption in duration perception. To test this hypothesis, participants were shown briefly-presented (1000 ms or 500 ms) computer-generated visual dynamic events: novel 3D shapes that were seen either to evolve from whole figures into parts (low to high entropy condition) or were seen in the reverse direction: parts that coalesced into whole figures (high to low entropy condition). On each trial, participants were instructed to reproduce the duration of their visual experience of the stimulus by pressing and releasing the space bar. 
To ensure that attention was being deployed to the stimuli, a secondary task was to report the direction of the visual event (forward or reverse motion). Participants completed 60 trials. As predicted, we found that duration reproduction was significantly longer for the high to low entropy condition compared to the low to high entropy condition (p=.03). This preliminary data suggests the presence of a neural mechanism that detects entropy, which is used by other processes to construct our perception of the direction of time or time’s arrow.

Keywords: time perception, entropy, temporal illusions, duration perception

Procedia PDF Downloads 156
20298 Gaze Patterns of Skilled and Unskilled Sight Readers Focusing on the Cognitive Processes Involved in Reading Key and Time Signatures

Authors: J. F. Viljoen, Catherine Foxcroft

Abstract:

Expert sight readers rely on their ability to recognize patterns in scores, their inner hearing, and their prediction skills in order to perform complex sight-reading exercises. They also have the ability to observe deviations from expected patterns in musical scores. This increases the “eye-hand span” (reading ahead of the point of playing) in order to process the elements in the score. The study aims to investigate the gaze patterns of expert and non-expert sight readers, focusing on key and time signatures. Twenty musicians were tasked with playing twelve sight-reading examples composed for one hand and five examples composed for two hands on a piano keyboard. These examples were composed in different keys and time signatures and included accidentals and changes of time signature to test this theory. Results showed that the experts fixated more often and for longer on key and time signatures, as well as on deviations, in the examples for two hands than the non-expert group did. The inverse was true for the examples for one hand, where expert sight readers showed fewer and shorter fixations on key and time signatures and on deviations. This seems to suggest that experts focus more on the key and time signatures, as well as on deviations, in complex scores to facilitate sight reading. The examples written for one hand appeared to be too easy for the expert sight readers, compromising the gaze patterns.

Keywords: cognition, eye tracking, musical notation, sight reading

Procedia PDF Downloads 130
20297 Uncommon Presentation of Ischaemic Heart Disease with Sheehan’s Syndrome at a Mid-Level Private Hospital of Bangladesh and Its Management: A Case Report

Authors: Nazmul Haque, Syeda Tasnuva Maria

Abstract:

Sheehan's syndrome (SS), also known as postpartum hypopituitarism, is a rare but potentially serious condition resulting from ischaemic necrosis of the pituitary gland, often occurring during or after childbirth. The syndrome is characterized by hypopituitarism, leading to deficiencies in the various hormones produced by the pituitary gland. The primary cause is typically severe postpartum haemorrhage, which leads to inadequate blood supply and subsequent necrosis of pituitary tissue. This chronic hypopituitarism sometimes contributes to premature atherosclerosis, which may lead to cardiovascular disease. This abstract provides a comprehensive overview of Sheehan's syndrome with ischaemic heart disease, encompassing its pathophysiology, clinical manifestations, and current management strategies. The disorder presents a wide spectrum of symptoms, including chest pain, fatigue, amenorrhea, lactation failure, hypothyroidism, and adrenal insufficiency. Timely diagnosis is crucial, as delayed recognition can lead to complications and long-term health consequences. We herein report a patient complaining of chronic fatigue, aggressiveness, chest pain, and breathlessness with repeated loss of consciousness, who was diagnosed with SS with ischaemic heart disease (IHD). The patient was treated with antiplatelet, antianginal, steroid, and hormone replacement therapy, with marked improvement in her overall condition.

Keywords: ischaemic heart disease, Sheehan's syndrome, post-partum haemorrhage, pituitary gland

Procedia PDF Downloads 41
20296 Valorization of Sugarcane Bagasse: The Effect of Alkali Concentration, Soaking Time and Temperature on Fibre Yield

Authors: Tamrat Tesfaye, Tilahun Seyoum, K. Shabaridharan

Abstract:

The objective of this paper is to determine the effect of NaOH concentration, soaking time, and soaking temperature, and their interactions, on the percentage yield of fibre extract using response surface methodology (RSM). A Box-Behnken design was employed to optimize the extraction of cellulosic fibre from the sugarcane by-product bagasse using a low-alkaline extraction technique. The quadratic model with the optimal technological conditions resulted in a maximum fibre yield of 56.80% at 0.55 N NaOH concentration, 4 h soaking time, and 60°C soaking temperature. Among the independent variables, concentration was found to be the most significant (p < 0.005), and the interaction effect of concentration and soaking time was significant in securing the optimized process.
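Once a Box-Behnken quadratic response surface has been fitted, its optimum can be located by a simple search over the experimental region. The coefficients below are invented so that the surface peaks at the reported optimum; only the optimum point (0.55 N, 4 h, 60°C) and the maximum yield (56.80%) come from the abstract.

```python
# Illustrative sketch of searching a fitted quadratic response surface for
# its optimum. The coefficients are hypothetical; only the optimum location
# and maximum yield are taken from the abstract.
import itertools

def yield_model(c, t, temp):
    """Hypothetical quadratic surface peaking at the reported optimum
    (concentration 0.55 N, soaking time 4 h, temperature 60 C)."""
    return (56.80
            - 120.0 * (c - 0.55) ** 2          # concentration: dominant term
            - 0.8 * (t - 4.0) ** 2             # soaking time
            - 0.01 * (temp - 60.0) ** 2        # soaking temperature
            - 2.0 * (c - 0.55) * (t - 4.0))    # concentration x time interaction

# Grid search over an assumed experimental region.
grid = itertools.product(
    [0.25 + 0.05 * i for i in range(13)],      # 0.25 .. 0.85 N NaOH
    [2, 3, 4, 5, 6],                           # hours
    [40, 50, 60, 70, 80],                      # deg C
)
best = max(grid, key=lambda p: yield_model(*p))
print(best, round(yield_model(*best), 2))
```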

Keywords: sugarcane bagasse, low alkaline, Box-Behnken, fibre

Procedia PDF Downloads 239
20295 Entrepreneurial Leadership in Malaysian Public University: Competency and Behavior in the Face of Institutional Adversity

Authors: Noorlizawati Abd Rahim, Zainai Mohamed, Zaidatun Tasir, Astuty Amrin, Haliyana Khalid, Nina Diana Nawi

Abstract:

Entrepreneurial leaders are sought as in-demand talent to lead profit-driven organizations during turbulent and unprecedented times. However, research regarding the pertinence of their roles in the public sector has been limited. This paper examined the characteristics of the challenging experiences encountered by senior leaders in public universities that require them to embrace entrepreneurialism in their leadership. Through a focus-group interview with five top senior leaders of Malaysian public universities with experience as Vice-Chancellors, we explored and developed a framework of institutional-adversity characteristics and exemplary entrepreneurial leadership competency in the face of adversity. The complexity of diverse stakeholders, the multiplicity of academic disciplines, unfamiliarity with leading different and broader roles, setting new directions, and creating change in a high-velocity and uncertain environment are among the dimensions that characterise institutional adversity. Our findings revealed that learning agility, opportunity-recognition capacity, and bridging capability are among the characteristics of entrepreneurial university leaders. The findings reinforce that the presence of specific attributes in institutional adversity, and experience in overcoming those challenges, may contribute to the development of entrepreneurial leadership capabilities.

Keywords: bridging capability, entrepreneurial leadership, leadership development, learning agility, opportunity recognition, university leaders

Procedia PDF Downloads 103
20294 Improving Automotive Efficiency through Lean Management Tools: A Case Study

Authors: Raed El-Khalil, Hussein Zeaiter

Abstract:

Managing and improving efficiency in the current highly competitive global automotive industry demands that companies adopt leaner and more flexible systems. During the past 20 years, the domestic automotive industry in North America has focused on establishing new management strategies to meet market demands. The lean management process, also known as the Toyota Production System (TPS) or lean manufacturing, encompasses tools and techniques established to provide the best-quality product with the fastest lead time at the lowest cost. The following paper presents a study focused on improving labor efficiency at a facility of one of the Big Three (Ford, GM, Chrysler LLC) domestic automotive manufacturers in North America. The objective of the study was to utilize several lean management tools to optimize efficiency and utilization levels in the “pre-marriage” chassis area of a truck manufacturing and assembly facility. Utilizing three lean tools (standardized work, the 7 wastes, and 5S), this research improved efficiency by 51% and utilization by 246%, and reduced the number of operations by 14%. The return on investment calculated from these improvements was 284%.

Keywords: lean manufacturing, standardized work, operation efficiency, utilization

Procedia PDF Downloads 501
20293 Effects of Plasma Treatment on Seed Germination

Authors: Yong Ho Jeon, Youn Mi Lee, Yong Yoon Lee

Abstract:

The effects of cold plasma treatment on the germination of various plant seeds were studied. Seeds of hot pepper, cucumber, tomato, and Arabidopsis were exposed to plasma generated by various devices, and germination speed was evaluated against an unexposed control. A positive effect on germination speed was observed in all tested seeds, but the effect depended strongly on the plasma device used (argon-DBD, surface-DBD, or Marx generator), the exposure time (6 s to 10 min, or 1 to 10 shots), and the kind of seed. SEM images showed arrays of gold particles along the cell wall on the surface of cucumber seeds whose germination was accelerated by plasma treatment, the same as in untreated seeds. However, when seeds were treated with a high plasma dose, gold particles were not arrayed at the seed surface, seemingly due to surface etching. This suggests that germination is not promoted by etching or surface damage caused by the plasma treatment. Seedling growth improvement was also observed with indirect plasma treatment. These results lead to the important conclusion that charged particles in the plasma play an essential role in germination, and that indirect plasma treatment offers new perspectives for large-scale application.

Keywords: cold plasma, cucumber, germination, SEM

Procedia PDF Downloads 299
20292 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used for a similar purpose, so a product's utility is an important factor in the preferability of a brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by system capabilities. Reliability is an important system design criterion for manufacturers aiming at high availability. Availability is the probability that a system (or a component) is performing its function properly at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When only the corrective-maintenance downtime of the system is considered, mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. MTBF and MTTR are also important measures for reliability engineers and practitioners seeking to improve system performance through suitable maintenance strategies. Conventional availability analysis requires the failure-time and repair-time probability distributions of each component in the system. Generally, however, companies do not have statistics or quality-control departments storing such large amounts of data, and real events or situations are described deterministically instead of with stochastic data. Fuzzy set theory is an alternative for analyzing the uncertainty and vagueness of real systems. The aim of this study is to present a novel approach to computing system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers.
To the best of our knowledge, the proposed method is the first to use fuzzy MTBF and fuzzy MTTR for fuzzy system-availability estimation. The method is easy for practitioners to apply to any repairable production system, and it allows reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that a failed component becomes as good as new after repair. Compared with classical methods, using fuzzy set theory to obtain intervals for these measures gives system managers and practitioners much more detailed information about system characteristics and helps them find better results for their working conditions.
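As a rough illustration of the approach (with made-up MTBF/MTTR values, not the factory's data), the steady-state availability A = MTBF / (MTBF + MTTR) can be evaluated on triangular fuzzy numbers by interval arithmetic:

```python
def tfn(center, spread):
    """Symmetric triangular fuzzy number as (lower, mode, upper)."""
    return (center * (1.0 - spread), center, center * (1.0 + spread))

def fuzzy_availability(mtbf, mttr):
    """Fuzzy steady-state availability A = MTBF / (MTBF + MTTR).
    The lower bound pairs the pessimistic MTBF with the pessimistic MTTR,
    the upper bound the optimistic pair (interval arithmetic)."""
    lo = mtbf[0] / (mtbf[0] + mttr[2])
    mode = mtbf[1] / (mtbf[1] + mttr[1])
    hi = mtbf[2] / (mtbf[2] + mttr[0])
    return lo, mode, hi

# Hypothetical station data: MTBF 120 h, MTTR 4 h, 20 % spread
A = fuzzy_availability(tfn(120.0, 0.20), tfn(4.0, 0.20))
print(A)
```

The crisp availability (the mode) is bracketed by a pessimistic and an optimistic bound, which is the interval information the abstract argues is useful to managers.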

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 216
20291 Intelligent Rainwater Reuse System for Irrigation

Authors: Maria M. S. Pires, Andre F. X. Gloria, Pedro J. A. Sebastiao

Abstract:

Technological advances in the Internet of Things are producing more and more solutions for agriculture. These solutions are important because they help save the most precious resource, water, a concern worldwide. This paper proposes an Internet of Things system based on a network of interconnected sensors and actuators that automatically monitors the quality of rainwater stored in a tank so that it can be used for irrigation. The main objective is to promote sustainability by reusing rainwater for irrigation instead of water that would otherwise be available for other uses, whether other production processes or domestic tasks. A mobile application was developed for Android so that users can control and monitor the system in real time. In the application, it is possible to visualize the data describing the quality of the water in the tank and to operate the implemented actuators, for example starting or stopping the irrigation system and discarding the water when its quality is poor. The implemented system is a simple, highly efficient solution, supported by tests and results obtained in a realistic environment.
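The sensing-and-actuation loop described above can be sketched as a simple decision rule; the threshold values below are illustrative assumptions, since the paper does not list its water-quality criteria:

```python
# Threshold values are illustrative assumptions, not criteria from the paper.
PH_RANGE = (5.5, 8.0)        # acceptable pH window for irrigation water
MAX_TURBIDITY_NTU = 50.0     # above this, the stored water is discarded

def decide(ph, turbidity_ntu, tank_level_pct):
    """Return the actuator command for one sensing cycle."""
    if tank_level_pct < 5.0:
        return "idle"        # tank (nearly) empty: nothing to do
    if not (PH_RANGE[0] <= ph <= PH_RANGE[1]) or turbidity_ntu > MAX_TURBIDITY_NTU:
        return "drain"       # poor quality: pour the water out
    return "irrigate"        # good quality: run the irrigation system

print(decide(7.0, 12.0, 80.0))
```

On the real device this rule would run on the ESP32 against live sensor readings, with the mobile app able to override the command remotely.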

Keywords: internet of things, irrigation system, wireless sensor and actuator network, ESP32, sustainability, water reuse, water efficiency

Procedia PDF Downloads 141
20290 Detection and Quantification of Viable but Not Culturable Vibrio Parahaemolyticus in Frozen Bivalve Molluscs

Authors: Eleonora Di Salvo, Antonio Panebianco, Graziella Ziino

Abstract:

Background: Vibrio parahaemolyticus is a human pathogen that is widely distributed in marine environments and frequently isolated from raw seafood, particularly shellfish. Consumption of raw or undercooked seafood contaminated with V. parahaemolyticus may lead to acute gastroenteritis. Vibrio spp. tolerate low temperatures well and can therefore persist in frozen products for a long time. Recently, the viable but non-culturable (VBNC) state of bacteria has attracted great attention, and more than 85 bacterial species have been shown to be capable of entering it. VBNC cells cannot grow on conventional culture media but remain viable and metabolically active, and may thus constitute an unrecognized source of food contamination and infection. V. parahaemolyticus can also enter the VBNC state under nutrient starvation or low-temperature conditions. Aim: The aim of the present study was to optimize methods to detect V. parahaemolyticus VBNC cells and to investigate their presence in frozen bivalve molluscs regularly found on the market. Materials and Methods: Propidium monoazide (PMA) treatment was combined with real-time polymerase chain reaction (qPCR) targeting the tlh gene to detect and quantify V. parahaemolyticus in the VBNC state. PMA-qPCR proved highly specific for V. parahaemolyticus, with a limit of detection (LOD) of 10-1 log CFU/mL in pure bacterial culture. A standard curve for V. parahaemolyticus cell concentrations was established, with a correlation coefficient of 0.9999 over the linear range of 1.0 to 8.0 log CFU/mL. A total of 77 samples of frozen bivalve molluscs (35 mussels; 42 clams) were then subjected to qualitative (in alkaline phosphate buffer solution) and quantitative detection of V. parahaemolyticus on thiosulfate-citrate-bile salts-sucrose (TCBS) agar (Difco) with 2.5% NaCl, incubated at 30 °C for 24-48 hours.
Real-time PCR was conducted on homogenate samples, in duplicate, with and without propidium monoazide (PMA) dye, after exposure for 45 min under halogen lights (650 W). Total DNA was extracted from cell suspensions of the homogenate samples by a boiling protocol. The real-time PCR was conducted with species-specific primers for V. parahaemolyticus in a final volume of 20 µL, containing 10 µL of SYBR Green mixture (Applied Biosystems), 2 µL of template DNA, 2 µL of each primer (final concentration 0.6 mM), and 4 µL of H2O. The qPCR was carried out on a CFX96 Touch (Bio-Rad, USA). Results: All samples were negative in both the quantitative and qualitative detection of V. parahaemolyticus by the classical culturing technique. PMA-qPCR detected VBNC V. parahaemolyticus in 20.78% of the samples, at values between Log 10-1 and Log 10-3 CFU/g; only clam samples were positive. Conclusion: The present research is the first to evaluate a PMA-qPCR assay for detection of VBNC V. parahaemolyticus in bivalve mollusc samples, and the method proved suitable for the rapid control of marketed bivalve molluscs. We strongly recommend the use of PMA-qPCR to identify VBNC forms, which are undetectable by classical microbiological methods. Precise knowledge of V. parahaemolyticus in the VBNC form is fundamental for correct risk assessment, not only in bivalve molluscs but also in other seafood.
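Quantification by qPCR rests on a linear standard curve relating Ct values to log10 cell concentration; a minimal sketch of fitting and inverting such a curve (with a made-up dilution series, not the study's data) is:

```python
def fit_standard_curve(log_cfu, ct):
    """Least-squares line Ct = slope * log10(CFU) + intercept."""
    n = len(ct)
    mx, my = sum(log_cfu) / n, sum(ct) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log_cfu, ct))
    sxx = sum((x - mx) ** 2 for x in log_cfu)
    slope = sxy / sxx
    return slope, my - slope * mx

def quantify(ct, slope, intercept):
    """Invert the curve: a sample's Ct value -> log10 CFU/mL."""
    return (ct - intercept) / slope

# Made-up 8-point dilution series with an ideal slope of -3.3 Ct per log:
logs = [1, 2, 3, 4, 5, 6, 7, 8]
cts = [36.7, 33.4, 30.1, 26.8, 23.5, 20.2, 16.9, 13.6]
slope, intercept = fit_standard_curve(logs, cts)
print(slope, quantify(25.0, slope, intercept))
```

A slope near -3.3 corresponds to an amplification efficiency near 100%, which is what a correlation coefficient of 0.9999 over a 1.0-8.0 log range implies.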

Keywords: food safety, frozen bivalve molluscs, PMA dye, Real-time PCR, VBNC state, Vibrio parahaemolyticus

Procedia PDF Downloads 125
20289 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning newsvendor model, with lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the newsvendor model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it applies when product demand is stochastic and confined to a single selling season, and when the vendor has only a one-time opportunity to purchase, possibly with a long ordering lead time. Our work differs from the classical newsvendor model in that we incorporate the human factor (specifically, worker learning) and its influence on unit processing costs into the model, described by the well-known Wright learning curve. Most assumptions of the classical newsvendor model are maintained, such as constant per-unit leftover and shortage costs, zero initial inventory, and continuous time. The problem is challenging in that the best order quantity in the classical model, which balances overstocking and understocking costs, is no longer optimal: once the cost saving from worker learning is added to the expected total cost, the convexity of the cost function is generally not maintained. This calls for a new way of determining the optimal order policy. In response, we identified a number of characteristics of the expected cost function and its derivatives, which we then used to formulate the optimal ordering policy.
Examples of such characteristics are: the optimal order quantity exists and is unique if demand follows a uniform distribution; if demand follows a beta distribution with certain parameter properties, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size satisfying the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of periodic review systems for similar problems.
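A numerical sketch of the idea under uniform demand: with Wright's learning curve the n-th unit costs c1 * n**b (b = log2 of the learning rate), and because convexity is not guaranteed, the lot size can be found by direct search over the expected total cost. All parameter values are illustrative, not from the paper:

```python
import math

def processing_cost(q, c1=10.0, lr=0.9):
    """Cumulative processing cost of q units under Wright's learning curve:
    the n-th unit costs c1 * n**b with b = log2(learning rate)."""
    b = math.log2(lr)
    return sum(c1 * n ** b for n in range(1, q + 1))

def expected_mismatch_cost(q, lo=50, hi=150, c_over=2.0, c_under=8.0):
    """Expected leftover + shortage cost for demand ~ discrete Uniform{lo..hi}."""
    n = hi - lo + 1
    over = sum(max(q - d, 0) for d in range(lo, hi + 1)) / n
    under = sum(max(d - q, 0) for d in range(lo, hi + 1)) / n
    return c_over * over + c_under * under

def best_order(lo=50, hi=150):
    """Direct search for the cost-minimizing lot size."""
    return min(range(lo, hi + 1),
               key=lambda q: processing_cost(q) + expected_mismatch_cost(q))

q_star = best_order()
print(q_star)
```

Note that with the processing cost included, the optimum falls below the classical critical-fractile quantity (here 130 for a 0.8 fractile), illustrating why the classical balance is no longer optimal.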

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 406
20288 Numerical Experiments for the Purpose of Studying Space-Time Evolution of Various Forms of Pulse Signals in the Collisional Cold Plasma

Authors: N. Kh. Gomidze, I. N. Jabnidze, K. A. Makharadze

Abstract:

The influence of plasma inhomogeneities and statistical characteristics on signal propagation is highly relevant to wireless communication systems. While propagating through a medium, a signal is deformed and evolves in time and space, so a distorted signal arrives at the receiver. The present article studies, via numerical experiment, the space-time evolution of rectangular, sinusoidal, exponential, and bi-exponential pulses in a collisional, cold plasma. The presented method is not based on a Fourier representation of the signal. Analytically, we derived a general expression describing the space-time evolution of the radio-pulse amplitude, which makes it possible to analyze concrete results for each form of the initial pulse.
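A toy time-domain model (not the authors' formulation) helps illustrate what is meant by envelope evolution: each initial envelope shape is carried at a group speed and attenuated by collisions as it propagates:

```python
import math

NU = 2.0e7  # effective collision frequency, s^-1 (illustrative value)

def envelope(shape, t, tau=1.0e-6):
    """Initial unit-amplitude pulse envelopes considered in the paper."""
    if shape == "rect":
        return 1.0 if 0.0 <= t <= tau else 0.0
    if shape == "sin":
        return math.sin(math.pi * t / tau) if 0.0 <= t <= tau else 0.0
    if shape == "exp":
        return math.exp(-t / tau) if t >= 0.0 else 0.0
    if shape == "biexp":
        return math.exp(-t / tau) - math.exp(-2.0 * t / tau) if t >= 0.0 else 0.0
    raise ValueError(shape)

def damped(shape, t, z, v=3.0e7):
    """Toy model of the envelope at depth z: the initial shape travels at
    group speed v in retarded time and is damped as exp(-NU * z / (2 v))."""
    return envelope(shape, t - z / v) * math.exp(-NU * z / (2.0 * v))

print(damped("rect", 5.0e-7, z=0.0), damped("rect", 5.0e-7 + 10.0 / 3.0e7, z=10.0))
```

The actual numerical experiments in the paper additionally capture shape deformation, which this constant-attenuation sketch deliberately omits.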

Keywords: collisional, cold plasma, rectangular pulse signal, impulse envelope

Procedia PDF Downloads 376
20287 Handy EKG: Low-Cost ECG For Primary Care Screening In Developing Countries

Authors: Jhiamluka Zservando Solano Velasquez, Raul Palma, Alejandro Calderon, Servio Paguada, Erick Marin, Kellyn Funes, Hana Sandoval, Oscar Hernandez

Abstract:

Background: Screening for cardiac conditions in primary care in developing countries can be challenging, and Honduras is no exception. One of the main limitations is the underfunding of the healthcare system in general, which makes conventional ECG acquisition a secondary priority. Objective: To develop a low-cost ECG to improve screening for arrhythmias in primary care and communication with specialists in secondary and tertiary care. Methods: We designed a portable, pocket-size, low-cost 3-lead ECG (Handy EKG). The device is autonomous and has Wi-Fi/Bluetooth connectivity options. A mobile app was designed that can access online servers running machine learning, a subset of artificial intelligence, to learn from the data and aid clinicians in interpreting the readings. Additionally, the device uses the online servers to transfer patients' data and readings to specialists in secondary and tertiary care. Fifty randomized volunteers with no previous cardiac conditions participated in testing the device. One reading was taken with a conventional ECG and three readings with the Handy EKG using different lead positions. This project was possible thanks to funding provided by the National Autonomous University of Honduras. Results: Preliminary results show that the Handy EKG, at a lower cost, performs readings of cardiac activity similar to those of a conventional electrocardiograph in leads I, II, and III, depending on lead position. The wave and segment durations, amplitudes, and morphology of the readings were similar to those of the conventional ECG, and interpretation allowed us to conclude whether an arrhythmia was present. Two cases of prolonged PR segment were found in the readings of both devices.
Conclusion: A frugal-innovation approach can allow lower-income countries to develop innovative medical devices such as the Handy EKG that fulfill unmet needs at lower prices without compromising effectiveness, safety, and quality. The Handy EKG provides a solution for primary care screening at a much lower cost and allows convenient storage of the readings on online servers, where cardiology specialists can then access patients' clinical data remotely.
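Arrhythmia screening from a trace typically starts with locating R peaks and deriving the heart rate; a naive, illustrative detector (not the Handy EKG's actual algorithm) on a synthetic signal might look like:

```python
import math

FS = 250  # sampling rate in Hz, typical for a low-cost portable device

def r_peaks(signal, thresh=0.6, refractory=int(0.2 * FS)):
    """Naive R-peak detector: local maxima above a threshold, separated by a
    200 ms refractory period so one QRS complex is not counted twice."""
    peaks, last = [], -refractory
    for i in range(1, len(signal) - 1):
        if (signal[i] > thresh and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(peaks):
    """Mean heart rate from the R-R intervals, in beats per minute."""
    rr = [(b - a) / FS for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))

# Synthetic 10 s trace at 72 bpm: narrow Gaussian 'R waves' on a flat baseline
period = 60.0 / 72.0
sig = [sum(math.exp(-((i / FS - k * period) ** 2) / (2.0 * 0.008 ** 2))
           for k in range(12))
       for i in range(10 * FS)]
peaks = r_peaks(sig)
print(round(heart_rate_bpm(peaks), 1))
```

In the Handy EKG pipeline this kind of feature extraction would run server-side, feeding the machine-learning interpretation aid rather than a fixed rule.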

Keywords: low-cost hardware, portable electrocardiograph, prototype, remote healthcare

Procedia PDF Downloads 171
20286 An Engineering Application of the H-P Version of the Finite Element Method on Vibration Behavior of Rotors

Authors: Hadjoui Abdelhamid, Saimi Ahmed

Abstract:

This paper describes a hybrid h-p finite element method for the dynamic behavior of nonlinear rotors. The standard h-version discretization of the problem is retained but modified to allow the use of polynomially enriched beam elements. A hierarchical enrichment of an element thus does not affect the nodal displacements and rotations, but does influence the values of the nodal bending moment and shear force. The deterministic rotational and translational motions of the support, which couple to the excitations due to unbalance, are also taken into account. We also study the geometric dissymmetry of the shaft and the disc; the equations of motion of the rotor therefore contain parametric coefficients that vary over time and can lead to lateral dynamic instability. The effects of combined support motions on the bearings are analyzed and discussed through Campbell diagrams and spectral analyses. A program was written in MATLAB and validated, after which several examples were studied. Through these examples, the influence of physical and geometric parameters on the natural frequencies of the shaft is determined; these parameters include the diameter and thickness of the rotor and the position of the disc.
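A Campbell diagram plots natural frequencies against spin speed. For a single rigid disc with gyroscopic coupling, the forward and backward whirl frequencies follow from a quadratic, as in this illustrative sketch (parameter values are made up, and the paper's FE model is far richer):

```python
import math

def whirl_frequencies(spin, Id=0.08, Ip=0.15, k=1.0e5):
    """Forward/backward whirl frequencies (rad/s) of a rigid disc (diametral
    inertia Id, polar inertia Ip) on a shaft of tilting stiffness k, spinning
    at 'spin' rad/s; they are the positive roots of Id*w**2 -/+ Ip*spin*w - k = 0."""
    disc = math.sqrt((Ip * spin) ** 2 + 4.0 * Id * k)
    forward = (Ip * spin + disc) / (2.0 * Id)
    backward = (-Ip * spin + disc) / (2.0 * Id)
    return forward, backward

# Campbell diagram data: natural frequencies tabulated against spin speed
campbell = [(s, *whirl_frequencies(s)) for s in range(0, 1100, 100)]
print(campbell[0], campbell[-1])
```

Critical speeds are then read off where the synchronous line w = spin crosses these branches; the gyroscopic terms split the modes, the forward branch stiffening and the backward branch softening with speed.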

Keywords: Campbell diagram, critical speeds, nonlinear rotor, version h-p of FEM

Procedia PDF Downloads 225
20285 Investigating the Atmospheric Phase Distribution of Inorganic Reactive Nitrogen Species along the Urban Transect of Indo Gangetic Plains

Authors: Reema Tiwari, U. C. Kulshrestha

Abstract:

As a key regulator of atmospheric oxidative capacity and secondary aerosol formation, reactive nitrogen (Nr) emissions leave increasingly evident signatures in the cascade of air pollution, acidification, and eutrophication of ecosystems. However, accurate estimates for the N budget remain limited by photochemical conversion processes, in which the differing atmospheric residence times of gaseous (NOₓ, HNO₃, NH₃) and particulate (NO₃⁻, NH₄⁺) Nr species become decisive for their spatio-temporal evolution on a synoptic scale. The present study attempts to quantify such interactions under tropical conditions, when low anticyclonic winds favor advection from the west during winter. For this purpose, diurnal sampling was conducted with a low-volume sampler assembly; ambient concentrations of Nr trace gases and of the ionic fractions in the aerosol samples were determined with a UV spectrophotometer and ion chromatography, respectively. The results showed a spatial gradient of the gaseous precursors, with much more pronounced inter-site variability (p < 0.05) than for their particulate fractions. These observations were confirmed by the limited photochemical conversion at the background site, where day-to-night (D/N) ratios below 1 for the different Nr fractions suggested an influence of boundary-layer dynamics. The phase-conversion processes were further corroborated by the molar ratios NOₓ/NOᵧ and NH₃/NHₓ, which indicated incomplete titration of NOₓ and NH₃ emissions irrespective of diurnal phase along the sampling transect. Calculations with equilibrium-based approaches for the NH₃-HNO₃-NH₄NO₃ system, on the other hand, were characterized by delays in reaching equilibrium; plots of the below-deliquescence Kₘ and Kₚ values against 1000/T confirmed the role of the lower temperature range in NH₄NO₃ aerosol formation.
These results will help not only to resolve the changing atmospheric inputs of reduced (NH₃, NH₄⁺) and oxidized (NOₓ, HNO₃, NO₃⁻) Nr estimates but also to understand the dependence of Nr mixing ratios on local meteorological conditions.
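For reference, the temperature dependence of the NH₄NO₃ dissociation constant below deliquescence is often evaluated with the Stelson-Seinfeld (1982) parameterization; the sketch below assumes that form (the coefficients should be checked against the original source before use):

```python
import math

def kp_nh4no3(T):
    """Dissociation constant of solid NH4NO3 <-> NH3(g) + HNO3(g), in ppb^2,
    below deliquescence, in the assumed Stelson-Seinfeld (1982) form."""
    return math.exp(84.6 - 24220.0 / T - 6.1 * math.log(T / 298.0))

# Lower winter temperatures shift the equilibrium toward particulate NH4NO3:
for T in (278.0, 288.0, 298.0):
    print(T, kp_nh4no3(T))
```

The strong decrease of Kₚ toward lower temperatures is exactly the behavior the Kₚ-versus-1000/T plots in the study exploit to explain wintertime NH₄NO₃ formation.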

Keywords: diurnal ratios, gas-aerosol interactions, spatial gradient, thermodynamic equilibrium

Procedia PDF Downloads 121
20284 On Flexible Preferences for Standard Taxis, Electric Taxis, and Peer-to-Peer Ridesharing

Authors: Ricardo Daziano

Abstract:

In the analysis and planning of the mobility ecosystem, preferences for ride-hailing over incumbent street-hailing services need to be better understood. In this paper, a semi-nonparametric discrete choice model that allows for flexible preference heterogeneity is fitted with data from a discrete choice experiment among adult commuters in Montreal, Canada (N = 760). Participants chose among Uber, Teo (a local electric ride-hailing service in operation when the data were collected in 2018), and a standard taxi when presented with information about cost; time (in-vehicle, waiting, walking); the powertrain of the car (gasoline/hybrid) for Uber and taxi; and whether the available electric Teo was a Tesla (which was one of the actual features of the Teo fleet). The fitted flexible model offers several behavioral insights. Waiting time for ride-hailing services is associated with a statistically significant but low marginal disutility. For the other time components, including in-vehicle time and street-hailing waiting and walking time, the value-of-time estimates show an interesting pattern: whereas a conditional logit values in-vehicle time reductions more highly, the means of the value of time in the flexible LML specification follow the expected pattern of waiting and walking creating a higher disutility. At the same time, the LML estimates reveal the presence of important, multimodal unobserved preference heterogeneity.
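The conditional logit baseline referred to above computes choice probabilities from linear-in-parameters utilities, and values of time as ratios of coefficients; a minimal sketch with hypothetical coefficients (not the paper's estimates):

```python
import math

def mnl_probs(utilities):
    """Conditional-logit choice probabilities P_j = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)             # subtract the max for numerical stability
    e = [math.exp(v - m) for v in utilities]
    s = sum(e)
    return [x / s for x in e]

def value_of_time(beta_time, beta_cost):
    """Marginal rate of substitution: $ per hour of time saved
    (coefficients expressed per minute and per $)."""
    return beta_time / beta_cost * 60.0

# Hypothetical coefficients and attribute levels (NOT the paper's estimates):
b_cost, b_wait, b_ride = -0.25, -0.08, -0.04
V_taxi = b_cost * 18 + b_wait * 6 + b_ride * 20
V_uber = b_cost * 14 + b_wait * 4 + b_ride * 22
V_teo = b_cost * 15 + b_wait * 5 + b_ride * 22
p = mnl_probs([V_taxi, V_uber, V_teo])
print(p, value_of_time(b_wait, b_cost))
```

The LML model of the paper generalizes this by letting the beta coefficients follow a flexibly estimated, possibly multimodal distribution instead of being fixed constants.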

Keywords: discrete choice, electric taxis, ridehailing, semiparametrics

Procedia PDF Downloads 153
20283 Analyses and Optimization of Physical and Mechanical Properties of Direct Recycled Aluminium Alloy (AA6061) Wastes by ANOVA Approach

Authors: Mohammed H. Rady, Mohd Sukri Mustapa, S Shamsudin, M. A. Lajis, A. Wagiman

Abstract:

The present study investigates the microhardness and density of aluminium alloy chips subjected to various settings of preheating temperature and preheating time. Three preheating temperatures (450 °C, 500 °C, and 550 °C) and three preheating times (1, 2, and 3 hours) were chosen. The influence of the process parameters (preheating temperature and time) was analyzed using a Design of Experiments (DOE) approach, adopting a full factorial design with center-point analysis; the total of 11 runs comprised the two-factor full factorial design with 3 center points. The responses were microhardness and density. The results showed that density and microhardness increased with decreasing preheating temperature. They also showed that, for microhardness, controlling the preheating temperature matters more than controlling the preheating time, while for density both parameters are important. It can be concluded that a setting of 450 °C for 1 hour yields the optimum responses.
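One reading of the 11-run layout (a 3×3 full factorial whose center cell is replicated so the center point appears three times) can be sketched as follows, with illustrative response values rather than the measured data:

```python
from itertools import product

temps = [450, 500, 550]   # preheating temperature, deg C
times = [1, 2, 3]         # preheating time, h

# 3 x 3 grid plus two extra replicates of the center point = 11 runs
runs = list(product(temps, times)) + [(500, 2), (500, 2)]

def main_effect(runs, response, index, low, high):
    """Mean response at a factor's high level minus its mean at the low level."""
    lows = [response[r] for r in runs if r[index] == low]
    highs = [response[r] for r in runs if r[index] == high]
    return sum(highs) / len(highs) - sum(lows) / len(lows)

# Illustrative microhardness responses (HV), decreasing with temperature:
response = {(T, t): 80.0 - 0.04 * (T - 450) - 1.0 * (t - 1) for T, t in set(runs)}
print(len(runs), main_effect(runs, response, 0, 450, 550))
```

The study's ANOVA goes further than these raw main effects, partitioning the variance to test which factor is statistically significant for each response.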

Keywords: AA6061, density, DOE, hot extrusion, microhardness

Procedia PDF Downloads 344
20282 Driver Take-Over Time When Resuming Control from Highly Automated Driving in Truck Platooning Scenarios

Authors: Bo Zhang, Ellen S. Wilschut, Dehlia M. C. Willemsen, Marieke H. Martens

Abstract:

With the rapid development of intelligent transportation systems, automated truck platooning is drawing increasing interest for its beneficial effects on safety, energy consumption, and traffic-flow efficiency. Nevertheless, one major challenge lies in the safe transition of control from the automated system back to the human driver, especially after a long period of inattentiveness during highly automated driving. In this study, we investigated driver take-over time after a system-initiated request to leave the Virtual Tow Bar platooning system in a non-critical scenario. Twenty-two professional truck drivers participated in a truck-driving-simulator experiment; each was instructed to drive under three experimental conditions before the presentation of the take-over request (TOR): driver ready (drivers were instructed to monitor the road constantly), driver not ready (drivers were provided with a tablet), and eyes shut. The results showed significantly longer take-over times in both the driver-not-ready and eyes-shut conditions compared with the driver-ready condition. Further analysis revealed hand-movement time as the main factor causing the long response times in the driver-not-ready condition, while in the eyes-shut condition gaze reaction time also contributed substantially to the total take-over time. Besides the differences in means, large individual differences were found, especially in the two inattentive-driver conditions. We conclude that a personalized driver-readiness predictor is important for a safe transition.

Keywords: driving simulation, highly automated driving, take-over time, transition of control, truck platooning

Procedia PDF Downloads 243
20281 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in software code auditing is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or through public disclosure. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. Open-source C and C++ code is now available for creating a large-scale machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits, and developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. We retain semantic and syntactic information using state-of-the-art word-embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time were used to measure performance. A comparative analysis between results derived from features containing only a minimal text representation and features containing semantic and syntactic information found that all deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as the features, but require longer execution times because the word-embedding algorithm adds complexity to the overall system.
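The "minimal intermediate representation" step can be illustrated by a toy normalizer that keeps keywords and operators but renames user identifiers, so a model learns code structure rather than naming conventions (a simplification; the paper's actual pipeline is more involved):

```python
import re

KEYWORDS = {"if", "else", "for", "while", "return", "int", "char", "void",
            "unsigned", "sizeof", "struct", "static", "const", "break", "continue"}

def to_minimal_ir(source):
    """Tokenize a C-like function and rename user identifiers to VAR/FUN
    placeholders, keeping keywords, literals, and operators intact."""
    tokens = re.findall(
        r"[A-Za-z_]\w*|==|!=|<=|>=|->|\+\+|--|[^\sA-Za-z_0-9]|\d+", source)
    mapping, out = {}, []
    for i, tok in enumerate(tokens):
        if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in KEYWORDS:
            if tok not in mapping:
                # Treat a name directly followed by '(' as a function name
                kind = "FUN" if i + 1 < len(tokens) and tokens[i + 1] == "(" else "VAR"
                mapping[tok] = f"{kind}{sum(v.startswith(kind) for v in mapping.values()) + 1}"
            out.append(mapping[tok])
        else:
            out.append(tok)
    return out

src = "void copy(char *dst, char *src) { while (*src) { *dst++ = *src++; } }"
print(" ".join(to_minimal_ir(src)))
```

The resulting token stream (here an unbounded copy loop, the classic buffer-overflow pattern) is what would then be embedded with GloVe or fastText and fed to the sequence models.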

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 78