Search results for: loading factor performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18003

2673 Localization of Geospatial Events and Hoax Prediction in the UFO Database

Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi

Abstract:

Unidentified Flying Objects (UFOs) have long been an interesting topic for enthusiasts, and people all over the United States report such sightings online at the National UFO Reporting Center (NUFORC). Some of these reports are hoaxes, and among those that seem legitimate, our task is not to establish that the events indeed involve flying objects from aliens in outer space. Rather, we intend to identify whether a report was a hoax, as flagged by the UFO database team using their existing curation criteria. The database also provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-world events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate UFO reports by studying the online submissions. These events occur in geospatial clusters and are also time-dependent. We examine cluster density and data visualization to search the space of possible cluster realizations and select the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented to distinguish true events from hoax events, using the best available features such as region, week, time period, and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events; one of the UFO reports strongly correlates with a missile test conducted in the United States.
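As a rough illustration of the classification step this abstract describes, the sketch below trains a random forest on a handful of hypothetical report records; the feature encoding, the data, and the hoax labels are assumptions made for illustration and are not the authors' actual pipeline.

# Illustrative sketch only: a random forest hoax classifier on hypothetical
# NUFORC-style report features. Column names and data are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical curated reports; categorical features are label-encoded for brevity.
reports = pd.DataFrame({
    "region":      [3, 1, 4, 1, 5, 2, 0, 3, 2, 4, 1, 0],
    "week":        [12, 30, 45, 2, 18, 51, 7, 33, 22, 9, 40, 27],
    "time_period": [0, 2, 3, 1, 2, 0, 3, 1, 0, 2, 3, 1],   # e.g. night/morning/afternoon/evening
    "duration_s":  [30, 600, 45, 5, 1200, 90, 15, 300, 60, 20, 240, 10],
    "is_hoax":     [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0],   # label from database curation
})

X = reports.drop(columns="is_hoax")
y = reports["is_hoax"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))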

Keywords: time-series clustering, feature extraction, hoax prediction, geospatial events

Procedia PDF Downloads 366
2672 Design of Effective Decoupling Point in Build-To-Order Systems: Focusing on Trade-Off Relation between Order-To-Delivery Lead Time and Work in Progress

Authors: Zhiyong Li, Hiroshi Katayama

Abstract:

Since the 1990s, e-commerce and internet business have grown gradually around the world, and customers tend to express their demand attributes in terms of specification requirements for parts, components, product structure, etc. This paper deals with designing effective decoupling points for build-to-order (BTO) systems in an e-commerce environment, which can be realized through trade-off analysis between two major criteria: customer order lead time and the value of work in progress. These KPIs are critical for a successful BTO business, namely time-based service effectiveness in coping with customer requirements for the first criterion and cost-effectiveness with risk-averse operations for the second. The approach of this paper consists of an investigation of successful businesses operating under the BTO scheme, development of manufacturing models of this scheme, quantitative evaluation of the proposed models by calculating the two KPI values under various decoupling point distributions, and discussion of the results produced by each pattern of decoupling point distribution, where some cases provide Pareto-optimal performance. To extract the relevant trade-off relation between the considered KPIs from the two-dimensional resultant performance, logic developed in former research work, i.e. by Katayama and Fonseca, is applied. The obtained characteristics are evaluated as effective information for managing BTO manufacturing businesses.

Keywords: build-to-order (BTO), decoupling point, e-commerce, order-to-delivery lead time (ODLT), work in progress (WIP)

Procedia PDF Downloads 317
2671 Development of a Roadmap for Assessing the Sustainability of Buildings in Saudi Arabia Using Building Information Modeling

Authors: Ibrahim A. Al-Sulaihi, Khalid S. Al-Gahtani, Abdullah M. Al-Sugair, Aref A. Abadel

Abstract:

Achieving environmental sustainability is an important issue in many countries’ visions. Green/sustainable building is widely used terminology for describing environmentally friendly construction. Applying sustainable practices is of significant importance in various fields, including construction, which consumes an enormous amount of resources and causes a considerable amount of waste. The need for sustainability is greater in regions that suffer from limited natural resources and extreme weather conditions, such as Saudi Arabia. Since building designs are becoming more sophisticated, the need for tools that support decision-making on sustainability issues is increasing, especially in the design and preconstruction stages. In this context, Building Information Modeling (BIM) can aid in performing complex building performance analyses to ensure an optimized sustainable building design. Accordingly, this paper introduces a roadmap towards developing a systematic approach for assessing the sustainability of buildings using BIM. The approach includes a set of main processes: identifying the sustainability parameters that can be used for sustainability assessment in Saudi Arabia, developing a sustainability assessment method that fits the special circumstances of the Kingdom, identifying the sustainability requirements and the BIM functions that can be used to satisfy these requirements, and integrating these requirements with the identified functions. As a result, a sustainability-BIM approach can be developed that helps designers assess sustainability and explore different design alternatives at the early stages of a construction project.

Keywords: green buildings, sustainability, BIM, rating systems, environment, Saudi Arabia

Procedia PDF Downloads 372
2670 Towards the Effectiveness/Performance of Spatial Communication within the Composite Interior Spaces: Wayfinding System in the Saudi National Museum as a Case Study

Authors: Afnan T. Bagasi, Donia M. Bettaieb, Abeer Alsobahi

Abstract:

The wayfinding system relates directly and indirectly to the course of the museum journey for visitors. The design aspects of this system play an important role in making it an effective communication system within the museum space. However, translating the concepts that pertain to its design, such as intelligibility, which is based on integration and connectivity in museum space design, requires more customization in the form of specific design considerations with reference to the most important approaches. Those approaches link the organizational and practical aspects to the semiotic and semantic aspects related to space syntax by targeting the visual and perceived consistency of visitors. In this context, the study aims to identify how to apply the concepts of intelligibility and clarity, by employing integration and connectivity, to design a wayfinding system in museums as a kind of composite interior space. Using the available plans and images to extrapolate the design considerations applied in the wayfinding system of the Saudi National Museum as a case study, a descriptive-analytical method was used to understand the basic organizational and morphological principles of the museum space through four main aspects of space design: morphological, semantic, semiotic, and pragmatic. The study's findings will assist designers, professionals, and researchers in the field of museum design in understanding the significance of the wayfinding system within museum spaces by highlighting its essential aspects using a clear analytical method.

Keywords: wayfinding system, museum journey, intelligibility, integration, connectivity

Procedia PDF Downloads 162
2669 Micro-Rest: Extremely Short Breaks in Post-Learning Interference Support Memory Retention over the Long Term

Authors: R. Marhenke, M. Martini

Abstract:

The distraction of attentional resources after learning hinders long-term memory consolidation compared to several minutes of post-encoding inactivity in the form of wakeful resting. We tested whether an 8-minute period of wakeful resting, compared to performing an adapted version of the d2 test of attention after learning, supports memory retention. Participants encoded and immediately recalled a word list, followed by either an 8-minute period of wakeful resting (eyes closed, relaxed) or an adapted version of the d2 test of attention (scanning and selecting specific characters while ignoring others). At the end of the experimental session (after 12-24 min) and again after 7 days, participants were required to complete a surprise free recall test of both word lists. Our results showed no significant difference in memory retention between the experimental conditions. However, we found that participants who completed the first lines of the d2 test in less than the given time limit of 20 seconds, and thus had short unfilled intervals before switching to the next test line, remembered more words over the 12-24 minute and the 7-day retention intervals than participants who did not complete the first lines. This interaction occurred only for the first test lines, which have the highest temporal proximity to the encoding task, and not for later test lines. Differences in retention scores between groups (completed first line vs. did not complete) seem to be largely independent of general performance in the d2 test. Implications and limitations of these exploratory findings are discussed.

Keywords: long-term memory, retroactive interference, attention, forgetting

Procedia PDF Downloads 121
2668 Agile Smartphone Porting and App Integration of Signal Processing Algorithms Obtained through Rapid Development

Authors: Marvin Chibuzo Offiah, Susanne Rosenthal, Markus Borschbach

Abstract:

Certain research projects in Computer Science often involve research on existing signal processing algorithms and the development of improvements to them. Research budgets are usually limited, hence there is limited time for implementing the algorithms from scratch. It is therefore common practice to use implementations provided by other researchers as a template. These are most commonly provided in a rapid development, i.e. 4th generation, programming language, usually Matlab. Rapid development is a common method in Computer Science research for quickly implementing and testing newly developed algorithms, which is also a common task within agile project organization. The growing relevance of mobile devices in the computer market also gives rise to the need to demonstrate the successful executability and to measure the performance of these algorithms on a mobile device operating system and processor, particularly on a smartphone. Open mobile systems, such as Android, are most suitable for this task, which should be performed as efficiently as possible. Furthermore, efficiently implementing an interaction between the algorithm and a graphical user interface (GUI) that runs exclusively on the mobile device is necessary in cases where the project’s goal statement also includes such a task. This paper examines different proposed solutions for porting computer algorithms obtained through rapid development into a GUI-based smartphone Android app and evaluates their feasibility. Accordingly, the feasible methods are tested and a short success report is given for each tested method.

Keywords: SMARTNAVI, Smartphone, App, Programming languages, Rapid Development, MATLAB, Octave, C/C++, Java, Android, NDK, SDK, Linux, Ubuntu, Emulation, GUI

Procedia PDF Downloads 474
2667 A Comprehensive Review of Artificial Intelligence Applications in Sustainable Building

Authors: Yazan Al-Kofahi, Jamal Alqawasmi

Abstract:

In this study, a comprehensive systematic literature review (SLR) was conducted, with the main goal of assessing the existing literature on how artificial intelligence (AI), machine learning (ML), and deep learning (DL) models are used in sustainable architecture applications and issues, including thermal comfort satisfaction, energy efficiency, cost prediction, and many other issues. For this reason, the search strategy was initiated using different databases, including Scopus, Springer, and Google Scholar. The inclusion criteria were applied using two search strings related to DL, ML, and sustainable architecture. Moreover, the timeframe for the inclusion of papers was open, even though most of the included papers were published in the previous four years. As a paper filtration strategy, conferences and books were excluded from the database search results. Using these inclusion and exclusion criteria, the search was conducted, and a sample of 59 papers was selected for the final analysis. The data extraction phase consisted of extracting the needed data from these papers, which were then analyzed and correlated. The results of this SLR showed that there are many applications of ML and DL in sustainable buildings and that this topic is currently attracting considerable attention. It was found that most of the papers focused their discussions on addressing environmental sustainability issues and factors using machine learning predictive models, with a particular emphasis on the use of decision tree algorithms. Moreover, it was found that the Random Forest regressor demonstrates strong performance across all feature selection groups in terms of building cost prediction as a machine-learning predictive model.

Keywords: machine learning, deep learning, artificial intelligence, sustainable building

Procedia PDF Downloads 55
2666 Chatbots as Language Teaching Tools for L2 English Learners

Authors: Feiying Wu

Abstract:

Chatbots are computer programs that attempt to engage a human in a dialogue; they originated in the 1960s with MIT's ELIZA. However, they have become widespread more recently as advances in language technology have produced chatbots of increasing linguistic quality and sophistication, giving them the potential to serve as tools for Computer-Assisted Language Learning (CALL). The aim of this article is to assess the feasibility of using two chatbots, Mitsuku and CleverBot, as pedagogical tools for learning English as a second language by simulating L2 learners with distinct English proficiencies. The input of the simulated learners is measured with AntWordProfiler to match the user's expected vocabulary proficiency. In total, there are four chat sessions, as each chatbot converses with both beginner and advanced learners. The evaluation focuses on the chatbots' responses from a linguistic standpoint, encompassing the vocabulary and sentence levels. The vocabulary level is determined by the vocabulary range and the reaction to misspelled words. Grammatical accuracy and responsiveness to poorly formed sentences are assessed at the sentence level. In addition, the assessment sets 25% of the input as lexically and grammatically incorrect to determine the chatbots' corrective ability towards different linguistic forms. Based on statistical evidence and illustrative examples, and despite the small sample size, neither Mitsuku nor CleverBot is ideal as an educational tool, given their performance in word range, grammatical accuracy, topic range, and corrective feedback for incorrect words and sentences; they serve rather as conversational tools for beginners of L2 English.

Keywords: chatbots, CALL, L2, corrective feedback

Procedia PDF Downloads 67
2665 Implementation of a Program of Orientation for Travel Nursing Staff Based on Nurse-Identified Learning Needs

Authors: Olga C. Rodrigue

Abstract:

Long-term care and skilled nursing facilities experience ebbs and flows of nursing staffing, a problem compounded by the perception of the facilities as undesirable workplaces and competition for staff from other healthcare entities. Travel nurses are contracted to fill staffing needs due to increased admissions, increased and unexpected attrition of nurses, or facility expansion of services. Prior to beginning the contracted assignment, the travel nurse must meet industry, company, and regulatory requirements (The Joint Commission and CMS) for skills and knowledge. Travel nurses, however, inconsistently receive the pre-assignment orientation needed to work at the contracted facility, if any information is given at all. When performance expectations are not met, travel nurses may subsequently choose to leave the position without completing the terms of the contract, and some facilities may choose to terminate the contract prior to the expected end date. The overarching goal of the Doctor of Nursing Practice evidence-based practice improvement project is to provide travel nurses with the basic and necessary information to prepare them to begin a long-term and skilled nursing assignment. The project involves the identification of travel nurse learning needs through a survey and the development and provision of web-based learning modules to address those needs prior to arrival for a long-term and skilled nursing assignment.

Keywords: nurse staffing, travel nurse, travel staff, contract staff, contracted assignment, long-term care, skilled nursing, onboarding, orientation, staff development, supplemental staff

Procedia PDF Downloads 155
2664 Testing of Protective Coatings on Automotive Steel: A Correlation Between Salt Spray, Electrochemical Impedance Spectroscopy, and Linear Polarization Resistance Tests

Authors: Dhanashree Aole, V. Hariharan, Swati Surushe

Abstract:

Corrosion can cause serious and expensive damage to automobile components. Various proven techniques for controlling and preventing corrosion depend on the specific material to be protected. Electrochemical Impedance Spectroscopy (EIS) and salt spray tests are commonly used to assess the corrosion degradation mechanism of coatings on metallic surfaces, while the only test that monitors the corrosion rate in real time is Linear Polarization Resistance (LPR). In this study, electrochemical tests (EIS and LPR) and the salt spray test are reviewed to assess the corrosion resistance and durability of different coatings. The main objective of this study is to correlate the test results obtained using linear polarization resistance (LPR) and Electrochemical Impedance Spectroscopy (EIS) with the results obtained using the standard salt spray test. Another objective of this work is to evaluate the performance of various coating systems (CED, epoxy, powder coating, autophoretic, and Zn-trivalent coating) for vehicle underbody application, and the corrosion resistance of these coatings is assessed. From this study, a promising correlation between the different corrosion testing techniques is noted. The most profound observation is that electrochemical tests give a quick estimation of corrosion resistance and can detect the degradation of coatings well before visible signs of damage appear. Furthermore, the corrosion resistance and salt spray life of the coatings investigated were found to follow the order CED > powder coating > autophoretic > epoxy coating > Zn-trivalent plating.

Keywords: Linear Polarization Resistance (LPR), Electrochemical Impedance Spectroscopy (EIS), salt spray test, sacrificial and barrier coatings

Procedia PDF Downloads 514
2663 Inflammatory and Cardio Hypertrophic Remodeling Biomarkers in Patients with Fabry Disease

Authors: Margarita Ivanova, Julia Dao, Andrew Friedman, Neil Kasaci, Rekha Gopal, Ozlem Goker-Alpan

Abstract:

In Fabry disease (FD), α-galactosidase A (α-Gal A) deficiency leads to the accumulation of globotriaosylceramide (Lyso-Gb3 and Gb3), triggering a pathologic cascade that causes severe organ damage. The heart is one of several organs with high sensitivity to α-Gal A deficiency. A subgroup of patients with significant residual α-Gal A activity and primary cardiac involvement is occasionally referred to as the “cardiac variant.” Cardiovascular complications are the most frequently encountered, contribute substantially to morbidity, and are the leading cause of premature death in male and female patients with FD. The deposition of Lyso-Gb3 and Gb3 within the myocardium affects cardiac function, with resultant progressive cardiovascular pathology. Gb3 and Lyso-Gb3 accumulation at the cellular level triggers a cascade of events leading to end-stage fibrosis. In cardiac tissue, Lyso-Gb3 deposition is associated with increased release of inflammatory factors and transforming growth factors. Infiltration of lymphocytes and macrophages into endomyocardial tissue indicates that inflammation plays a significant role in cardiac damage. Moreover, accumulated data suggest that chronic inflammation leads to multisystemic FD pathology even under enzyme replacement therapy (ERT). NF-κB activation plays a subsequent role in the inflammatory response to cardiac dysfunction and advanced heart failure in the general population. TNF-α/NF-κB signaling protects the myocardium, an effect evoked by ischemic preconditioning; however, this protective effect depends on the concentration of TNF-α. Thus, we hypothesize that TNF-α is a critical factor in determining the grade of cardiac pathology. Cardiac hypertrophy corresponds to the expansion of the coronary vasculature to maintain a sufficient supply of nutrients and oxygen. Coronary activation of angiogenesis and fibrosis plays a vital role in cardiac vascularization, hypertrophy, and tissue remodeling. We suggest that the interaction between the inflammatory pathways and cardiac vascularization is a bi-directional process controlled by secreted cytokines and growth factors. The coordination of these two processes has never been explored in FD. In a cohort of 40 patients with FD, biomarkers associated with inflammation and cardio hypertrophic remodeling were studied. FD patients were categorized into three groups based on LVmass/DSA, LVEF, and ECG abnormalities: FD with no cardiac complications, FD with moderate cardiac complications, and FD with severe cardiac complications. Serum levels of NF-κB, TNF-α, IL-6, IL-2, MCP-1, IFN-γ, VEGF, IGF-1, TGF-β, and FGF2 were quantified by enzyme-linked immunosorbent assays (ELISA). Among the biomarkers, MCP-1, IFN-γ, VEGF, TNF-α, and TGF-β were elevated in FD patients. Some of these biomarkers also have the potential to correlate with cardiac pathology in FD. Conclusion: The study provides information about the role of inflammatory pathways and biomarkers of cardio hypertrophic remodeling in FD patients. This study will also reveal the mechanisms that link intracellular accumulation of Lyso-Gb3 and Gb3 to the development of cardiomyopathy with myocardial thickening and resultant fibrosis.

Keywords: biomarkers, Fabry disease, inflammation, growth factors

Procedia PDF Downloads 75
2662 Phorbol 12-Myristate 13-Acetate (PMA)-Differentiated THP-1 Monocytes as a Validated Microglial-Like Model in Vitro

Authors: Amelia J. McFarland, Andrew K. Davey, Shailendra Anoopkumar-Dukie

Abstract:

Microglia are the resident macrophage population of the central nervous system (CNS), contributing to both innate and adaptive immune responses and to brain homeostasis. Activation of microglia occurs in response to a multitude of pathogenic stimuli in their microenvironment; this induces morphological and functional changes, resulting in a state of acute neuroinflammation which facilitates injury resolution. Adequate microglial function is essential for the health of the neuroparenchyma, with microglial dysfunction implicated in numerous CNS pathologies. Given the critical role that these macrophage-derived cells play in CNS homeostasis, there is a high demand for microglial models suitable for use in neuroscience research. The isolation of primary human microglia, however, is both difficult and costly, with microglial activation an unwanted but inevitable result of the extraction process. Consequently, there is a need for the development of alternative experimental models which exhibit the morphological, biochemical and functional characteristics of human microglia without the difficulties associated with primary cell lines. In this study, our aim was to evaluate whether THP-1 human peripheral blood monocytes would display microglial-like qualities following an induced differentiation and, therefore, be suitable for use as surrogate microglia. To achieve this aim, THP-1 human peripheral blood monocytes from acute monocytic leukaemia were differentiated with a range of phorbol 12-myristate 13-acetate (PMA) concentrations (50-200 nM) using two different protocols: a 5-day continuous PMA exposure or a 3-day continuous PMA exposure followed by a 5-day rest in normal media. In each protocol and at each PMA concentration, microglial-like cell morphology was assessed through crystal violet staining and the presence of the CD-14 microglial/macrophage cell surface marker. Lipopolysaccharide (LPS) from Escherichia coli (O55:B5) was then added at a range of concentrations from 0-10 mcg/mL to activate the PMA-differentiated THP-1 cells. Functional microglial-like behavior was evaluated by quantifying the release of prostaglandin (PG)-E2 and the pro-inflammatory cytokines interleukin (IL)-1β and tumour necrosis factor (TNF)-α using mediator-specific ELISAs. Furthermore, production of global reactive oxygen species (ROS) and nitric oxide (NO) was determined fluorometrically using dichlorodihydrofluorescein diacetate (DCFH-DA) and diaminofluorescein diacetate (DAF-2-DA), respectively. Following PMA treatment, it was observed that both differentiation protocols resulted in cells displaying distinct microglial morphology from 10 nM PMA. Activation of differentiated cells using LPS significantly augmented IL-1β, TNF-α and PGE2 release at all LPS concentrations under both differentiation protocols. Similarly, a significant increase in DCFH-DA and DAF-2-DA fluorescence was observed, indicative of increases in ROS and NO production. For all endpoints, the 5-day continuous PMA treatment protocol yielded significantly higher mediator levels than the 3-day treatment and 5-day rest protocol. Our data, therefore, suggest that the differentiation of THP-1 human monocyte cells with PMA yields a homogenous microglial-like population which, following stimulation with LPS, undergoes activation to release a range of pro-inflammatory mediators associated with microglial activation. Thus, the use of PMA-differentiated THP-1 cells represents a suitable microglial model for in vitro research.

Keywords: differentiation, lipopolysaccharide, microglia, monocyte, neuroscience, THP-1

Procedia PDF Downloads 376
2661 A Study on the Treatment of Municipal Waste Water Using Sequencing Batch Reactor

Authors: Bhaven N. Tandel, Athira Rajeev

Abstract:

The sequencing batch reactor process is a suspended growth process operating under non-steady-state conditions, which utilizes a fill-and-draw reactor with complete mixing during the batch reaction step (after filling) and in which the subsequent steps of aeration and clarification occur in the same tank. All sequencing batch reactor systems have five steps in common, which are carried out in the following sequence: (1) fill, (2) react, (3) settle (sedimentation/clarification), (4) draw (decant), and (5) idle. The study was carried out in a sequencing batch reactor of dimensions 44 cm x 30 cm x 70 cm with a working volume of 40 L. A mechanical stirrer at 100 rpm was used to provide continuous mixing during the react period, and oxygen was supplied by fish tank aerators. The duration of a complete cycle of the sequencing batch reactor was 8 hours. The cycle was divided into phases in the following sequence: 0.25 hours fill, 6 hours react, 1 hour settling, 0.5 hours decant, and 0.25 hours idle. The study consisted of two runs, run 1 and run 2. Run 1 consisted of a 6-hour aerobic react period, and run 2 consisted of a 3-hour aerobic react period followed by a 3-hour anoxic react period. The influent wastewater used for the study had COD, BOD, NH3-N and TKN concentrations of 308.03±48.94 mg/L, 100.36±22.05 mg/L, 14.12±1.18 mg/L, and 24.72±2.21 mg/L, respectively. Run 1 had an average COD removal efficiency of 41.28%, BOD removal efficiency of 56.25%, NH3-N removal efficiency of 86.19%, and TKN removal efficiency of 54.4%. Run 2 had an average COD removal efficiency of 63.19%, BOD removal efficiency of 73.85%, NH3-N removal efficiency of 90.74%, and TKN removal efficiency of 65.25%. It was observed that run 2 gave better performance than run 1 in the removal of COD, BOD, and TKN.
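For reference, the removal efficiencies quoted above follow the standard relation efficiency = (Cin - Cout) / Cin x 100. A minimal sketch is shown below, using assumed influent and effluent values rather than the study's measurements.

# Minimal sketch: percentage removal efficiency for a treatment parameter.
def removal_efficiency(influent_mg_l: float, effluent_mg_l: float) -> float:
    """Return percent removal, (Cin - Cout) / Cin * 100."""
    return (influent_mg_l - effluent_mg_l) / influent_mg_l * 100.0

# Assumed example values (not the study's measurements):
cod_in, cod_out = 308.0, 113.4
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.1f}%")  # ~63.2%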

Keywords: municipal waste water, aerobic, anoxic, sequencing batch reactor

Procedia PDF Downloads 533
2660 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to the increase in the mortality rate in Malaysia each year. There are two main categories of leukaemia: acute and chronic leukaemia. The production and development of acute leukaemia cells occur rapidly and uncontrollably. Therefore, if the identification of acute leukaemia cells could be done quickly and effectively, proper treatment and medicine could be delivered. Due to the requirement for prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain a fully segmented abnormal white blood cell (blast) in acute leukaemia images. In order to obtain the segmented blast, three clustering algorithms, namely k-means, fuzzy c-means, and moving k-means, were applied to the saturation component image. Then, a median filter and a seeded region growing area extraction algorithm were applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. Comparisons among the three clustering algorithms were made in order to measure the performance of each clustering algorithm in segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produced a fully segmented blast region in acute leukaemia images. Hence, the resultant images could be helpful to haematologists for further analysis of acute leukaemia.
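A minimal sketch of the clustering step described above is given below, using scikit-image and scikit-learn as stand-ins; the file name, the number of clusters, and the post-processing choices are assumptions, not the authors' exact settings.

# Illustrative sketch: k-means pixel clustering on the saturation component
# of a blood smear image, followed by median filtering. Parameters are assumed.
import numpy as np
from skimage import io, color, filters
from sklearn.cluster import KMeans

image = io.imread("acute_leukaemia_sample.png")[:, :, :3]   # hypothetical file
saturation = color.rgb2hsv(image)[:, :, 1]                  # saturation component

k = 3  # assumed number of clusters (background, cytoplasm, blast/nucleus)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    saturation.reshape(-1, 1)
).reshape(saturation.shape)

# Take the cluster with the highest mean saturation as the candidate blast region,
# then smooth it with a median filter.
blast_cluster = max(range(k), key=lambda c: saturation[labels == c].mean())
blast_mask = filters.median((labels == blast_cluster).astype(np.uint8))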

Keywords: acute leukaemia images, clustering algorithms, image segmentation, moving k-means

Procedia PDF Downloads 284
2659 Use of Magnetically Separable Molecular Imprinted Polymers for Determination of Pesticides in Food Samples

Authors: Sabir Khan, Sajjad Hussain, Ademar Wong, Maria Del Pilar Taboada Sotomayor

Abstract:

The present work aims to develop magnetic molecularly imprinted polymers (MMIPs) for the determination of a selected pesticide (ametryne) using high-performance liquid chromatography (HPLC). Computational simulation can assist in the choice of the most suitable monomer for the synthesis of the polymers. The MMIPs were polymerized at the surface of Fe3O4@SiO2 magnetic nanoparticles (MNPs) using 2-vinylpyridine as the functional monomer, ethylene glycol dimethacrylate (EGDMA) as the cross-linking agent, and 2,2'-azobisisobutyronitrile (AIBN) as the radical initiator. A magnetic non-molecularly imprinted polymer (MNIP) was also prepared under the same conditions without the analyte. The MMIPs were characterized by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis, and Fourier transform infrared spectroscopy (FTIR). Pseudo-first-order and pseudo-second-order models were applied to study the adsorption kinetics, and it was found that the adsorption process followed the pseudo-first-order kinetic model. The adsorption equilibrium data were fitted to the Freundlich and Langmuir isotherms, and the sorption equilibrium process was well described by the Langmuir isotherm model. The selectivity coefficients (α) of the MMIPs for ametryne with respect to atrazine, ciprofloxacin, and folic acid were 4.28, 12.32, and 14.53, respectively. Spiked recoveries ranging between 91.33% and 106.80% were obtained. The results showed the high affinity and selectivity of the MMIPs for the pesticide ametryne in food samples.
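To illustrate the isotherm fitting mentioned above, the following sketch fits the Langmuir model q = qmax*KL*C / (1 + KL*C) to assumed equilibrium data with SciPy; the data points and initial guesses are hypothetical, not the study's measurements.

# Illustrative sketch: fitting the Langmuir isotherm q = qmax*K*C / (1 + K*C)
# to assumed adsorption equilibrium data (values are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    return q_max * k * c / (1.0 + k * c)

c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc. (mg/L)
q_eq = np.array([3.1, 5.6, 9.0, 14.2, 17.0, 18.8])  # adsorbed amount (mg/g)

(q_max, k), _ = curve_fit(langmuir, c_eq, q_eq, p0=[20.0, 0.1])
print(f"q_max = {q_max:.1f} mg/g, K_L = {k:.3f} L/mg")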

Keywords: molecularly imprinted polymer, pesticides, magnetic nanoparticles, adsorption

Procedia PDF Downloads 459
2658 Determination of Performances of Some Mulberry (Morus spp.) Species Selected from Different Places of Turkey under Kahramanmaras Conditions

Authors: Muruvvet Ilgin, Ilknur Agca

Abstract:

Common mulberry (Morus laevigata Wall.) and purple mulberry (Morus rubra L.) species selected from different regions of Turkey were used as material in order to determine their performance. Phenological observations, pomological analyses (fruit size, fruit weight, fruit stalk length, acidity, and TSS (total soluble solids)), and phytochemical properties (organic acids (oxalic acid, succinic acid, citric acid, fumaric acid, and malic acid), vitamin C (ascorbic acid), total phenolics, and antioxidant capacity) of the mulberries were determined. Phenological observations of seven different periods were also recorded. Fruit weight values varied between 3.48 and 4.26 g. TSS contents ranged from 14.36 to 21.30%, and fruit acidity was determined to be between 0.29 and 2.02%. The amounts of ascorbic acid of the finger mulberry (Morus laevigata Wall.) and purple mulberry (Morus rubra L.) species were identified as 35.60% and 363.28%, respectively. The highest total phenolic content, 934.80 mg/100 g, belonged to the finger mulberry genotype P1, whereas the lowest, 278.70 mg/100 g, belonged to a purple mulberry genotype. The FRAP and TEAC methods were used for the determination of antioxidant capacity, with values of 0.58-22.65 micromol TE/kg and 20.34-31.6 micromol TE/kg, respectively. Total phenolic content and antioxidant capacity strongly depend on fruit color intensity, with a positive correlation. The obtained results are considered important as a source for future pharmacological studies and for pomological and breeding programs.

Keywords: mulberry, phenology, phytochemical property, pomology

Procedia PDF Downloads 220
2657 Solar Powered Front Wheel Drive (FWD) Electric Trike: An Innovation

Authors: Michael C. Barbecho, Romeo B. Morcilla

Abstract:

This study focused on the development of a solar-powered front wheel drive electric trike for personal use and short-distance travel, utilizing solar power and a variable speed transmission to adapt to places where varying road grades and the unavailability of plug-in charging stations are major problems. The actual performance of the vehicle was measured in terms of the duration of charging using solar power, distance travelled and battery power duration, top speed developed at full power, and load capacity. The project followed the research and development process, which involved planning, designing, construction, and testing. Solar charging tests revealed that the vehicle requires 6 to 8 hours of sunlight exposure to fully charge the batteries. At full charge, the vehicle can travel 35 km, utilizing battery power down to 42%. The vehicle showed a top speed of 25 kph at 0 to 3% road grade while carrying a maximum load of 122 kg. The maximum climbing grade was 23%, with the vehicle carrying a maximum load of 122 kg. Technically, the project was feasible and can be a potential model for the possible conversion of traditional Philippine-made “pedicabs” and gasoline-engine-powered tricycles into modern electric vehicles. Moreover, it has several technical features and advantages over a commercialized electric vehicle, such as the use of a solar charging system, a variable speed power transmission, and a front-drive power train for adaptability to any road gradient.

Keywords: electric vehicle, solar vehicles, front drive, solar, solar power

Procedia PDF Downloads 562
2656 Comparative Analysis between Different Proposed Responsive Facade Designs for Reducing the Solar Radiation on the West Facade in the Hot Arid Region

Authors: Merna Ibrahim

Abstract:

Designing buildings that are sustainable and can control and reduce the solar radiation penetrating through the building facades is an important architectural turn. One of the most important methods of saving energy in a building is carefully designing its facade. A building’s facade is one of the most significant contributors to the energy budget, as well as to the comfort parameters of a building. Responsive architecture adapts to the surrounding environment, causing alterations in the envelope configuration so that it performs in a more effective way. One of the objectives of responsive facades is to protect the building’s users from the external environment and to achieve a comfortable indoor environment. Solar radiation is one of the aspects that affect the comfort of the indoor environment, and it also affects the energy consumed by the HVAC systems for maintaining comfortable indoor conditions. The aim of this paper is to introduce and compare four different proposed responsive facade designs in terms of solar radiation reduction on the west facade of a building located in the hot arid region. In addition, the paper highlights the amount of solar radiation reduced by each proposed responsive facade on the west facade. At the end of the paper, a proposal is introduced which combines the four different axes of movement and reduces the solar radiation the most. Moreover, the paper highlights the definition and aim of responsive architecture, focusing on the solar radiation aspect in hot arid zones. Besides, the paper analyzes an international responsive facade building in Essen, Germany, focusing on the type of responsive facades, the angle of rotation, the mechanism of movement, and the effect of the responsive facades on the building’s performance.

Keywords: kinetic facades, mechanism of movement, responsive architecture, solar radiation

Procedia PDF Downloads 148
2655 Development of a Wall Climbing Robotic Ground Penetrating Radar System for Inspection of Vertical Concrete Structures

Authors: Md Omar Faruq Howlader, Tariq Pervez Sattar, Sandra Dudley

Abstract:

This paper describes the design process of a 200 MHz Ground Penetrating Radar (GPR) and a battery-powered vertical concrete surface climbing mobile robot. The key design feature is a miniaturized 200 MHz dipole antenna using additional radiating arms; this procedure achieves a 40% reduction in length compared to a conventional antenna. The antenna set is mounted in front of the robot using a servo mechanism for folding and unfolding purposes. The robot’s adhesion mechanism for climbing the reinforced concrete wall is based on neodymium permanent magnets arranged in a unique combination to concentrate and maximize the magnetic flux and provide sufficient adhesion force for GPR installation. The experiments demonstrated the robot’s capability of climbing a reinforced concrete wall while carrying the attached prototype GPR system and performing floor-to-wall transitions and vice versa. The developed GPR’s performance is validated by its capability of detecting and localizing an aluminium sheet and a reinforcement bar (rebar) of 12 mm diameter buried under a test rig built of wood to mimic the concrete structure environment. The present robotic GPR system demonstrates the feasibility of undertaking inspection procedures on large concrete structures in hazardous environments that may not be accessible to human inspectors.

Keywords: climbing robot, dipole antenna, ground penetrating radar (GPR), mobile robots, robotic GPR

Procedia PDF Downloads 266
2654 The DAQ Debugger for iFDAQ of the COMPASS Experiment

Authors: Y. Bai, M. Bodlak, V. Frolov, S. Huber, V. Jary, I. Konorov, D. Levit, J. Novy, D. Steffen, O. Subrt, M. Virius

Abstract:

In general, state-of-the-art Data Acquisition Systems (DAQ) in high energy physics experiments must satisfy high requirements in terms of reliability, efficiency, and data rate capability. This paper presents the development and deployment of a debugging tool named DAQ Debugger for the intelligent, FPGA-based Data Acquisition System (iFDAQ) of the COMPASS experiment at CERN. Utilizing a hardware event builder, the iFDAQ is designed to be able to read out data at the experiment's average maximum rate of 1.5 GB/s. In complex software such as the iFDAQ, with thousands of lines of code, the debugging process is absolutely essential to reveal all software issues. Unfortunately, conventional debugging of the iFDAQ is not possible during real data taking. The DAQ Debugger is a tool for identifying a problem, isolating the source of the problem, and then either correcting the problem or determining a way to work around it. It provides a layer for easy integration into any process and has no impact on process performance. Based on the handling of system signals, the DAQ Debugger represents an alternative to conventional debuggers provided by most integrated development environments. Whenever a problem occurs, it generates reports containing all the information needed for a deeper investigation and analysis. The DAQ Debugger was fully incorporated into all processes in the iFDAQ during the 2016 run. It helped to reveal remaining software issues and significantly improved the stability of the system in comparison with the previous run. In the paper, we present the DAQ Debugger from several perspectives and discuss it in detail.
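The DAQ Debugger itself is a C++/Qt tool, but the underlying idea of signal-based diagnostics can be sketched in a few lines of Python: register handlers for system signals and dump a report when one arrives. The chosen signals, report format, and file name below are illustrative assumptions, not the DAQ Debugger's actual behaviour.

# Toy sketch of signal-based diagnostics (not the actual DAQ Debugger code):
# install handlers that append a stack-trace report whenever a signal arrives.
import signal
import traceback
import datetime

def write_report(signum, frame):
    report = (
        f"--- diagnostic report {datetime.datetime.now().isoformat()} ---\n"
        f"signal: {signal.Signals(signum).name}\n"
        + "".join(traceback.format_stack(frame))
    )
    with open("daq_debug_report.txt", "a") as fh:   # hypothetical report file
        fh.write(report + "\n")

# Register a few catchable signals; a real DAQ process would add more.
for sig in (signal.SIGTERM, signal.SIGUSR1, signal.SIGINT):
    signal.signal(sig, write_report)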

Keywords: DAQ Debugger, data acquisition system, FPGA, system signals, Qt framework

Procedia PDF Downloads 275
2653 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, image analytics, etc. Much of the data in the field is well structured and available in numerical or categorical formats which can be used for experiments directly. But on the opposite end of the spectrum, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature; it can be found in the form of discharge summaries, clinical notes, and procedural notes, which are in human-written narrative format and have neither a relational model nor any standard grammatical structure. An important step in the utilization of these texts for such studies is to transform and process the data to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map in this paper, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its comparative performance with MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
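As a toy illustration of dictionary-indexed string matching of the kind described (and not Q-Map's actual algorithm), the sketch below looks up curated concept terms in a clinical note; the concept dictionary, codes, and note text are invented.

# Toy sketch of concept retrieval via a term dictionary indexed by first token.
# The dictionary, codes, and note are invented; this is not Q-Map's algorithm.
from collections import defaultdict

concepts = {
    "myocardial infarction": "C0027051",
    "type 2 diabetes mellitus": "C0011860",
    "chest pain": "C0008031",
}

# Index curated terms by their first token for fast candidate lookup.
index = defaultdict(list)
for term, code in concepts.items():
    index[term.split()[0]].append((term.split(), code))

def extract(note: str):
    tokens = [t.strip(".,;:") for t in note.lower().split()]
    hits = []
    for i, tok in enumerate(tokens):
        for term_tokens, code in index.get(tok, []):
            if tokens[i:i + len(term_tokens)] == term_tokens:
                hits.append((" ".join(term_tokens), code))
    return hits

note = "Patient admitted with chest pain, history of type 2 diabetes mellitus."
print(extract(note))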

Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics

Procedia PDF Downloads 124
2652 Artificial Intelligence Based Abnormality Detection System and Real Valuᵀᴹ Product Design

Authors: Junbeom Lee, Jaehyuck Cho, Wookyeong Jeong, Jonghan Won, Jungmin Hwang, Youngseok Song, Taikyeong Jeong

Abstract:

This paper investigates and analyzes meta-learning technologies that use multiple cameras to monitor and check abnormal behavior in people in real time in the healthcare field. Advances in artificial intelligence and computer vision technologies have confirmed that cameras can be useful for individual health monitoring and abnormal behavior detection. Through this, it is possible to establish a system that can respond early by automatically detecting the abnormal behavior of vulnerable individuals, such as patients and the elderly. In this paper, we use a technique called meta-learning to analyze image data collected from cameras and develop a commercial product to determine abnormal behavior. Meta-learning applies machine learning algorithms to help systems learn and adapt quickly to new real data. Through this, the accuracy and reliability of the abnormal behavior discrimination system can be improved. In addition, this study proposes a meta-learning-based abnormal behavior detection system that includes steps such as data collection and preprocessing, feature extraction and selection, and classification model development. Various healthcare scenarios and experiments are used to analyze the performance of the proposed system and demonstrate its advantages over other existing methods. Through this study, we present the possibility that camera-based meta-learning technology can be useful for monitoring and detecting abnormal behavior in the healthcare area.

Keywords: artificial intelligence, abnormal behavior, early detection, health monitoring

Procedia PDF Downloads 78
2651 Prevention of Green Gentrification: The Case of the Sustainable Urban Policy in Paris

Authors: Elise Machline

Abstract:

In the late 1980s, sustainable urban development emerged in Europe. Sustainable neighborhoods are one attempt to implement sustainable urban energy planning in the city. Thus, for twenty years, projects of sustainable neighborhoods (or ‘eco-neighborhoods’) have emerged in Europe. Debates about sustainability no longer restrict it to environmental concerns (limiting greenhouse gas emissions), but rather extend it to the economic and social dimensions. A growing number of empirical studies demonstrate that sustainable urbanism yields rental/sale premia, as well as higher occupancy rates and thus higher asset values. For example, European eco-neighborhood projects usually focus on the middle to upper classes, given the costs involved in renting or buying the dwellings built in such projects. As a result, sustainable residential buildings are not affordable, and their construction tends to have a gentrifying effect. An increasing number of countries are institutionalizing green strategies for affordable housing. In France, the sustainable neighborhoods (‘ecoquartiers’) must meet environmental performance criteria, have a potential for economic development, and provide social and functional diversity. The issue of social diversity through the provision of affordable housing has emerged as a dimension of public housing policies. Thus, the ecoquartier residential buildings must be both energy efficient and affordable. Through the Parisian example, our study considers how the concept of social diversity and other elements of sustainability are illustrated in the ecoquartiers and whether the authorities have been able to avoid gentrification when implementing a sustainable urban policy.

Keywords: sustainable neighborhoods, social diversity, social housing policies, green buildings

Procedia PDF Downloads 345
2650 Behavioral Patterns of Adopting Digitalized Services (E-Sport versus Sports Spectating) Using Agent-Based Modeling

Authors: Justyna P. Majewska, Szymon M. Truskolaski

Abstract:

The growing importance of digitalized services in the so-called new economy, including the e-sports industry, has been observable recently. Various demographic and technological changes lead consumers to modify their needs, not regarding the services themselves but the method of their application (attracting customers, forms of payment, new content, etc.). In the case of leisure related to competitive spectating activities, there is a growing need to participate in events whose content is not sports competition but computer game competition: e-sport. The literature in this area so far focuses on determining the number of e-sport fans with elements of simple statistical description (mainly concerning demographic characteristics such as age, gender, and place of residence). Meanwhile, the development of the industry is influenced by a combination of many different, intertwined demographic, personality, and psychosocial characteristics of customers, as well as the characteristics of their environment. Therefore, there is a need for deeper recognition of the determinants of customers' behavioral patterns when selecting digitalized services, which, in the absence of available large data sets, can be achieved by using econometric simulations: multi-agent modeling. The cognitive aim of the study is to reveal internal and external determinants of behavioral patterns of customers, taking into account various variants of economic development (the pace of digitization and technological development, socio-demographic changes, etc.). In the paper, an agent-based model with heterogeneous agents (characteristics of customers themselves and their environment) was developed, which allowed identifying a three-stage development scenario: i) initial interest, ii) standardization, and iii) full professionalization. The probabilities regarding the transition process were estimated using the Method of Simulated Moments. The estimation of the agent-based model parameters and the sensitivity analysis reveal crucial factors that have driven the rising trend in e-sport spectating and, in a wider perspective, the development of digitalized services. Among the psychosocial characteristics of customers, these are the level of familiarity with the rules of games as well as sports disciplines, active and passive participation history, and the individual perception of challenging activities. Environmental factors include the general reception of games, the number and level of recognition of community builders, and the level of technological development of streaming as well as community-building platforms. However, the crucial factor underlying the good predictive power of the model is the level of professionalization. In the initial interest phase, the entry barriers for new customers are high. They decrease during the standardization phase and increase again in the full professionalization phase, when new customers perceive participation history as inaccessible. In this case, customers are prone to switch to new methods of service application: in the case of e-sport vs. sports, to new content and more modern methods of its delivery. In a wider context, the findings in the paper support the idea of a life cycle of services regarding the methods of their application, from “traditional” to digitalized.
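A deliberately simplified, assumption-laden sketch of such an agent-based setup is shown below; the agent attributes, barrier values, and adoption rule are illustrative only and do not reproduce the estimated model.

# Toy agent-based sketch: heterogeneous agents adopt a digitalized service across
# three development phases with assumed entry barriers. Not the paper's model.
import random

PHASES = ["initial_interest", "standardization", "full_professionalization"]

class SpectatorAgent:
    def __init__(self):
        self.familiarity = random.random()   # familiarity with game/sport rules (assumed)
        self.history = random.random()       # active/passive participation history (assumed)
        self.adopted = False

def entry_barrier(phase):
    # Assumed barrier profile: high, then low, then high again.
    return {"initial_interest": 0.7, "standardization": 0.3, "full_professionalization": 0.6}[phase]

def step(agents, phase):
    for a in agents:
        if not a.adopted:
            score = 0.5 * a.familiarity + 0.5 * a.history
            if score > entry_barrier(phase):
                a.adopted = True

agents = [SpectatorAgent() for _ in range(10_000)]
for phase in PHASES:
    step(agents, phase)
    share = sum(a.adopted for a in agents) / len(agents)
    print(f"{phase}: {share:.1%} adopters")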

Keywords: agent-based modeling, digitalized services, e-sport, spectators motives

Procedia PDF Downloads 163
2649 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea

Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal

Abstract:

Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are the common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings, and other property constitutes the collateral episodes. Appropriate planning must be carried out in advance, with a view to safeguarding people’s welfare, infrastructure, and other property at a site, based on proper evaluation and assessment of the potential level of earthquake hazard. The resulting information or outputs can be used as a tool that assists in minimizing risk from earthquakes and can also foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, the potentials of GIS and Remote Sensing were utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common features or factors that were assessed and integrated within the GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude, and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ) culminating in the earthquake hazard zonation of our study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology, and geomorphology. The latter site conditions, or the wave propagation media, were assessed to identify the potential zones. The precept has been that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface. As it propagates, it passes through certain geological or geomorphological and specific soil features, and these features, according to their strength/stiffness/moisture content, aggravate or attenuate the strength of the wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For the earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-Criteria Evaluation (MCE) with Saaty’s Analytical Hierarchy Process (AHP) was adopted for this study. It is a GIS technique that involves the integration of several factors (thematic layers) that can potentially contribute to liquefaction triggered by earthquake hazard. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction. The weights and rankings assigned to each factor are normalized with the AHP technique. The spatial analysis tools in ArcGIS 10, i.e., raster calculator, reclassify, and overlay analysis, were mainly employed in the study. The final LPZ and earthquake hazard outputs were reclassified into ‘Very High’, ‘High’, ‘Moderate’, ‘Low’, and ‘Very Low’ to indicate the levels of hazard within the study region.
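For reference, a minimal sketch of Saaty's AHP weighting step used in this kind of multi-criteria evaluation is shown below: it derives normalized weights from a pairwise comparison matrix and checks the consistency ratio. The example matrix is hypothetical, not the study's actual factor comparisons.

# Minimal AHP sketch: principal-eigenvector weights and consistency ratio for a
# hypothetical 3-factor pairwise comparison matrix (e.g. geology, geomorphology, PGA).
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
i_max = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, i_max].real)
weights /= weights.sum()

lambda_max = eigvals.real[i_max]
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)          # consistency index
ri = 0.58                                # Saaty's random index for n = 3
cr = ci / ri                             # consistency ratio (acceptable if < 0.1)

print("weights:", weights.round(3), "CR:", round(cr, 3))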

Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism

Procedia PDF Downloads 259
2648 Pavement Quality Evaluation Using Intelligent Compaction Technology: Overview of Some Case Studies in Oklahoma

Authors: Sagar Ghos, Andrew E. Elaryan, Syed Ashik Ali, Musharraf Zaman, Mohammed Ashiqur Rahman

Abstract:

Achieving the desired density during construction is an important indicator of pavement quality. Insufficient compaction often compromises pavement performance and service life. Intelligent compaction (IC) is an emerging technology for monitoring compaction quality during the construction of asphalt pavements. This paper aims to provide an overview of findings from four case studies in Oklahoma involving the compaction quality of asphalt pavements, namely the SE 44th St project (Project 1), the EOC Turnpike project (Project 2), the Highway 92 project (Project 3), and the 108th Avenue project (Project 4). For this purpose, an IC technology, the intelligent compaction analyzer (ICA), developed at the University of Oklahoma, was used to evaluate compaction quality. Collected data include GPS locations, roller vibrations, roller speed, direction of movement, and temperature of the asphalt mat. The collected data were analyzed using VETA, a widely used software package. The average densities for Projects 1, 2, 3, and 4 were found to be 89.8%, 91.50%, 90.7%, and 87.5%, respectively. The maximum densities were found to be 94.6%, 95.8%, 95.9%, and 89.7% for Projects 1, 2, 3, and 4, respectively. It was observed that the ICA-estimated densities correlated well with the field core densities. The ICA results indicated that at least 90% of the asphalt mats were subjected to at least two roller passes. However, the number of passes required to achieve the desired density (94% to 97%) differed from project to project depending on the underlying layer. The results of these case studies show both the opportunities and the challenges of using IC for monitoring compaction quality in real time during construction.

Keywords: asphalt pavement construction, density, intelligent compaction, intelligent compaction analyzer, intelligent compaction measure value

Procedia PDF Downloads 145
2647 Simulation of Lean Principles Impact in a Multi-Product Supply Chain

Authors: Matteo Rossini, Alberto Portioli Staudacher

Abstract:

Market competition is moving from the level of the single firm to that of the whole supply chain because of increasing competition and a growing need for operational efficiency and customer orientation. Supply chain management allows companies to look beyond their organizational boundaries to develop and leverage the resources and capabilities of their supply chain partners. This creates competitive advantages in the marketplace, and because of this, SCM has acquired strategic importance. The lean approach is a management strategy that focuses on reducing every type of waste present in an organization. This approach is becoming more and more popular among supply chain managers, yet its application at the supply chain level is not widely diffused. The impacts of lean approach principles in a supply chain context are not well studied. In the literature, there are only a few studies simulating lean approach performance in single-product supply chains. This research work studies the impacts of lean principles implementation along a supply chain. To achieve this, a simulation model of a three-echelon multi-product supply chain has been built. A Kanban system (with several priority policies) and different degrees of setup time reduction are implemented in the lean-configured supply chain to apply the pull and lot-size reduction principles, respectively. To evaluate the benefits of the lean approach, the lean supply chain is compared with an EOQ-configured supply chain. The simulation results show that the Kanban system and setup time reduction improve inventory stock levels. They also show that logistics efforts are affected by the degree of lean implementation. The paper concludes by describing the performance of the lean supply chain in different contexts.

Keywords: inventory policy, Kanban, lean supply chain, simulation study, supply chain management, planning

Procedia PDF Downloads 349
2646 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both static and dynamic environments (denoted SWTA and DWTA, respectively). Because the problem must be solved within an operationally relevant computation time, solution efficiency has been a persistent difficulty, and SWTA and DWTA have therefore been solved only for limited battlefield situations. In this paper, the general continuous-time situation is addressed as the time-based weapon-target assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are proposed: a decomposed opt-opt algorithm, a decomposed opt-greedy algorithm, and a greedy algorithm. Although the TWTA optimization model becomes inefficient for large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracts good solutions in a reasonable computation time. Because the scheduling part takes too long to solve with the optimization model, several greedy-based algorithms are also proposed; they yield lower performance values than the decomposed opt-opt algorithm but require very little computation time. Hence, this paper proposes an improved method that applies decomposition to TWTA, from which more practical and effective methods can be developed for using TWTA on the battlefield.
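
The paper's decomposed opt-opt and greedy heuristics are not reproduced in the abstract. For orientation only, the following Python sketch implements the classic maximum-marginal-return greedy heuristic for the static WTA objective (reducing expected surviving target value); it is not the authors' time-based algorithm, and the kill probabilities and target values shown are hypothetical.

def greedy_wta(kill_prob, target_value):
    # Greedy heuristic for static weapon-target assignment: give each weapon, in turn,
    # to the target with the largest marginal reduction in expected surviving value.
    # kill_prob[w][t]: probability that weapon w destroys target t.
    # target_value[t]: value of target t.
    n_weapons, n_targets = len(kill_prob), len(target_value)
    survival = list(target_value)          # expected surviving value per target
    assignment = [None] * n_weapons
    for w in range(n_weapons):
        gains = [survival[t] * kill_prob[w][t] for t in range(n_targets)]
        best = max(range(n_targets), key=lambda t: gains[t])
        assignment[w] = best
        survival[best] *= (1 - kill_prob[w][best])
    return assignment

# Hypothetical example: three weapons, two targets.
p = [[0.7, 0.4], [0.5, 0.6], [0.3, 0.8]]
v = [10.0, 8.0]
print(greedy_wta(p, v))  # e.g. [0, 1, 1]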

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 328
2645 Mental Health Challenges, Internalizing and Externalizing Behavior Problems, and Academic Challenges among Adolescents from Broken Families

Authors: Fadzai Munyuki

Abstract:

Parental divorce is one of the most stressful life events for youth and is associated with long-lasting emotional and behavioral problems. Over the last few decades, research has consistently found strong associations between divorce and adverse health effects in adolescents. Parental divorce has been hypothesized to lead to psychosocial development problems, mental health challenges, internalizing and externalizing behavior problems, and low academic performance among adolescents. This is supported by positive youth development theory, which states that the family setup plays a major role in adolescent development and well-being. The focus of this research is therefore to test this hypothesized process model among adolescents in five provinces in Zimbabwe. A cross-sectional study will be conducted, and 1,840 adolescents (n = 1840) aged 14 to 17 will be recruited. A stress questionnaire scale, a Child Behavior Checklist scale, and an academic concept scale will be used in this study, and data analysis will be done using structural equation modeling. Prior research in this area has several limitations, including the lack of a 'real-time' study, few cross-sectional studies, the absence of a thorough and validated population measure, and a tendency to focus on a single variable in relation to parental divorce. This study therefore seeks to bridge the gap between past research and the current literature by using a validated population measure, conducting a real-time study, and combining three latent variables.

Keywords: mental health, internalizing and externalizing behavior, divorce, academic achievements

Procedia PDF Downloads 63
2644 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network

Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi

Abstract:

Energy, delay, and bandwidth are the prime concerns in wireless sensor networks (WSNs), so optimizing energy usage and bandwidth utilization is essential. Event-triggered data aggregation supports these goals in the event-affected area of a WSN. Reliable delivery of critical information to the sink node is another major challenge. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) whenever an event is triggered, the event-triggered node selects a cluster head; (2) the cluster head gathers data from the sensor nodes within the cluster; (3) the cluster head identifies and classifies events from the collected data using a Bayesian classifier; (4) the data are aggregated using statistical methods; (5) the cluster head discovers paths to the sink node based on residual energy, path distance, and bandwidth; (6) if the aggregated data are critical, the cluster head sends them over multiple paths for reliable communication; (7) otherwise, the aggregated data are transmitted towards the sink node over the single path with the highest bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
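
Steps (5) to (7) describe how the cluster head chooses between multipath and single-path forwarding. The following minimal Python sketch illustrates one way such a decision could look; the scoring weights, the Path fields, and the choice of two paths for critical data are assumptions made for illustration and are not specified in the abstract.

from dataclasses import dataclass

@dataclass
class Path:
    hops: int                # path distance to the sink, in hops
    residual_energy: float   # minimum residual energy along the path
    bandwidth: float         # available bandwidth on the path

def path_score(p, w_energy=0.5, w_bw=0.3, w_dist=0.2):
    # Higher is better: favor residual energy and bandwidth, penalize long paths.
    return w_energy * p.residual_energy + w_bw * p.bandwidth - w_dist * p.hops

def forward_aggregate(paths, critical, k=2):
    # Critical aggregated data go over the k best paths (multipath);
    # otherwise only the single best path is used.
    ranked = sorted(paths, key=path_score, reverse=True)
    return ranked[:k] if critical else ranked[:1]

# Hypothetical candidate paths discovered by the cluster head.
candidates = [Path(3, 40.0, 250.0), Path(5, 55.0, 180.0), Path(4, 30.0, 300.0)]
print(forward_aggregate(candidates, critical=True))   # two best paths
print(forward_aggregate(candidates, critical=False))  # single best path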

Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication

Procedia PDF Downloads 439