Search results for: indoor network performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16451

8441 Radio-Frequency Identification (RFID) Based Smart Helmet for Coal Miners

Authors: Waheeda Jabbar, Ali Gul, Rida Noor, Sania Kurd, Saba Gulzar

Abstract:

Hundreds of miners die in mining accidents each year due to poisonous gases found in underground mining areas. This paper proposes an approach to protect the lives of mine workers. A supervising system based on the ZigBee wireless technique is designed around smart protective helmets that provide real-time surveillance and give early warnings when poisonous gases are present, in order to save mineworkers from the dangers these gases pose. A wireless sensor network is established over ZigBee by integrating gas sensors into the helmet; in addition, the helmet embeds a heartbeat sensor to monitor the pulse rate and thus the physical condition of a mineworker, further increasing safety. Radio-frequency identification (RFID) technology is used to locate workers, and a ZigBee-based base station is set up to control the communication. The idea is implemented, and the results are verified through experiments.
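
To make the base-station logic concrete, here is a minimal sketch of the kind of threshold check such a system could run on each helmet report. The alarm thresholds, message fields, and helper names are illustrative assumptions, not values from the paper:

```python
# Hypothetical base-station logic for the helmet network described above:
# each helmet periodically reports a gas reading (MQ7, CO in ppm), a pulse
# rate, and an RFID zone tag; readings outside safe bounds raise a warning.
from dataclasses import dataclass

CO_ALARM_PPM = 35        # assumed alarm level for carbon monoxide
PULSE_RANGE = (40, 140)  # assumed plausible pulse-rate band (bpm)

@dataclass
class HelmetReport:
    helmet_id: str
    co_ppm: float
    pulse_bpm: int
    rfid_zone: str       # last RFID reader the worker passed

def check(report: HelmetReport) -> list[str]:
    """Return the warnings the base station should broadcast."""
    warnings = []
    if report.co_ppm >= CO_ALARM_PPM:
        warnings.append(f"{report.helmet_id}: CO {report.co_ppm} ppm in zone {report.rfid_zone}")
    lo, hi = PULSE_RANGE
    if not lo <= report.pulse_bpm <= hi:
        warnings.append(f"{report.helmet_id}: abnormal pulse {report.pulse_bpm} bpm")
    return warnings

print(check(HelmetReport("H-07", co_ppm=52.0, pulse_bpm=88, rfid_zone="B2")))
```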

Keywords: Arduino, gas sensor (MQ7), RFID, wireless ZigBee

Procedia PDF Downloads 432
8440 Rural Sanitation in India: Special Context in the State of Odisha

Authors: Monalisha Ghosh, Asit Mohanty

Abstract:

The lack of sanitation increases living costs, decreases spending on education and nutrition, lowers income-earning potential, and threatens safety and welfare. This is especially true for rural India, where only 32% of households have their own toilets; less than half of Indian households overall have a toilet at home. Of the estimated billion people in the world who defecate in the open, more than half reside in rural India. It is empirically established that poor sanitation leads to a high infant mortality rate and low income generation in rural India. In India, 1,600 children die every day before reaching their fifth birthday, and 24% of girls drop out of school due to the lack of basic sanitation. Above all, lack of sanitation is not a symptom of poverty but a major contributing factor to it. According to the 2011 Census, 67.3% of rural households in the country still did not have access to sanitation facilities. India's sanitation deficit leads to losses worth roughly 6% of its gross domestic product (GDP), according to World Bank estimates, by raising the disease burden in the country. The dropout rate for girls in rural schools is thirty percent because of the lack of sanitation facilities for girl students. The annual productivity loss per skilled laborer in Odisha is calculated at Rs. 44,160. The performance of the state of Odisha in improving sanitation facilities has not been satisfactory. The biggest challenge is triggering behavior change regarding the need to use toilets in a vast section of the rural population. Another major challenge is the funding and implementation of improved sanitation facilities. In an environment of constrained economic resources, public-private partnership, in the form of performance-based management or maintenance contracts, will be all the more relevant to improving the sanitation status of the rural sector.

Keywords: rural sanitation, infant mortality rate, income, Granger causality, pooled OLS method, public-private partnership

Procedia PDF Downloads 404
8439 Catalytic Performance of Fe3O4 Nanoparticles (Fe3O4 NPs) in the Synthesis of Pyrazolines

Authors: Ali Gharib, Leila Vojdanifard, Nader Noroozi Pesyan

Abstract:

Different pyrazoline derivatives were synthesized by the cyclization of substituted chalcone derivatives in the presence of hydrazine hydrate. A series of novel 1,3,5-triaryl pyrazoline derivatives was synthesized in high yields by the reaction of chalcones with phenylhydrazine in the presence of Fe3O4 NPs. The structures of the compounds obtained were determined by IR and 1H NMR spectra. The Fe3O4 NPs were recycled, and no appreciable change in activity was noticed after three cycles.

Keywords: pyrazoline, chalcone, nanoparticles, Fe3O4, catalyst, synthesis

Procedia PDF Downloads 382
8438 Flood Susceptibility Assessment of Mandaluyong City Using Analytic Hierarchy Process

Authors: Keigh D. Guinto, Ma. Romina M. Santos

Abstract:

Floods are among the most catastrophic natural disasters in the Philippines. Twelve (12) million people reside in Metro Manila, National Capital Region (NCR), an area prone to flooding. A flood can cause widespread devastation, resulting in damaged properties and infrastructure and loss of life. Using the analytic hierarchy process, six (6) parameters were selected, namely elevation, slope, lithology, distance from the river, river network density, and flow accumulation. Ranking of these parameters demonstrates that distance from the river, with 25.31%, and river density, with 17.30%, are the strongest causative factors of flooding. These are followed by flow accumulation with 16.72%, elevation with 15.33%, and slope with 13.53%, while the weakest causative factor is lithology with 11.8%. The generated flood susceptibility map of Mandaluyong has three (3) classes: high susceptibility, moderate susceptibility, and low susceptibility. The flood susceptibility map generated in this study can be used as an aid for flood mitigation planning, land use planning, and general public awareness, and can also be applied in the emergency and disaster risk management of Mandaluyong.
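
As a rough illustration of how AHP produces such weights, the sketch below derives priority weights and a consistency ratio from a hypothetical 3x3 pairwise comparison matrix covering three of the six parameters; the matrix entries are invented for illustration and are not taken from the study:

```python
# Minimal AHP weight derivation (principal-eigenvector method) for a
# hypothetical 3x3 pairwise comparison matrix; values are illustrative.
import numpy as np

criteria = ["distance from river", "river density", "flow accumulation"]
# A[i, j] = judged importance of criterion i relative to criterion j
A = np.array([[1.0, 2.0, 2.0],
              [0.5, 1.0, 1.0],
              [0.5, 1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                 # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                # normalized priority weights

# Consistency check: CI = (lambda_max - n) / (n - 1), compared to random index RI
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = 0.58                                   # Saaty's random index for n = 3
print(dict(zip(criteria, w.round(4))), "CR =", round(CI / RI, 4))
```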

Keywords: analytical hierarchy process, assessment, flood, geographic information system

Procedia PDF Downloads 175
8437 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible system or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate, and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks, and support activities, heightening the risk of suboptimal system performance, premature obsolescence, or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g., sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion, and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity, and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
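
To make the tradespace idea concrete, the following is a minimal sketch of enumerating a toy sensor-system design space, scoring each alternative with a weighted multi-attribute utility, and keeping the Pareto-efficient set; the components, attribute scores, and stakeholder weights are all invented for illustration:

```python
# Tiny tradespace exploration: enumerate sensor-system designs, score each
# with a weighted multi-attribute utility, and keep the designs that are
# Pareto-efficient on (utility, cost).
from itertools import product

sensors = [("lidar", 0.9, 12.0), ("radar", 0.7, 6.0), ("camera", 0.5, 2.0)]
cpus    = [("edge", 0.6, 3.0), ("server", 0.9, 9.0)]
fusion  = [("kalman", 0.7, 1.0), ("deep", 0.9, 4.0)]
weights = (0.5, 0.3, 0.2)   # assumed stakeholder weights per subsystem

designs = []
for (s, su, sc), (c, cu, cc), (f, fu, fc) in product(sensors, cpus, fusion):
    utility = weights[0] * su + weights[1] * cu + weights[2] * fu
    cost = sc + cc + fc
    designs.append((f"{s}+{c}+{f}", utility, cost))

def dominated(d, others):
    # d is dominated if another design is strictly better on one axis
    # and no worse on the other
    return any((o[1] >= d[1] and o[2] < d[2]) or (o[1] > d[1] and o[2] <= d[2])
               for o in others)

pareto = [d for d in designs if not dominated(d, designs)]
for name, u, c in sorted(pareto, key=lambda d: d[2]):
    print(f"{name:22s} utility={u:.2f} cost={c:.1f}")
```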

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 165
8436 Designing Energy Efficient Buildings for Seasonal Climates Using Machine Learning Techniques

Authors: Kishor T. Zingre, Seshadhri Srinivasan

Abstract:

Energy consumption by the building sector is increasing at an alarming rate throughout the world, leading to more building-related CO₂ emissions into the environment. In buildings, the main contributors to energy consumption are heating, ventilation, and air-conditioning (HVAC) systems, lighting, and electrical appliances. It is hypothesised that energy efficiency in buildings can be achieved by implementing sustainable technologies such as i) enhancing the thermal resistance of fabric materials to reduce heat gain (in hotter climates) and heat loss (in colder climates), ii) enhancing daylight and the lighting system, iii) the HVAC system, and iv) occupant localization. The energy performance of these sustainable technologies is highly dependent on climatic conditions. This paper investigated the use of machine learning techniques for the accurate prediction of air-conditioning energy in seasonal climates. The data required to train the machine learning techniques were obtained from computational simulations performed on a 3-story commercial building using the EnergyPlus program plugged in with OpenStudio and Google SketchUp. The EnergyPlus model was calibrated against experimental measurements of surface temperatures and heat flux prior to being employed for the simulations. It has been observed from the simulations that the performance of sustainable fabric materials (for walls, roof, and windows) such as phase change materials, insulation, cool roofs, etc., varies with the climate conditions. Various renewable technologies were also applied to the building's flat roof in various climates to investigate the potential for electricity generation. It has been observed that the proposed technique overcomes the shortcomings of existing approaches, such as local linearization or over-simplifying assumptions. In addition, the proposed method can be used for real-time estimation of building air-conditioning energy.
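
A minimal sketch of the prediction step is given below: a regressor is trained on the kind of features an EnergyPlus run could export to predict air-conditioning energy. The feature set and the synthetic data are placeholders, not the paper's calibrated model:

```python
# Train a regressor on weather/envelope features to predict AC energy;
# random data stands in for EnergyPlus simulation output.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(-5, 35, n),    # outdoor dry-bulb temperature (C)
    rng.uniform(0, 1000, n),   # solar irradiance (W/m2)
    rng.uniform(0.2, 3.0, n),  # wall U-value (W/m2K)
])
# synthetic cooling load: grows with temperature above 18 C, sun, and U-value
y = 2.0 * np.clip(X[:, 0] - 18, 0, None) + 0.01 * X[:, 1] + 5 * X[:, 2] \
    + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("MAE:", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```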

Keywords: building energy efficiency, energyplus, machine learning techniques, seasonal climates

Procedia PDF Downloads 105
8435 Computer Aided Diagnostic System for Detection and Classification of a Brain Tumor through MRI Using Level Set Based Segmentation Technique and ANN Classifier

Authors: Atanu K Samanta, Asim Ali Khan

Abstract:

Due to the acquisition of huge amounts of brain tumor magnetic resonance images (MRI) in clinics, it is very difficult for radiologists to manually interpret and segment these images within a reasonable span of time. Computer-aided diagnosis (CAD) systems can enhance the diagnostic capabilities of radiologists and reduce the time required for accurate diagnosis. An intelligent computer-aided technique for automatic detection of a brain tumor through MRI is presented in this paper. The technique uses the following computational methods: the level set method for segmentation of the brain tumor from other brain parts, extraction of features from the segmented tumor region using the gray-level co-occurrence matrix (GLCM), and an artificial neural network (ANN) to classify brain tumor images according to their respective types. The entire work is carried out on 50 images covering five types of brain tumor. The overall classification accuracy using this method is found to be 98%, which is significantly good.
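
The GLCM feature-extraction step can be sketched with scikit-image as below (function names follow recent scikit-image releases); the synthetic 8-bit patch stands in for the level-set-segmented tumor region:

```python
# GLCM texture features on a segmented region; an ANN classifier would
# consume the resulting feature vector.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder ROI

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)   # feature vector fed to the ANN classifier
```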

Keywords: brain tumor, computer-aided diagnostic (CAD) system, gray-level co-occurrence matrix (GLCM), tumor segmentation, level set method

Procedia PDF Downloads 493
8434 1D Convolutional Networks to Compute Mel-Spectrogram, Chromagram, and Cochleogram for Audio Networks

Authors: Elias Nemer, Greg Vines

Abstract:

Time-frequency transformations and spectral representations of audio signals are commonly used in various machine learning applications. Training networks on frequency features such as the Mel-spectrogram or cochleogram has proven more effective and convenient than training on time samples. In practical realizations, these features are created on a different processor and/or pre-computed and stored on disk, requiring additional effort and making it difficult to experiment with different features. In this paper, we provide a PyTorch framework for creating various spectral features as well as time-frequency transformations and time-domain filter-banks using the built-in trainable conv1d() layer. This allows these features to be computed on the fly as part of a larger network and enables easier experimentation with various combinations and parameters. Our work extends prior work developed to that end, first by adding more of these features and, second, by allowing the possibility of either starting from initialized kernels or training them from random values. The code is written as a template of classes and scripts that users may integrate into their own PyTorch classes, or simply use as is and add more layers for various applications.
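
A minimal sketch of the core idea, assuming a DFT-basis initialization: a conv1d layer whose kernels hold cosine and sine bases computes a magnitude spectrum inside the network graph, and the kernels can be frozen or left trainable. Frame length, hop size, and signal length are illustrative:

```python
# nn.Conv1d initialized with a DFT basis so the spectrogram is computed
# in-graph; leave requires_grad True to make the kernels trainable.
import torch
import torch.nn as nn

n_fft, hop = 256, 128
t = torch.arange(n_fft).float()
k = torch.arange(n_fft // 2 + 1).float()
cos_k = torch.cos(2 * torch.pi * k[:, None] * t[None, :] / n_fft)
sin_k = -torch.sin(2 * torch.pi * k[:, None] * t[None, :] / n_fft)

conv = nn.Conv1d(1, 2 * (n_fft // 2 + 1), kernel_size=n_fft, stride=hop, bias=False)
with torch.no_grad():
    conv.weight.copy_(torch.cat([cos_k, sin_k]).unsqueeze(1))  # (out, 1, n_fft)
conv.weight.requires_grad_(False)   # or leave True to fine-tune the kernels

x = torch.randn(4, 1, 16000)        # batch of 1-second signals at 16 kHz
out = conv(x)
re, im = out.chunk(2, dim=1)
magnitude = torch.sqrt(re**2 + im**2)   # spectrogram-like feature map
print(magnitude.shape)                  # (4, 129, frames)
```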

Keywords: neural networks, Mel-spectrogram, chromagram, cochleogram, discrete Fourier transform, PyTorch conv1d()

Procedia PDF Downloads 217
8433 Heterogeneous Reactions to Digital Opportunities: A Field Study

Authors: Bangaly Kaba

Abstract:

In the global information society, the importance of the Internet cannot be overemphasized. Africa needs access to the powerful information and communication tools of the Internet in order to obtain the resources and efficiency essential for sustainable development. Unfortunately, 2013 data from Internetworldstats showed that only 15% of the African population had access to the Internet. This relatively low Internet penetration rate signals a problem that may threaten the economic development, governmental efficiency, and ultimately the global competitiveness of African countries. Many initiatives have been undertaken to bring the benefits of the global information revolution to the people of Africa through connection to the Internet and other Global Information Infrastructure technologies. The purpose of this study is to understand the differences between socio-economically advantaged and disadvantaged Internet users and, from that, to determine what prevents disadvantaged groups from benefiting from Internet usage. Data were collected through a survey of Internet users in Ivory Coast. The results reveal that personal network exposure, self-efficacy, and availability are the key drivers of continued use intention for the socio-economically disadvantaged group. The theoretical and practical implications are also described.

Keywords: digital inequality, internet, integrative model, socio-economically advantaged and disadvantaged, use continuance, Africa

Procedia PDF Downloads 460
8432 Spare Part Inventory Optimization Policy: A Literature Study

Authors: Zukhrof Romadhon, Nani Kurniati

Abstract:

The availability of spare parts is critical to support maintenance tasks and the production system. Managing spare part inventory involves several parameters and objective functions, as well as the trade-off between inventory costs and spare part availability. Several mathematical models and methods have been developed to optimize spare part policies, and the models proposed by many researchers need to be reviewed to identify other potential models. This work presents a review of pertinent literature on spare part inventory optimization and analyzes the gaps for future research. An initial search of scholarly journal databases under specific keywords related to spare parts found about 17,000 papers. Filtering was conducted based on five main aspects, i.e., replenishment policy, objective function, echelon network, lead time, and model solving, plus the additional aspect of part classification. Future topics could be identified based on the number of papers that have not addressed specific aspects, including the joint optimization of spare part inventory and maintenance.

Keywords: spare part, spare part inventory, inventory model, optimization, maintenance

Procedia PDF Downloads 40
8431 Agile Software Effort Estimation Using Regression Techniques

Authors: Mikiyas Adugna

Abstract:

Effort estimation is among the activities carried out in software development processes, and an accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on it to enhance prediction accuracy. For these reasons, we investigated and proposed a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has four major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, with 80% for training and 20% for testing. Following the train-test split, the two regression algorithms (Elastic Net and LASSO) are trained in two different phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (where a grid is used to tune and select the optimum parameters) and 5-fold cross-validation are applied to obtain the final trained model, which is then evaluated on the testing set. The experimental work is applied to an agile story-point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared methods. Of the two, LASSO regression achieved better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the trained LASSO regression model is the most acceptable and achieves higher estimation performance than that reported in the literature.
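
A compact sketch of the described pipeline, with random data standing in for the story-point dataset: normalization, an 80/20 train-test split, a first fit with default parameters, then alpha tuning by grid search with 5-fold cross-validation:

```python
# Normalize, split 80/20, fit LASSO with defaults, then tune alpha with
# GridSearchCV (5-fold CV). The synthetic features/targets are placeholders.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(42)
X = rng.random((200, 6))                       # placeholder project features
y = X @ np.array([3.0, 1.5, 0, 0, 2.0, 0]) + rng.normal(0, 0.1, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

pipe = make_pipeline(MinMaxScaler(), Lasso())
print("default params R2:", pipe.fit(X_tr, y_tr).score(X_te, y_te))

grid = GridSearchCV(pipe, {"lasso__alpha": [0.001, 0.01, 0.1, 1.0]}, cv=5)
grid.fit(X_tr, y_tr)
print("tuned alpha:", grid.best_params_, "R2:", grid.score(X_te, y_te))
```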

Keywords: agile software development, effort estimation, elastic net regression, LASSO

Procedia PDF Downloads 44
8430 Procedure to Optimize the Performance of a Chemical Laser Using Genetic Algorithm Optimization

Authors: Mohammedi Ferhate

Abstract:

This work presents details of the study of the entire flow inside the facility where the exothermic chemical reaction process in the chemical laser cavity is analyzed. In this paper, we describe the principles of chemical lasers, in which flow reversal is produced by chemical reactions, and explain the device for converting chemical potential energy into laser energy. The phenomenon exhibits an explosive trend. Finally, the feasibility and effectiveness of the proposed method are demonstrated by computer simulation.
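
Since the abstract names a genetic algorithm but gives no detail, the following is a generic sketch of the GA loop (selection, crossover, mutation) on a toy objective; the population size, mutation rate, and objective are illustrative and not from the paper:

```python
# Generic genetic-algorithm loop on a toy objective standing in for a
# laser-performance metric; all parameters are illustrative.
import random

def fitness(x):
    return -(x - 3.7) ** 2      # toy objective, maximized at x = 3.7

def evolve(pop_size=30, generations=60, mut=0.2):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                     # arithmetic crossover
            if random.random() < mut:
                child += random.gauss(0, 1)         # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(round(evolve(), 2))   # converges near 3.7
```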

Keywords: genetic, lasers, nozzle, programming

Procedia PDF Downloads 79
8429 How to Integrate Sustainability in Technological Degrees: Robotics at UPC

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Embedding sustainability in technological curricula has become a crucial factor for educating engineers with competences in sustainability. In 2008, the Technical University of Catalonia (UPC) designed the Sustainable Technology Excellence Program STEP 2015 in order to assure successful sustainability embedding. This program took advantage of the opportunity offered by the redesign of all bachelor and master degrees in Spain by 2010 under the European Higher Education Area (EHEA) framework. The STEP program goals are: to design compulsory courses in each degree; to develop the conceptual base and identify reference models in sustainability for all specialties at UPC; to create an internal interdisciplinary network of faculty from all the schools; to initiate new transdisciplinary research activities in technology-sustainability-education; to spread the know-how attained; to achieve international scientific excellence in technology-sustainability-education; and to graduate the first engineers/architects of the new EHEA bachelors with sustainability as a generic competence. Specifically, in this paper the authors explain their experience in leading the STEP program, and two examples are presented: the Industrial Robotics subject and the curriculum of the School of Architecture.

Keywords: sustainability, curricula improvement, robotics, STEP program

Procedia PDF Downloads 391
8428 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available do not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), so that context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contain labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
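
A minimal sketch of a conversation-structure-aware classifier in the spirit described: each utterance is encoded, a recurrent layer runs over the utterance encodings in thread order, and the state at the target tweet feeds the toxicity head. The bag-of-embeddings encoder and all dimensions are simplifying assumptions:

```python
# Hierarchical text classifier: utterance encoder -> thread-level GRU ->
# toxicity head on the target (last) utterance's state.
import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    def __init__(self, vocab=5000, d=64):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab, d)        # utterance encoder (mean pool)
        self.thread = nn.GRU(d, d, batch_first=True)  # runs over utterances in order
        self.head = nn.Linear(d, 2)                   # toxic / non-toxic

    def forward(self, utterances):                    # list of (batch, tokens) tensors,
        encoded = torch.stack(                        # context first, target tweet last
            [self.embed(u) for u in utterances], dim=1)
        states, _ = self.thread(encoded)
        return self.head(states[:, -1])               # classify the target utterance

model = HierarchicalToxicityClassifier()
context = torch.randint(0, 5000, (8, 20))             # previous tweet in the thread
target = torch.randint(0, 5000, (8, 20))              # tweet being classified
print(model([context, target]).shape)                 # (8, 2) logits
```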

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 156
8427 Near-Peer Mentoring/Curriculum and Community Enterprise for Environmental Restoration Science

Authors: Lauren B. Birney

Abstract:

The BOP-CCERS (Billion Oyster Project-Curriculum and Community Enterprise for Restoration Science) Near-Peer Mentoring Program provides the long-term (five-year) support network to motivate and guide students toward restoration-science-based CTE pathways. Students are selected from middle schools with actively participating BOP-CCERS teachers. Teachers nominate students from grades 6-8 to join cohorts of between 10 and 15 students each. Cohorts are comprised primarily of students from the same school in order to facilitate mentors' travel logistics as well as to sustain connections with students and their families. Each cohort is matched with an exceptional undergraduate or graduate student, either a BOP research associate or a STEM mentor recruited from collaborating City University of New York (CUNY) partner programs. In rare cases, an exceptional high school junior or senior may be matched with a cohort in addition to a research associate or graduate student. In no case is a high school student or minor placed individually with a cohort. Mentors meet with students at least once per month and provide at least one offsite field visit per month, either to a local STEM Hub or a research lab. Keeping with its five-year trajectory, the near-peer mentoring program seeks to retain students in the same cohort with the same mentor for the full duration of middle school and for at least two additional years of high school. Upon reaching the final quarter of 8th grade, the mentor develops a meeting plan for each individual mentee. The mentee and the mentor are required to meet individually or in small groups once per month. Once per quarter, individual meetings are substituted with full-cohort professional outings, in which the mentor organizes the entire cohort on a field visit or an educational workshop with a museum or aquarium partner. In addition to the mentor-mentee relationship, each participating student is also asked to conduct and present his or her own BOP field research. This research is ideally carried out with the support of the student's regular high school STEM subject teacher; however, in cases where the teacher or school does not permit independent study, the student is asked to conduct the research on an extracurricular basis. Near-peer mentoring affects students' social identities and helps them to connect to role models from similar groups, ultimately giving them a sense of belonging. Qualitative and quantitative analyses were performed throughout the study, and interviews and focus groups were also conducted. Additionally, an external evaluator was utilized to ensure project efficacy, efficiency, and effectiveness throughout the entire project. The BOP-CCERS Near-Peer Mentoring Program is a peer support network in which high school students with interest or experience in BOP (Billion Oyster Project) topics and activities (such as classroom oyster tanks, STEM Hubs, or digital platform research) provide mentorship and support for middle school or high school freshman mentees. Peer mentoring not only empowers the students being taught but also increases the content knowledge and engagement of the mentors. This support provides the necessary resources, structure, and tools to assist students in finding success.

Keywords: STEM education, environmental science, citizen science, near peer mentoring

Procedia PDF Downloads 79
8426 Empirical Modeling and Spatial Analysis of Heat-Related Morbidity in Maricopa County, Arizona

Authors: Chuyuan Wang, Nayan Khare, Lily Villa, Patricia Solis, Elizabeth A. Wentz

Abstract:

Maricopa County, Arizona, has a semi-arid hot desert climate and is one of the hottest regions in the United States. The exacerbated urban heat island (UHI) effect caused by rapid urbanization has made the urban area even hotter than its rural surroundings. The Phoenix metropolitan area experiences extremely high temperatures in the summer, from June to September, with daily highs reaching 120 °F (48.9 °C). Morbidity and mortality due to environmental heat are, therefore, a significant public health issue in Maricopa County, especially because they are largely preventable. Public records from the Maricopa County Department of Public Health (MCDPH) revealed that between 2012 and 2016, there were 10,825 heat-related morbidity incidents, 267 outdoor environmental heat deaths, and 173 indoor heat-related deaths. A lot of research has examined heat-related death and its contributing factors around the world, but little has been done regarding heat-related morbidity, especially for regions that are naturally hot in the summer. The objective of this study is to examine the demographic, socio-economic, housing, and environmental factors that contribute to heat-related morbidity in Maricopa County. We obtained heat-related morbidity data between 2012 and 2016 at the census tract level from MCDPH. Demographic, socio-economic, and housing variables were derived using the 2012-2016 American Community Survey 5-year estimates from the U.S. Census. Remotely sensed Landsat 7 ETM+ and Landsat 8 OLI satellite images and Level-1 products were acquired for all the summer months (June to September) from 2012 to 2016. The National Land Cover Database (NLCD) 2016 percent tree canopy and percent developed imperviousness data were obtained from the U.S. Geological Survey (USGS). We used ordinary least squares (OLS) regression analysis to examine the empirical relationship between all the independent variables and the heat-related morbidity rate. Results showed that higher morbidity rates are found in census tracts with higher values of population aged 65 and older, population under poverty, disability, no vehicle ownership, white non-Hispanic population, population with less than a high school degree, land surface temperature, and surface reflectance, but lower values of the normalized difference vegetation index (NDVI) and housing occupancy. The regression model explains up to 59.4% of the total variation in heat-related morbidity in Maricopa County. The multiscale geographically weighted regression (MGWR) technique was then used to examine the spatially varying relationships between the heat-related morbidity rate and all the significant independent variables. The R-squared value of the MGWR model increased to 0.691, a significant improvement in goodness-of-fit over the global OLS model, which means that the spatial heterogeneity of some independent variables is another important factor that influences the relationship with heat-related morbidity in Maricopa County. Among these variables, population aged 65 and older, the Hispanic population, disability, vehicle ownership, and housing occupancy have much stronger local effects than the other variables.
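
The global OLS stage can be sketched as below, with random data standing in for the census- and Landsat-derived variables; the coefficients are invented, and the printed R-squared is merely analogous to the 0.594 reported:

```python
# Tract-level OLS: regress morbidity rate on a few predictors of the kind
# named above; synthetic data replaces the census/Landsat variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300                                   # census tracts
X = np.column_stack([
    rng.uniform(0, 0.3, n),               # share aged 65+
    rng.uniform(0, 0.4, n),               # share under poverty
    rng.uniform(290, 320, n),             # land surface temperature (K)
    rng.uniform(0, 0.8, n),               # NDVI
])
rate = 5 + 20 * X[:, 0] + 15 * X[:, 1] + 0.3 * (X[:, 2] - 290) - 6 * X[:, 3] \
       + rng.normal(0, 1, n)

model = sm.OLS(rate, sm.add_constant(X)).fit()
print("R-squared:", round(model.rsquared, 3))
```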

Keywords: census, empirical modeling, heat-related morbidity, spatial analysis

Procedia PDF Downloads 111
8425 Planning of Construction Material Flow Using Hybrid Simulation Modeling

Authors: A. M. Naraghi, V. Gonzalez, M. O'Sullivan, C. G. Walker, M. Poshdar, F. Ying, M. Abdelmegid

Abstract:

Discrete Event Simulation (DES) and Agent-Based Simulation (ABS) are two simulation approaches that have been proposed to support decision-making in the construction industry. Despite the wide use of these simulation approaches in the construction field, their application to production and material planning is still limited. This is largely due to the dynamic and complex nature of construction material supply chain systems. Moreover, managing the flow of construction material is not well integrated with site logistics in traditional construction planning methods. This paper presents a hybrid of DES and ABS to simulate on-site and off-site material supply processes. DES is applied to determine the best production scenarios using information about the on-site production systems, while ABS is used to optimize the supply chain network. A case study of a construction piling project in New Zealand is presented, illustrating the potential benefits of using the proposed hybrid simulation model in construction material flow planning. The hybrid model presented can be used to evaluate the impact of different decisions on construction supply chain management.
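
A miniature discrete-event sketch of the on-site side is shown below using SimPy: trucks deliver piles to a site with one unloading crew, and the model reports mean waiting time. Arrival and service times are invented, and the agent-based supply-chain side of the hybrid model is omitted:

```python
# DES of site logistics: trucks queue for a single unloading crew.
import random
import simpy

def truck(env, crew, waits):
    arrive = env.now
    with crew.request() as req:          # queue for the unloading crew
        yield req
        waits.append(env.now - arrive)
        yield env.timeout(random.expovariate(1 / 20))  # ~20 min unloading

def arrivals(env, crew, waits):
    for _ in range(30):
        env.process(truck(env, crew, waits))
        yield env.timeout(random.expovariate(1 / 25))  # ~25 min between trucks

random.seed(3)
env = simpy.Environment()
crew = simpy.Resource(env, capacity=1)
waits = []
env.process(arrivals(env, crew, waits))
env.run()
print(f"mean wait: {sum(waits) / len(waits):.1f} min over {len(waits)} trucks")
```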

Keywords: construction supply-chain management, simulation modeling, decision-support tools, hybrid simulation

Procedia PDF Downloads 189
8424 Threat of Islamic State of Khorasan in Pakistan and Afghanistan Region: Impact on Regional Security

Authors: Irfan U. Din

Abstract:

The growing presence and operational capacity of the Islamic State, aka Daesh, which emerged in the Pak-Afghan region in 2015, poses a serious threat to the already fragile security situation in the region. This paper sheds light on the current state of the IS-K network in the Pak-Afghan region and explains how its presence and operational capacity in northern and central Afghanistan have increased despite intensive military operations against the group in Nangarhar province, the stronghold of IS-K. It also explores the role of the Pakistani Taliban in the emergence and expansion of IS-K in the region and unveils the security implications of the growing nexus of IS-K and transnational organized groups for the region in a post-NATO-withdrawal scenario. The study is qualitative and relies on secondary and primary data to explore the topic. For secondary data, existing literature on the topic is extensively reviewed, while for primary data, in-depth interviews are conducted with subject experts, Taliban commanders, and field researchers.

Keywords: Islamic State of Khorasan (IS-K), North Atlantic Treaty Organization (NATO), Pak-Afghan Region, Transnational Organized Crime (TNOC)

Procedia PDF Downloads 280
8423 Deep Learning for the Generation of Weights for Image Captioning Using a Part-of-Speech Approach

Authors: Tiago do Carmo Nogueira, Cássio Dener Noronha Vinhal, Gélson da Cruz Júnior, Matheus Rudolfo Diedrich Ullmann

Abstract:

Generating automatic image descriptions through natural language is a challenging task. Image captioning is the task of describing an image by combining computer vision and natural language processing techniques. To accomplish this task, cutting-edge models use encoder-decoder structures: Convolutional Neural Networks (CNN) extract the characteristics of the images, and Recurrent Neural Networks (RNN) generate the descriptive sentences. However, cutting-edge approaches still suffer from generating incorrect captions and from error accumulation in the decoders. To solve this problem, we propose a model based on the encoder-decoder structure that introduces a module generating word weights according to each word's importance in forming the sentence, using part-of-speech (PoS) tags. The results demonstrate that our model surpasses state-of-the-art models.
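
One plausible reading of the PoS-weighting idea, sketched minimally: each target word's cross-entropy loss is scaled by a PoS-derived importance so that content words influence training more than function words. The weight table and tagging are assumptions, not the authors' module:

```python
# PoS-weighted caption loss: content words (nouns, verbs) weigh more than
# function words in the per-token cross-entropy.
import torch
import torch.nn as nn

POS_WEIGHT = {"NOUN": 1.5, "VERB": 1.3, "ADJ": 1.2, "DET": 0.7}  # assumed table

def pos_weighted_loss(logits, targets, pos_tags):
    """logits: (batch, seq, vocab); targets: (batch, seq); pos_tags: list of lists."""
    ce = nn.functional.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none")        # (batch, seq)
    w = torch.tensor([[POS_WEIGHT.get(t, 1.0) for t in seq] for seq in pos_tags])
    return (ce * w).mean()

logits = torch.randn(2, 4, 100)
targets = torch.randint(0, 100, (2, 4))
tags = [["DET", "NOUN", "VERB", "NOUN"], ["NOUN", "VERB", "DET", "NOUN"]]
print(pos_weighted_loss(logits, targets, tags))
```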

Keywords: gated recurrent units, caption generation, convolutional neural network, part-of-speech

Procedia PDF Downloads 85
8422 Neural Network Approach to Classifying Truck Traffic

Authors: Ren Moses

Abstract:

The process of classifying vehicles on a highway is hereby viewed as a pattern recognition problem in which connectionist techniques such as artificial neural networks (ANN) can be used to assign vehicles to their correct classes and hence to establish optimum axle spacing thresholds. In the United States, vehicles are typically classified into 13 classes using a methodology commonly referred to as “Scheme F”. In this research, the ANN model was developed, trained, and applied to field data of vehicles. The data comprised of three vehicular features—axle spacing, number of axles per vehicle, and overall vehicle weight. The ANN reduced the classification error rate from 9.5 percent to 6.2 percent when compared to an existing classification algorithm that is not ANN-based and which uses two vehicular features for classification, that is, axle spacing and number of axles. The inclusion of overall vehicle weight as a third classification variable further reduced the error rate from 6.2 percent to only 3.0 percent. The promising results from the neural networks were used to set up new thresholds that reduce classification error rate.
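
A minimal sketch of such a classifier: a small MLP over the three vehicular features (axle spacing, axle count, gross weight). The synthetic data and class structure are placeholders for real Scheme F data:

```python
# MLP vehicle classifier on three features; synthetic data stands in for
# field measurements, and the toy labels replace the 13 Scheme F classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 600
axles = rng.integers(2, 6, n)                      # axles per vehicle
spacing = rng.uniform(2, 12, n) + axles            # mean axle spacing (ft)
weight = axles * rng.uniform(4, 12, n)             # gross weight (kips)
X = np.column_stack([spacing, axles, weight])
y = axles - 2                                      # toy class labels (0..3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=5))
clf.fit(X_tr, y_tr)
print("error rate:", round(1 - clf.score(X_te, y_te), 3))
```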

Keywords: artificial neural networks, vehicle classification, traffic flow, traffic analysis, highway operations

Procedia PDF Downloads 297
8421 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques

Authors: Chandu Rathnayake, Isuri Anuradha

Abstract:

Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, the application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances, where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification; it analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the pre-trained ResNet50 model. CNNs are well suited for image analysis and feature extraction; by training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision-tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
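
The transfer-learning step can be sketched as below: a ResNet50 backbone pretrained on ImageNet is frozen and given a new classification head for the X-ray classes. The class count and the torchvision weights enum are assumptions about the setup:

```python
# ResNet50 transfer learning: freeze the ImageNet backbone, replace the
# final layer with a head for the X-ray classes, train only the head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)   # e.g. normal + 3 diseases

x = torch.randn(2, 3, 224, 224)      # batch of preprocessed X-ray images
print(model(x).shape)                # (2, 4) logits; train only model.fc
```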

Keywords: CNN, random forest, decision tree, machine learning, deep learning

Procedia PDF Downloads 64
8420 The Effectiveness of Synthesizing A-Pillar Structures in Passenger Cars

Authors: Chris Phan, Yong Seok Park

Abstract:

The Toyota Camry is one of the best-selling cars in America. It is economical, reliable, and, most importantly, safe. These attributes made the Camry a trustworthy choice for buyers seeking a dependable vehicle. However, a new finding called the Camry's safety into question. Since 1997, the Camry had received a "good" rating on its moderate overlap front crash test through the Insurance Institute for Highway Safety. In 2012, the Insurance Institute for Highway Safety introduced a frontal small overlap crash test into the overall evaluation of vehicle occupant safety. The 2012 Camry received a "poor" rating on this new test, while the 2015 Camry redeemed itself with a "good" rating once again. This study aims to find a possible solution that Toyota implemented to reduce the severity of a frontal small overlap crash in the Camry during a mid-cycle update. The purpose of this study is to analyze and evaluate the performance of various A-pillar shapes as energy-absorbing structures in improving passenger safety in a frontal crash. First, the A-pillar structures of the 2012 and 2015 Camry were modeled using CAD software, namely SolidWorks. Then, a crash test simulation using ANSYS software was applied to the A-pillars to analyze the behavior of the structures under similar conditions. Finally, the results were compared against cabin intrusion safety values to determine the crashworthiness of both A-pillar structures by measuring total deformation. This study highlights that Toyota may have improved the shape of the A-pillar in the 2015 Camry in order to receive a "good" rating from the IIHS safety evaluation once again. These findings can potentially be used to increase safety performance in future vehicles and decrease passenger injury or fatality.

Keywords: A-pillar, Crashworthiness, Design Synthesis, Finite Element Analysis

Procedia PDF Downloads 102
8419 A Deep Learning-Based Pedestrian Trajectory Prediction Algorithm

Authors: Haozhe Xiang

Abstract:

With the rise of the Internet of Things era, intelligent products are gradually integrating into people's lives. Pedestrian trajectory prediction has become a key issue, crucial for the motion path planning of intelligent agents such as autonomous vehicles, robots, and drones. In the current technological context, deep learning technology is becoming increasingly sophisticated and is gradually replacing traditional models. Pedestrian trajectory prediction algorithms combining neural networks and attention mechanisms have significantly improved prediction accuracy. Based on in-depth research on deep learning and pedestrian trajectory prediction algorithms, this article focuses on modeling the physical environment and learning the time dependence of historical trajectories, while also handling the social interaction between pedestrians and the scene interaction between pedestrians and the environment. An improved pedestrian trajectory prediction algorithm is proposed by analyzing existing model architectures. With these improvements, acceptable predicted trajectories were successfully obtained, and experiments on public datasets demonstrated the algorithm's effectiveness.
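
A toy version of the sequence model such work builds on: an LSTM reads a pedestrian's past (x, y) positions and predicts the next step. The social-interaction and scene-interaction terms from the paper are omitted here:

```python
# Minimal trajectory predictor: LSTM over past positions -> next offset.
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)

    def forward(self, xy):                  # (batch, timesteps, 2)
        h, _ = self.lstm(xy)
        return self.out(h[:, -1])           # predicted next (x, y) offset

model = TrajectoryLSTM()
past = torch.randn(16, 8, 2)                # 8 observed positions per pedestrian
print(model(past).shape)                    # (16, 2)
```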

Keywords: deep learning, graph convolutional network, attention mechanism, LSTM

Procedia PDF Downloads 49
8418 Evaluation of Coupled CFD-FEA Simulation for Fire Determination

Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham

Abstract:

Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies on a replication of the physical experimental standards test LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated thermal data, as the fire develops, into the FEA solver throughout the simulation; likewise, the mechanical changes are updated back to the CFD solver to include geometric changes in the solution. For the CFD calculations, the Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme adapted specifically to fire problems, and its applicability has been validated in past benchmark cases. In addition, the FEA solver ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
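
One-way coupling can be illustrated in miniature: a coarse CFD boundary temperature field is interpolated onto finer FEA surface nodes at each coupling step and fed to a structural update. All numbers and the linear-expansion response below are illustrative placeholders, not the FDS-2-ABAQUS implementation:

```python
# One-way coupling loop in miniature: CFD "output" (coarse boundary
# temperatures) -> interpolate to FEA nodes -> structural update per step.
import numpy as np

cfd_x = np.linspace(0.0, 3.0, 7)          # coarse CFD boundary points (m)
fea_x = np.linspace(0.0, 3.0, 31)         # finer FEA surface nodes (m)
alpha = 1.2e-5                            # assumed thermal expansion (1/K)

for t in np.arange(0.0, 60.0, 15.0):      # four coupling steps
    cfd_temp = 293.0 + 10.0 * t * np.exp(-((cfd_x - 1.5) ** 2))  # fake fire plume
    fea_temp = np.interp(fea_x, cfd_x, cfd_temp)  # map CFD field to FEA nodes
    strain = alpha * (fea_temp - 293.0)           # stand-in structural response
    print(f"t={t:4.0f}s  peak T={fea_temp.max():6.1f} K  "
          f"peak strain={strain.max():.2e}")
```

A two-way scheme would additionally feed the deformed geometry back to the CFD side before the next step.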

Keywords: fire engineering, numerical coupling, sandwich panels, thermo fluids

Procedia PDF Downloads 73
8417 Workplace Development Programmes for Small and Medium-Sized Enterprises in Europe and Singapore: A Conceptual Study

Authors: Zhan Jie How

Abstract:

With the heightened awareness of workplace learning and its impact on improving organizational performance and developing employee competence, governments and corporations around the world are forced to intensify their cooperation to establish national workplace development programmes to guide these corporations in fostering engaging and collaborative workplace learning cultures. This conceptual paper aims to conduct a comparative study of existing workplace development programmes for small and medium-sized enterprises (SMEs) in Europe and Singapore, focusing primarily on the Swedish Production Leap, Finnish TEKES Liideri Programme, and Singapore SkillsFuture SME Mentors Programme. The study carries out a systematic review of the three workplace development programmes to examine the roles of external mentors or coaches in influencing the design and implementation of workplace learning strategies and practices in SMEs. Organizational, personal and external factors that promote or inhibit effective workplace mentorship are also scrutinized, culminating in a critical comparison and evaluation of the strengths and weaknesses of the aforementioned programmes. Based on the findings from the review and analyses, a heuristic conceptual framework is developed to illustrate the complex interrelationships among external workplace development programmes, internal learning and development initiatives instituted by the organization’s higher management, and employees' continuous learning activities at the workplace. The framework also includes a set of guiding principles that can be used as the basis for internal mediation between the competing perspectives of mentors and mentees (employers and employees of the organization) regarding workplace learning conditions, practices and their intended impact on the organization. The conceptual study provides a theoretical blueprint for future empirical research on organizational workplace learning and the impact of government-initiated workplace development programmes.

Keywords: employee competence, mentorship, organizational performance, workplace development programme, workplace learning culture

Procedia PDF Downloads 128
8416 Hydrogen Storage Systems for Enhanced Grid Balancing Services in Wind Energy Conversion Systems

Authors: Nezmin Kayedpour, Arash E. Samani, Siavash Asiaban, Jeroen M. De Kooning, Lieven Vandevelde, Guillaume Crevecoeur

Abstract:

The growing adoption of renewable energy sources, such as wind power, in electricity generation is a significant step towards a sustainable and decarbonized future. However, the inherent intermittency and uncertainty of wind resources pose challenges to the reliable and stable operation of power grids. To address this, hydrogen storage systems have emerged as a promising and versatile technology to support grid balancing services in wind energy conversion systems. In this study, we propose a supplementary control design that enhances the performance of the hydrogen storage system by integrating wind turbine (WT) pitch and torque control systems. These control strategies aim to optimize the hydrogen production process, ensuring efficient utilization of wind energy while complying with grid requirements. The wind turbine pitch control system plays a crucial role in managing the turbine's aerodynamic performance. By adjusting the blade pitch angle, the turbine's rotational speed and power output can be regulated. Our proposed control design dynamically coordinates the pitch angle to match the wind turbine's power output with the optimal hydrogen production rate. This ensures that the electrolyzer receives a steady and optimal power supply, avoiding unnecessary strain on the system during high wind speeds and maximizing hydrogen production during low wind speeds. Moreover, the wind turbine torque control system is incorporated to facilitate efficient operation at varying wind speeds. The torque control system optimizes the energy capture from the wind while limiting mechanical stress on the turbine components. By harmonizing the torque control with hydrogen production requirements, the system maintains stable wind turbine operation, thereby enhancing the overall energy-to-hydrogen conversion efficiency. To enable grid-friendly operation, we introduce a cascaded controller that regulates the electrolyzer's electrical power-current in accordance with grid requirements. This controller ensures that the hydrogen production rate can be dynamically adjusted based on real-time grid demands, supporting grid balancing services effectively. By maintaining a close relationship between the wind turbine's power output and the electrolyzer's current, the hydrogen storage system can respond rapidly to grid fluctuations and contribute to enhanced grid stability. In this paper, we present a comprehensive analysis of the proposed supplementary control design's impact on the overall performance of the hydrogen storage system in wind energy conversion systems. Through detailed simulations and case studies, we assess the system's ability to provide grid balancing services, maximize wind energy utilization, and reduce greenhouse gas emissions.
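
A toy cascade in the spirit described: an outer loop converts a grid power setpoint into an electrolyzer current reference, and an inner PI loop tracks that current through a first-order electrolyzer model. The gains, time constant, and power-to-current slope are invented for illustration:

```python
# Cascaded control sketch: power setpoint -> current reference -> PI loop
# driving a first-order electrolyzer current model.
import numpy as np

dt, T = 0.01, 10.0
kp, ki = 2.0, 5.0            # inner PI gains (assumed)
tau = 0.5                    # electrolyzer current time constant (s, assumed)
kW_per_A = 0.8               # assumed power-to-current slope

i_meas, integ = 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    p_set = 40.0 if t < 5 else 24.0        # grid-demanded power (kW) steps down
    i_ref = p_set / kW_per_A               # outer loop: power -> current ref
    err = i_ref - i_meas                   # inner PI loop on current
    integ += err * dt
    u = kp * err + ki * integ
    i_meas += dt * (u - i_meas) / tau      # first-order plant response
    if step % 250 == 0:
        print(f"t={t:4.1f}s  i_ref={i_ref:5.1f} A  i={i_meas:5.1f} A")
```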

Keywords: active power control, electrolyzer, grid balancing services, wind energy conversion systems

Procedia PDF Downloads 70
8415 Computer-Based Model for Design Selection of Lightning Arrester for 132/33kV Substation

Authors: Uma U. Uma, Uzoechi Laz

Abstract:

Protection of equipment insulation against lightning overvoltages and the selection of a lightning arrester that will discharge at a lower voltage level than the voltage required to break down the electrical equipment insulation are examined. The objectives of this paper are to design a computer-based model, using standard equations, for the selection of an appropriate lightning arrester with the lowest rated surge arrester that will provide adequate protection of equipment insulation and equally have a satisfactory service life when connected to a specified line voltage in a power system network. The effectiveness or non-effectiveness of the substation earthing system determines arrester properties. A MATLAB program with a GUI (graphical user interface) as its subprogram is used in developing the model for the determination of required parameters such as voltage rating, impulse spark-over voltage, power-frequency spark-over voltage, discharge current, current rating, and protection level of a lightning arrester for a specified voltage level of a particular line.

Keywords: lightning arrester, GUIs, MatLab program, computer based model

Procedia PDF Downloads 404
8414 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the model. Recently, solutions of complicated mathematical problems with statistical methods based on the randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission and absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study. They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The history of some randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was considered to be the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and with the PMC model using the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the ANN method resulted in greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help in further reducing the computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the concerned environment can be fully represented in the ANN model. Better results can be achieved in this currently unexplored domain.
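
The pseudo-random versus quasi-random comparison at the heart of the method can be reproduced on a toy integral: estimating the mean of a smooth one-dimensional stand-in for an emission spectrum with uniform sampling and with a scrambled Sobol sequence (scipy.stats.qmc). The integrand is illustrative, not a real gas spectrum:

```python
# Plain pseudo-random MC vs. scrambled Sobol QMC on a toy 1-D integral.
import numpy as np
from scipy.integrate import quad
from scipy.stats import qmc

f = lambda x: np.exp(-3 * x) * np.sin(6 * np.pi * x) ** 2   # toy "spectrum"
reference, _ = quad(f, 0, 1)               # accurate reference value

n = 2 ** 10
rng = np.random.default_rng(0)
mc_est = f(rng.random(n)).mean()           # uniform pseudo-random estimate

sobol = qmc.Sobol(d=1, scramble=True, seed=0)
qmc_est = f(sobol.random_base2(m=10).ravel()).mean()   # 2^10 Sobol points

print(f"reference {reference:.6f}  "
      f"MC error {abs(mc_est - reference):.2e}  "
      f"QMC error {abs(qmc_est - reference):.2e}")
```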

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 209
8413 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions

Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier

Abstract:

Ice slurries are considered promising phase-changing secondary fluids for air-conditioning, packaging, and industrial cooling processes. An experimental study has been carried out here to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase. The latter is composed of a mixture of liquid water and an additive, here either (1) propylene-glycol (PG) or (2) ethylene-glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14%, and 24% of both additives are investigated, with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 rheometer with a vane-concentric cylinder geometry with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of the ice slurries. The present results are finally validated against an experimental database found in the literature and against the predictions of an Artificial Neural Network model.
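
Fitting the Herschel-Bulkley model tau = tau0 + K * gamma_dot^n to shear-stress data is the key step described; a minimal sketch with scipy follows, where the synthetic points stand in for the rheometer readings:

```python
# Fit the Herschel-Bulkley model to shear-rate / shear-stress data and
# read the flow index n (n < 1: shear-thinning, n > 1: shear-thickening).
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot ** n

gamma_dot = np.linspace(1, 100, 25)                 # shear rate (1/s)
rng = np.random.default_rng(2)
tau = herschel_bulkley(gamma_dot, 5.0, 1.2, 0.6) + rng.normal(0, 0.3, 25)

(tau0, K, n), _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=(1, 1, 1))
print(f"tau0={tau0:.2f} Pa  K={K:.2f} Pa.s^n  n={n:.2f}  "
      f"({'shear-thinning' if n < 1 else 'shear-thickening'})")
```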

Keywords: ice slurry, propylene-glycol, ethylene-glycol, rheology

Procedia PDF Downloads 248
8412 Indigenous Childhood: Upbringing and Schooling in Two Indigenous Communities from Argentina (Qom and Mbyá)

Authors: Ana Carolina Hecht, Noelia Enriz, Mariana Garcia Palacios

Abstract:

South American anthropology has recently focused on research with children in different contexts. In our research with children from indigenous communities in the lowlands and highlands of South America (Qom and Mbyá), we especially considered the social categories that define the different ways of being a boy or a girl. In this way, we built an approach that disrupts monolithic models of childhood. The aim of this paper is to tackle the first stage of life, demarcated by its nominal references and by the upbringing and formative experiences in which children participate. We will thus focus on the network of social relations in the period of childhood, with a special focus on language development, religion, schooling, and games. The crossing of our different thematic interests allows us to consider the complexity of the knowledge and skills that come into play during the development of children. Methodologically, this text is based on an ethnographic approach, with frequent visits and periods of cohabitation over more than a decade with Mbyá and Qom people who live in indigenous communities in the provinces of Chaco, Buenos Aires, and Misiones in Argentina. We conducted participant observation and interviews with children and their families, with the objective of including children's voices in our research about the whole community.

Keywords: childhood, indigenous people, schooling, upbringing

Procedia PDF Downloads 326