Search results for: logistics network optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7907

1307 Implementation of Conceptual Real-Time Embedded Functional Design via Drive-By-Wire ECU Development

Authors: Ananchai Ukaew, Choopong Chauypen

Abstract:

Design concepts for a real-time embedded system can be realized early by introducing novel design approaches. In this work, a model-based design approach and in-the-loop testing were employed in the conceptual and preliminary phase to formulate design requirements and perform quick real-time verification. The design and analysis methodology includes simulation analysis, model-based testing, and in-the-loop testing. The design of a conceptual drive-by-wire (DBW) algorithm for an electronic control unit (ECU) is presented to demonstrate the conceptual design process, analysis, and functionality evaluation. The DBW ECU functions can be implemented in the vehicle system to improve the drivability of electric vehicle (EV) conversions. Within a new development process, however, conceptual ECU functions and parameters need to be evaluated. A testing system was therefore employed to support the evaluation of conceptual DBW ECU functions. In the current setup, the system consisted of actual DBW ECU hardware, electric vehicle models, and the controller area network (CAN) protocol. The vehicle models and the CAN bus interface were both implemented as real-time applications, and the ECU and CAN protocol functionality were verified against the design requirements. The proposed system could enable rapid real-time analysis of design parameters during conceptual system or software algorithm development.
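As an illustration of the kind of conceptual DBW function that such in-the-loop testing would exercise, the sketch below maps accelerator pedal position to a torque request with a slew-rate limit per ECU tick. The progressive (squared) pedal map, the torque limits, and the function names are illustrative assumptions, not the authors' calibration.

```python
def torque_request(pedal_pct, max_torque_nm=150.0):
    """Map accelerator pedal position (0-100 %) to a torque request (Nm).

    A progressive (squared) map is a common illustrative choice; the
    actual DBW calibration evaluated in the paper is not published here.
    """
    pedal_pct = min(max(pedal_pct, 0.0), 100.0)   # clamp raw sensor input
    return max_torque_nm * (pedal_pct / 100.0) ** 2


def rate_limit(prev_nm, target_nm, max_step_nm=10.0):
    """Limit the torque slew per ECU tick for drivability."""
    step = max(-max_step_nm, min(max_step_nm, target_nm - prev_nm))
    return prev_nm + step
```

In a hardware-in-the-loop setup, `rate_limit` would be called once per ECU tick with the latest `torque_request` output before the command is placed on the CAN bus.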

Keywords: drive-by-wire ECU, in-the-loop testing, model-based design, real-time embedded system

Procedia PDF Downloads 350
1306 In-Farm Wood Gasification Energy Micro-Generation System in Brazil: A Monte Carlo Viability Simulation

Authors: Erich Gomes Schaitza, Antônio Francisco Savi, Glaucia Aparecida Prates

Abstract:

The penetration of renewable energy into the electricity supply in Brazil is among the highest in the world. Centralized hydroelectric generation is the main source of energy, followed by biomass and wind. Surprisingly, mini- and micro-generation are negligible, with fewer than 2,000 connections to the national grid. In 2015, a new regulatory framework was put in place to change this situation. In the agricultural sector, the framework was complemented by low-interest loans for in-farm renewable generation. Brazil proposed to more than double its area of planted forests as part of its Intended Nationally Determined Contribution (INDC) to the U.N. Framework Convention on Climate Change (UNFCCC). This is an ambitious target that will be achieved only if forests are attractive to farmers. This paper therefore analyses whether planting forests for in-farm energy generation with a woodchip gasifier is economically viable for micro-generation under the new framework, and whether such plantings could be an economic driver for forest plantation. First, a static case was analyzed with data from Eucalyptus plantations on five farms. Then, a broader analysis was developed using the Monte Carlo technique. Planting short-rotation forests to generate energy could be a viable alternative, and the low-interest loans contribute to that. Barriers remain, such as the absence of a mature market for small-scale equipment and of a reference network of good practices and examples.
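A Monte Carlo viability analysis of this kind can be sketched as follows: draw uncertain inputs, compute the net present value (NPV) of the investment, and report the share of viable runs. All figures below (investment range, annual margin, 20-year horizon, subsidised 5.5 % discount rate) are illustrative assumptions, not the paper's farm data.

```python
import random


def npv(rate, cashflows):
    """Net present value of a cashflow series, t = 0 being today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))


def simulate_viability(n=10_000, seed=42):
    """Monte Carlo share of simulation runs with a positive NPV."""
    rng = random.Random(seed)
    viable = 0
    for _ in range(n):
        invest = rng.uniform(80_000, 120_000)   # gasifier + grid hookup (BRL)
        annual = rng.uniform(6_000, 12_000)     # energy revenue minus wood cost
        flows = [-invest] + [annual] * 20       # 20-year project horizon
        if npv(0.055, flows) > 0:               # subsidised loan rate
            viable += 1
    return viable / n
```

The returned fraction plays the role of the viability probability the paper estimates; in practice each input distribution would be fitted to the farm survey data.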

Keywords: biomass, distributed generation, small-scale, Monte Carlo

Procedia PDF Downloads 286
1305 Deep Learning Approach for Chronic Kidney Disease Complications

Authors: Mario Isaza-Ruget, Claudia C. Colmenares-Mejia, Nancy Yomayusa, Camilo A. González, Andres Cely, Jossie Murcia

Abstract:

Quantifying the risks of complications arising from chronic kidney disease (CKD) through accurate survival models can help with patient management. A retrospective cohort study included patients diagnosed with CKD in a primary care program and followed up between 2013 and 2018. Time-dependent and static covariates covering demographic, clinical, and laboratory factors were included. Deep learning (DL) survival analyses were developed for three CKD outcomes: CKD stage progression, a >25% decrease in estimated glomerular filtration rate (eGFR), and renal replacement therapy (RRT). Models were evaluated and compared with random survival forest (RSF) using the concordance index (C-index) metric. 2,143 patients were included. Two models were developed for each outcome. The deep neural network (DNN) model reached C-index = 0.9867 for CKD stage progression, C-index = 0.9905 for the reduction in eGFR, and C-index = 0.9867 for RRT. The RSF model reached C-index = 0.6650 for CKD stage progression, C-index = 0.6759 for decreased eGFR, and C-index = 0.8926 for RRT. DNN models applied in a survival analysis context, with longitudinal covariates considered at the start of follow-up, can predict renal stage progression, a significant decrease in eGFR, and RRT. The success of these survival models lies in the appropriate definition of survival times and the analysis of covariates, especially those that vary over time.
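The C-index used to compare the models is Harrell's concordance index: the fraction of comparable patient pairs in which the model assigns the higher risk score to the patient who fails earlier. A minimal sketch (no tie handling for censored times beyond the standard comparability rule):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index.

    A pair (i, j) is comparable when i's event is observed and occurs
    before j's time; it is concordant when i also has the higher risk
    score. Ties in risk score count as half-concordant.
    """
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:   # comparable pair
                den += 1
                if risk_scores[i] > risk_scores[j]:
                    num += 1
                elif risk_scores[i] == risk_scores[j]:
                    num += 0.5
    return num / den
```

A C-index of 1.0 means perfect ranking, 0.5 is no better than chance, which is the scale on which the DNN (≈0.99) and RSF (≈0.67-0.89) results are compared.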

Keywords: artificial intelligence, chronic kidney disease, deep neural networks, survival analysis

Procedia PDF Downloads 134
1304 Framework for Enhancing Water Literacy and Sustainable Management in Southwest Nova Scotia

Authors: Etienne Mfoumou, Mo Shamma, Martin Tango, Michael Locke

Abstract:

Water literacy is essential for addressing emerging water management challenges in southwest Nova Scotia (SWNS), where growing concerns over water scarcity and sustainability have highlighted the need for improved educational frameworks. Current approaches often fail to fully represent the complexity of water systems, focusing narrowly on the water cycle while neglecting critical aspects such as groundwater infiltration and the interconnectedness of surface and subsurface water systems. To address these gaps, this paper proposes a comprehensive framework for water literacy that integrates the physical dimensions of water systems with key aspects of understanding, including processes, energy, scale, and human dependency. Moreover, a suggested tool to enhance this framework is a real-time hydrometric data map supported by a network of water level monitoring devices deployed across the province. These devices, particularly for monitoring dug wells, would provide critical data on groundwater levels and trends, offering stakeholders actionable insights into water availability and sustainability. This real-time data would facilitate deeper understanding and engagement with local water issues, complementing the educational framework and empowering stakeholders to make informed decisions. By integrating this tool, the proposed framework offers a practical, interdisciplinary approach to improving water literacy and promoting sustainable water management in SWNS.
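The real-time hydrometric map proposed above implies simple analytics on the monitoring network's readings. A minimal sketch (hypothetical well IDs and decline threshold) flags dug wells whose level trend is falling:

```python
def trend_slope(levels):
    """Least-squares slope of equally spaced water-level readings (m/day)."""
    n = len(levels)
    xbar = (n - 1) / 2
    ybar = sum(levels) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(levels))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den


def flag_declining(wells, threshold=-0.01):
    """Return IDs of wells whose level drops faster than `threshold` m/day.

    `wells` maps a well ID to its recent daily readings; both the IDs and
    the threshold here are illustrative, not from a deployed system.
    """
    return [wid for wid, levels in wells.items()
            if trend_slope(levels) < threshold]
```

Flagged wells would then appear as alerts on the real-time map, giving stakeholders the actionable insight into water availability that the framework calls for.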

Keywords: water education, water literacy, water management, water systems, Southwest Nova Scotia

Procedia PDF Downloads 32
1303 Aluminum Based Hexaferrite and Reduced Graphene Oxide as a Suitable Microwave Absorber for Microwave Application

Authors: Sanghamitra Acharya, Suwarna Datar

Abstract:

The extensive use of digital and smart communication creates prolonged exposure to unwanted electromagnetic (EM) radiation. This harmful radiation not only causes malfunctions in nearby electronic gadgets but also severely affects human beings, so a suitable microwave-absorbing material (MAM) is urgently needed in the fields of stealth and radar technology. The aluminum-based hexaferrite was first prepared by the sol-gel technique, its carbon-derived composite was prepared by a simple one-pot chemical reduction method, and composite films with poly(vinylidene fluoride) (PVDF) were finally prepared by a simple gel-casting technique. The present work demonstrates that the aluminum-based hexaferrite phase conjugated with graphene in a PVDF matrix is a suitable candidate in both the commercially important X and Ku bands. The structural and morphological nature was characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), and Raman spectroscopy, which confirm that 30-40 nm particles are well decorated over the graphene sheet. Magnetic force microscopy (MFM) and conducting force microscopy (CFM) studies further confirm the magnetic and conducting nature of the composite. Finally, the shielding effectiveness (SE) of the composite film was studied using a vector network analyzer (VNA) in the X- and Ku-band frequency ranges and found to be more than 30 dB and 40 dB, respectively. The as-prepared composite films are excellent microwave absorbers.

Keywords: carbon nanocomposite, microwave absorbing material, electromagnetic shielding, hexaferrite

Procedia PDF Downloads 178
1302 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is its lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper presents experimental and comparative results with currently popular GAN-based methods, using realistic synthesis of fingerprints in training to increase performance. Among various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained on a dataset containing no generated fake images, and the accuracy and mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained on the dataset with generated fake images, together with the accuracy and mean average error rate, was recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches toward improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
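The augmentation step described above, combining real live/spoof images with GAN-generated ones while preserving labels, can be sketched as a simple dataset builder. The `fake_fraction` knob (an assumption of this sketch, not a parameter from the paper) reflects the paper's observation that the amount of synthetic data matters when few real samples are available.

```python
def build_training_set(real_live, real_spoof, fake_live, fake_spoof,
                       fake_fraction=1.0):
    """Combine real samples with GAN-generated ones, keeping labels.

    Label 1 = live fingerprint, 0 = spoof. fake_fraction < 1.0
    subsamples the synthetic images before mixing them in.
    """
    k_live = int(len(fake_live) * fake_fraction)
    k_spoof = int(len(fake_spoof) * fake_fraction)
    samples = (real_live + fake_live[:k_live]
               + real_spoof + fake_spoof[:k_spoof])
    labels = ([1] * (len(real_live) + k_live)
              + [0] * (len(real_spoof) + k_spoof))
    return samples, labels
```

The resulting (samples, labels) pair would then feed the CNN training runs whose accuracy and mean average error rate the paper records.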

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 184
1301 The Misuse of Social Media to Exploit "Generation Y": The Tactics of IS

Authors: Ali Riza Perçin, Eser Bingül

Abstract:

Internet technologies have created opportunities for people to share their ideologies, thoughts, and products. This virtual world, named social media, has made it possible to gather individual users and people from the world's remote locations and to establish interaction between them. However, to an increasing degree, terrorist organizations today use the internet, and most notably social-network media, to create the effects they desire through a series of online activities. These activities, designed to support their operations, include information collection (intelligence), target selection, propaganda, fundraising, and recruitment, to name a few. Social media has become the most important recruitment tool, especially among disenfranchised youth in different regions of the world, including the West, used to mobilize support and recruit "foreign fighters." The recruits obtain a status that is not accessible in their own society and prefer the lifestyle offered by the terrorist organizations to their current life. Like other terrorist groups, the terrorist organization Islamic State (IS) in Iraq and Syria has for a while now employed a social-media strategy to advance its strategic objectives. At the moment, however, IS seems to be more successful in its online activities than other similar organizations. IS uses social media strategically as part of its armed activities and for the sustainability of its military presence in Syria and Iraq. In this context, "Generation Y", which occupies a critical position and can undertake an active role, is examined. Additionally, the characteristics of "Generation Y" are set out, and the duties of families and society are stated as well.

Keywords: social media, "Generation Y", terrorist organization, Islamic State (IS)

Procedia PDF Downloads 426
1300 Caregiver Training Results in Accurate Reporting of Stool Frequency

Authors: Matthew Heidman, Susan Dallabrida, Analice Costa

Abstract:

Background: The accuracy of caregiver-reported outcomes is essential for the success of infant growth and tolerability studies. Crying/fussiness, stool consistency, and other gastrointestinal characteristics are important tolerability parameters, and inter-caregiver reporting can be highly subjective and vary greatly within a study, compromising data. This study sought to elucidate how caregiver-reported questions about stool frequency are answered before and after a short training, and how training impacts caregivers' understanding and the way they answer the question. Methods: A digital survey was issued for 90 days in the US (n=121) and 30 days in Mexico (n=88), targeting respondents with children ≤4 years of age. Respondents were asked a question in two formats, first without a line of training text and then with it. The question set was as follows: "If your baby had stool in his/her diaper and you changed the diaper and 10 min later there was more stool in the diaper, how many stools would you report this as?" followed by the same question beginning with "If you were given the instruction that IF there are at least 5 minutes in between stools, then it counts as two (2) stools…". Four response items were provided for both questions: 1) 2 stools, 2) 1 stool, 3) it depends on how much stool was in the first versus the second diaper, 4) there is not enough information to be able to answer the question. Response frequencies for the two questions were compared. Results: Responses to the question without training varied in the US, with 69% selecting "2 stools", 11% selecting "1 stool", 14% selecting "it depends on how much stool was in the first versus the second diaper", and 7% selecting "there is not enough information to be able to answer the question"; in Mexico, respondents selected these options at 9%, 78%, 13%, and 0%, respectively.
After training, responses consolidated: in the US, 85% of respondents selected "2 stools", an increase in those selecting the correct answer, and in Mexico, 84% of respondents likewise selected the correct response, an increase over the pre-training rate. Conclusions: Caregiver-reported outcomes are critical for infant growth and tolerability studies; however, they can be highly subjective, with high variability of responses in the absence of guidance. Training is critical to standardize caregivers' perspectives on how to answer questions accurately and thus provide an accurate dataset.

Keywords: infant nutrition, clinical trial optimization, stool reporting, decentralized clinical trials

Procedia PDF Downloads 96
1299 Preparation of Indium Tin Oxide Nanoparticle-Modified 3-Aminopropyltrimethoxysilane-Functionalized Indium Tin Oxide Electrode for Electrochemical Sulfide Detection

Authors: Md. Abdul Aziz

Abstract:

Sulfide ion is water soluble, highly corrosive, toxic, and harmful to human beings. As a result, knowing the exact concentration of sulfide in water is very important. However, the existing detection and quantification methods have several shortcomings, such as high cost, low sensitivity, and massive instrumentation. Consequently, the development of a novel sulfide sensor is relevant. Electrochemical methods have gained enormous popularity due to vast improvements in technique and instrumentation, portability, low cost, rapid analysis, and simplicity of design. Successful field application of electrochemical devices still requires vast improvement, which depends on the physical, chemical, and electrochemical aspects of the working electrode. Working electrodes made of bulk gold (Au) and platinum (Pt) are quite common, being very robust and endowed with good electrocatalytic properties. High cost and electrode poisoning, however, have so far hindered their practical application in many industries. To overcome these obstacles, we developed a sulfide sensor based on an indium tin oxide nanoparticle (ITONP)-modified ITO electrode. To prepare ITONP-modified ITO, various methods were tested. Drop-drying of ITONPs (aq.) on aminopropyltrimethoxysilane-functionalized ITO (APTMS/ITO) was found to be the best method on the basis of voltammetric analysis of the sulfide ion. ITONP-modified APTMS/ITO (ITONP/APTMS/ITO) yielded much better electrocatalytic properties toward sulfide electro-oxidation than did bare or APTMS/ITO electrodes. The ITONPs and ITONP-modified ITO were also characterized using transmission electron microscopy and field emission scanning electron microscopy, respectively. Optimization of the type of inert electrolyte and pH yielded an ITONP/APTMS/ITO detector whose amperometrically and chronocoulometrically determined limits of detection for sulfide in aqueous solution were 3.0 µM and 0.90 µM, respectively.
ITONP/APTMS/ITO electrodes displayed reproducible performance, were highly stable, and were not susceptible to interference by common contaminants. Thus, the developed electrode can be considered a promising tool for sensing sulfide.

Keywords: amperometry, chronocoulometry, electrocatalytic properties, ITO-nanoparticle-modified ITO, sulfide sensor

Procedia PDF Downloads 131
1298 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots

Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov

Abstract:

This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection of underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming, and inefficient. As such, solutions tend toward a reactive approach, repairing faults as they occur, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various vehicle routing problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP), and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in the pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
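The ideal single-robot case of the k-CPP, covering every pipe (edge) exactly once, is an Eulerian circuit, which Hierholzer's algorithm finds when every junction has even degree. A minimal sketch (real pipe networks generally need edge duplication first, and the paper's k-robot partitioning is not shown):

```python
from collections import defaultdict


def euler_tour(edges, start):
    """Hierholzer's algorithm: closed walk using every edge exactly once.

    Requires a connected undirected graph whose vertices all have even
    degree; `edges` is a list of (u, v) pairs.
    """
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    stack, tour = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)      # consume the edge in both directions
            stack.append(w)
        else:
            tour.append(stack.pop())
    return tour[::-1]
```

For k robots, the network's edges would be partitioned into k balanced closed walks instead of one, which is where the VRP-style adaptations described above come in.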

Keywords: autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem

Procedia PDF Downloads 166
1297 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a lung disease that creates congestion in the chest, and severe congestion can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or Covid-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic computer-aided diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a hybrid deep learning algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D convolutional neural network (CNN) model to extract automatic features from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the min-max technique. The second step of the proposed hybrid model covers training and classification using different classifiers. Simulation outcomes on a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
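The hybrid model's two glue steps, min-max normalization of the CNN feature vector and a downstream classifier, can be sketched as follows. The nearest-centroid classifier and the class labels are illustrative stand-ins; the paper's actual second-stage classifiers are not specified here.

```python
def min_max(features):
    """Min-max normalization of a 1D feature vector to [0, 1]."""
    lo, hi = min(features), max(features)
    if hi == lo:
        return [0.0] * len(features)
    return [(x - lo) / (hi - lo) for x in features]


def nearest_centroid(x, centroids):
    """Stand-in second-stage classifier: label of the closest class
    centroid in the normalized feature space."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(x, centroids[lbl]))
```

In the full pipeline, `min_max` would be applied to each CNN feature vector before it is handed to whichever classifier is being trained and compared.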

Keywords: CT scan, Covid-19, deep learning, image processing, lung disease classification

Procedia PDF Downloads 156
1296 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design

Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian

Abstract:

Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead plays a toxic role in the human body and may cause serious problems even at low concentrations, since it may have several adverse effects on humans. Therefore, the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasound-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce was established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated ultrasonically for 5 min, the two phases were then separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters on the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using response surface methodology (RSM) with a central composite design (CCD). The optimal conditions were an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH=5. The calibration curve for the determination of lead by FAAS was linear over 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg/mL, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was 2.29%. The lead levels for pomegranate, zucchini, and lettuce were 2.88 μg/g, 1.54 μg/g, and 2.18 μg/g, respectively.
The method was therefore successfully applied to the analysis of lead content in different food samples by FAAS.
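The central composite design behind the RSM optimization combines factorial corners, axial (star) points, and centre replicates in coded units. A sketch for the three factors studied here (ionic liquid volume, sonication time, pH), using the standard rotatable axial distance as an assumption since the paper's coding is not given:

```python
from itertools import product


def central_composite(k=3, alpha=None, n_center=1):
    """Coded design points of a central composite design for k factors:
    2**k factorial corners, 2*k axial points at +/-alpha, and
    replicated centre points. alpha defaults to the rotatable value."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable CCD: (2^k)^(1/4)
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    centers = [[0.0] * k] * n_center
    return corners + axial + centers
```

Each coded point would be mapped back to physical levels (e.g. ±1 spanning the tested range of ionic liquid volume) before the extraction runs are performed and the response surface fitted.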

Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry

Procedia PDF Downloads 283
1295 A Program Evaluation of TALMA Full-Year Fellowship Teacher Preparation

Authors: Emilee M. Cruz

Abstract:

Teachers take part in short-term teaching fellowships abroad, and their preparation before, during, and after the experience is critical to their feelings of success in the international classroom. A program evaluation of the teacher preparation within TALMA: The Israel Program for Excellence in English (TALMA), a full-year teaching fellowship, was conducted. A questionnaire was developed that examined the professional development, deliberate reflection, and cultural and language immersion offered before, during, and after the short-term experience. The evaluation also surveyed teachers' feelings of preparedness for the Israeli classroom and any recommendations they had for future teacher preparation within the fellowship program. The review suggests that the TALMA program include integrated professional learning communities between fellows and Israeli co-teachers, more opportunities for immersive Hebrew language learning, a broader professional network with Israelis, opportunities for guided discussion with the TALMA community, and continued participation in TALMA events and learning following the full-year fellowship. Similar short-term international programs should consider these findings in the design of their participant preparation programs. The review also offers direction for future program evaluations of short-term participant preparation, including the need for frequent response-item updates to match current offerings and for evaluation of participants' feelings of preparedness before, during, and after the full-year fellowship.

Keywords: educational program evaluation, international teaching, short-term teaching, teacher beliefs, teaching fellowship, teacher preparation

Procedia PDF Downloads 182
1294 SIP Flooding Attacks Detection and Prevention Using Shannon, Renyi and Tsallis Entropy

Authors: Neda Seyyedi, Reza Berangi

Abstract:

Voice over IP (VOIP), also known as Internet telephony, is growing rapidly and now occupies a large part of the communications market. With the growth of any technology, the related security issues become particularly important. As this technology is adopted in different environments, with the numerous features it puts at our disposal, there is an increasing need to address its security threats. Being IP-based and playing a signaling role in VOIP networks, the Session Initiation Protocol (SIP) lets invaders exploit weaknesses of the protocol to disable VOIP service. One of the most important threats is the denial-of-service attack, a branch of which we discuss in this article as flooding attacks. These attacks waste server resources and prevent the server from delivering service to authorized users. Distributed denial-of-service attacks and low-rate attacks can mislead many attack detection mechanisms. In this paper, we introduce a mechanism that not only detects distributed denial-of-service and low-rate attacks but can also identify the attackers accurately. We detect and prevent flooding attacks in the SIP protocol using Shannon (FDP-S), Renyi (FDP-R), and Tsallis (FDP-T) entropy. We conducted an experiment to compare the detection percentage and false alarm rate when using the Shannon, Renyi, or Tsallis entropy as the measure of disorder. Implementation results show that, owing to the parametric nature of the Renyi and Tsallis entropies, changing the parameters yields different detection percentages and false alarm rates, making it possible to adjust the sensitivity of the detection mechanism.
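The three entropies can be computed from the per-source distribution of SIP requests in a time window; a flood from a few sources concentrates the distribution and drops the entropy below its baseline. A minimal sketch (the window size, baseline, and drop threshold are illustrative assumptions, not the paper's tuned values):

```python
import math
from collections import Counter

def probs(sources):
    """Empirical probability of each source in a traffic window."""
    total = len(sources)
    return [c / total for c in Counter(sources).values()]

def shannon(p):
    return -sum(pi * math.log2(pi) for pi in p)

def renyi(p, q=2.0):
    return math.log2(sum(pi ** q for pi in p)) / (1 - q)

def tsallis(p, q=2.0):
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

def flooding_suspected(window, baseline_entropy, drop=0.5):
    """Flag a window whose Shannon entropy falls `drop` below baseline."""
    return shannon(probs(window)) < baseline_entropy - drop
```

Swapping `shannon` for `renyi` or `tsallis` in the detector, and varying `q`, is what gives the parametric sensitivity adjustment the abstract describes.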

Keywords: VOIP networks, flooding attacks, entropy, computer networks

Procedia PDF Downloads 406
1293 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. This research addresses the fire safety of such people's buildings by means of probabilistic methods, modeling the egress of the target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan was chosen for the safety analysis, and a limit state function was developed according to a timeline evacuation model based on a two-zone smoke development model. An analytical computer model (B-Risk) was used to simulate smoke development. Since most of the parameters in the fire development model are uncertain, an appropriate probability distribution function was assigned to each variable of an indeterministic nature. To assess the safety and reliability of the at-risk groups, the fire safety index method was chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm was used to compute the beta index. A sensitivity analysis was performed to identify the most important and effective parameters for the fire safety of the at-risk group. The results showed that the area of openings and the distance to egress exits are the more important parameters in buildings, and that occupant safety improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities it is less important.
The type of disability has a great effect on the safety level of people living in the same home layout; people with visual impairment face a greater risk of being trapped than those with movement disabilities.
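The limit state behind the probability of failure is "required egress time exceeds available safe egress time". A Monte Carlo sketch of that probability and the corresponding beta index follows; both lognormal distributions are illustrative stand-ins for the smoke-model outputs, not the paper's calibrated inputs.

```python
import math
import random
from statistics import NormalDist


def failure_probability(n=50_000, seed=1):
    """Monte Carlo estimate of P(required egress time > available time)."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        aset = rng.lognormvariate(math.log(120), 0.25)  # available time, s
        rset = rng.lognormvariate(math.log(80), 0.35)   # required time, s
        if rset > aset:                                 # limit state violated
            fails += 1
    return fails / n


def beta_index(pf):
    """Reliability (safety) index: beta = -Phi^{-1}(pf)."""
    return -NormalDist().inv_cdf(pf)
```

In the paper the beta index is found with a harmony search optimizer over the limit state function rather than by direct sampling; the relation beta = -Phi^{-1}(pf) links the two views.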

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 103
1292 Internet of Things for Smart Dedicated Outdoor Air System in Buildings

Authors: Dararat Tongdee, Surapong Chirarattananon, Somchai Maneewan, Chantana Punlek

Abstract:

Recently, the Internet of Things (IoT) has become an important technology that connects devices to a network and gives people access to real-time communication. This technology is used to report, collect, and analyze big data for a given purpose. For a smart building, many IoT technologies enable management and building operators to improve occupant thermal comfort, indoor air quality, and building energy efficiency. In this research, we propose monitoring and controlling the performance of a smart dedicated outdoor air system (SDOAS) based on an IoT platform. The SDOAS was specifically designed with a desiccant unit and a thermoelectric module. The designed system monitors, reports, and controls indoor environmental factors such as temperature, humidity, and carbon dioxide (CO₂) level. The SDOAS was tested against the American Society of Heating, Refrigerating and Air-Conditioning Engineers standard (ASHRAE 62.2) and the indoor air quality standard. The system notifies the user through a Blynk notification when the status of the building is uncomfortable or tolerable limits are reached according to the set conditions. The user can then control the system via the Blynk application on a smartphone. The experimental results indicate that the temperature and humidity of the indoor fresh air in the comfort zone are approximately 26 degrees Celsius and 58%, respectively. Furthermore, the CO₂ level was kept below 1000 ppm, in line with the indoor air quality standard condition. Therefore, the proposed system works efficiently and is easy to use in buildings.
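The monitor-and-control logic described above can be sketched as a rule-based tick using the setpoints reported in the abstract (26 °C, 58 % RH, 1000 ppm CO₂); the actuator names are illustrative assumptions, not the SDOAS firmware's API.

```python
def sdoas_actions(temp_c, rh_pct, co2_ppm,
                  temp_set=26.0, rh_set=58.0, co2_limit=1000):
    """One control tick: return the actuator actions for the current
    sensor readings. Setpoints default to the abstract's reported values."""
    actions = []
    if co2_ppm >= co2_limit:
        actions.append("increase_outdoor_air")      # keep CO2 under 1000 ppm
    if temp_c > temp_set:
        actions.append("thermoelectric_cooling_on")
    if rh_pct > rh_set:
        actions.append("desiccant_wheel_on")
    if not actions:
        actions.append("hold")
    return actions
```

On the IoT platform, the same readings that drive these rules would also trigger the Blynk notification when a tolerable limit is exceeded.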

Keywords: internet of things, indoor air quality, smart dedicated outdoor air system, thermal comfort

Procedia PDF Downloads 199
1291 The Prevalence and Impact of Anxiety Among Medical Students in the MENA Region: A Systematic Review, Meta-Analysis, and Meta-Regression

Authors: Kawthar F. Albasri, Abdullah M. AlHudaithi, Dana B. AlTurairi, Abdullaziz S. AlQuraini, Adoub Y. AlDerazi, Reem A. Hubail, Haitham A. Jahrami

Abstract:

Several studies have found that medical students have a significant prevalence of anxiety. The purpose of this review paper is to carefully evaluate the current research on anxiety among medical students in the MENA region and, as a result, estimate the prevalence of these disturbances. Multiple databases, including CINAHL (Cumulative Index to Nursing and Allied Health Literature), the Cochrane Library, Embase, MEDLINE (Medical Literature Analysis and Retrieval System Online), PubMed, PsycINFO (Psychological Information Database), Scopus, Web of Science, UpToDate, ClinicalTrials.gov, the WHO Global Health Library, EbscoHost, ProQuest, JAMA Network, and ScienceDirect, were searched. The reference lists of the retrieved articles were rigorously searched and rated for quality. A random effects meta-analysis was performed to compute estimates. The current meta-analysis revealed an alarming estimated pooled prevalence of anxiety (K = 46, N = 27023) of 52.5% [95% CI: 43.3%–61.6%]. A total of 62.0% [95% CI 42.9%; 78.0%] of the students (K = 18, N = 16466) suffered from anxiety during the COVID-19 pandemic, while 52.5% [95% CI 43.3%; 61.6%] had anxiety before COVID-19. Based on the GAD-7 measure, a total of 55.7% [95% CI 30.5%; 78.3%] of the students (K = 10, N = 5830) had anxiety, and a total of 54.7% of the students (K = 18, N = 12154) [95% CI 42.8%; 66.0%] had anxiety using the DASS-21 or DASS-42 measure. Anxiety is thus a common and genuine problem among medical students. Further research should be conducted post-COVID-19, with a focus on anxiety prevention and intervention initiatives for medical students.
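The pooled estimates above come from a random effects meta-analysis. As a minimal sketch of the underlying idea, the function below implements DerSimonian-Laird random-effects pooling of raw proportions; it is illustrative only (the inputs are assumed to be observed prevalence and sample size per study), whereas a full analysis like the one in the abstract would typically pool transformed proportions and add moderators for meta-regression.

```python
def pool_prevalence(studies):
    """DerSimonian-Laird random-effects pooling of proportions.
    studies: list of (p, n) pairs (observed prevalence, sample size)."""
    k = len(studies)
    # within-study variance of a raw proportion
    v = [p * (1 - p) / n for p, n in studies]
    w = [1 / vi for vi in v]
    sw = sum(w)
    p_fixed = sum(wi * p for wi, (p, _) in zip(w, studies)) / sw
    # heterogeneity statistic Q and between-study variance tau^2
    q = sum(wi * (p - p_fixed) ** 2 for wi, (p, _) in zip(w, studies))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    # re-weight with tau^2 added to each study's variance
    w_star = [1 / (vi + tau2) for vi in v]
    return sum(wi * p for wi, (p, _) in zip(w_star, studies)) / sum(w_star)
```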

Keywords: anxiety, medical students, MENA, meta-analysis, prevalence

Procedia PDF Downloads 73
1290 Optimized Techniques for Reducing the Reactive Power Generation in Offshore Wind Farms in India

Authors: Pardhasaradhi Gudla, Imanual A.

Abstract:

Electrical power generated offshore must be transmitted to the onshore grid using subsea cables. Long subsea cables produce reactive power, which must be compensated in order to limit transmission losses, optimize the transmission capacity, and keep the grid voltage within safe operational limits. The installation cost of a wind farm includes the structural design cost and the electrical system cost. India has targeted 175 GW of renewable energy capacity by 2022, including offshore wind power generation. Because the sea depth off the Indian coast is greater, installation costs will be higher than in European countries, where offshore wind energy is already generated successfully, so innovations are required to reduce offshore wind project costs. This paper presents optimized techniques to reduce the installation cost of an offshore wind farm with respect to the electrical transmission system. It provides techniques for increasing the current-carrying capacity of the subsea cable by decreasing its reactive power generation (capacitance effect). Many methods for reactive power compensation in wind power plants are already in use; the main driver of the need for compensation is the capacitance of the subsea cable. If the cable capacitance is diminished, the required reactive power compensation is reduced or optimized by avoiding an intermediate substation at the midpoint of the transmission network.
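The capacitance effect discussed above can be quantified with the standard charging-power relation Q = 2πf·C·V², where C is the total shunt capacitance (per-phase capacitance times length) and V the line-to-line voltage. The sketch below evaluates it; the example cable parameters (220 kV, 50 Hz, 0.2 µF/km, 100 km) are illustrative assumptions, not figures from the paper.

```python
import math

def cable_charging_mvar(v_kv, f_hz, c_uf_per_km, length_km):
    """Reactive power generated by cable shunt capacitance:
    Q = 2*pi*f * C_total * V^2, with V the line-to-line voltage
    (three-phase total, assuming C is the per-phase capacitance)."""
    c_total = c_uf_per_km * 1e-6 * length_km          # farads
    q_var = 2 * math.pi * f_hz * c_total * (v_kv * 1e3) ** 2
    return q_var / 1e6                                # Mvar

# e.g. a 220 kV, 50 Hz export cable, 0.2 uF/km, 100 km long
# produces roughly 300 Mvar of charging power
```

This order of magnitude (hundreds of Mvar for a long HVAC export cable) is what motivates either midpoint compensation or, as the paper proposes, reducing the cable capacitance itself.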

Keywords: offshore wind power, optimized techniques, power system, subsea cable

Procedia PDF Downloads 195
1289 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture

Authors: Sajjad Akbar, Rabia Bashir

Abstract:

With the growth in the number of users, Internet usage has evolved, and due to its key design principles, the Internet has expanded enormously in size. This tremendous growth has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., a content distribution environment, mobility, ubiquity, security, and trust. Users are more interested in content than in the communicating peer nodes. The current Internet architecture is a host-centric networking approach, which is not well suited to these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient because it depends on physical location. Information-Centric Networking (ICN) is therefore considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle, uses a receiver-oriented rather than a sender-oriented approach, and introduces a naming-based information system at the network layer. Although ICN is considered a future Internet architecture, it has drawn substantial criticism, mainly concerning how it will manage the most relevant content. Web Content Mining (WCM) approaches can help with appropriate data management in ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) proposing a new Internet architecture that merges ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas the agent-based approach from Web Content Mining is selected to find the most appropriate data.
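To make the contrast with host-centric networking concrete, the toy sketch below resolves requests by content name rather than by host address, with opportunistic in-network caching as in CCN. The class and method names are illustrative assumptions; real CCN nodes also maintain a Pending Interest Table and a Forwarding Information Base, which are omitted here.

```python
class ContentRouter:
    """Toy ICN-style node: Interests are resolved by content name,
    and returned Data is cached in the node's content store."""

    def __init__(self, origin):
        self.origin = origin      # name -> data held by the producer
        self.cache = {}           # content store (in-network cache)

    def interest(self, name):
        """Resolve an Interest; report whether it was served from cache."""
        if name in self.cache:
            return self.cache[name], "cache"
        data = self.origin[name]
        self.cache[name] = data   # cache on the reverse path
        return data, "origin"
```

A second request for the same name is served from the content store, which is the mechanism that makes content placement (and hence the WCM-driven relevance ranking the paper proposes) matter.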

Keywords: agent based web content mining, content centric networking, information centric networking

Procedia PDF Downloads 475
1288 Digital Transformation: Actionable Insights to Optimize the Building Performance

Authors: Jovian Cheung, Thomas Kwok, Victor Wong

Abstract:

Buildings are entwined with smart city development. Building performance relies heavily on electrical and mechanical (E&M) systems and services, which account for about 40 percent of global energy use. By incorporating technological advances and energy- and operation-efficiency initiatives into buildings, people can raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is a profound development that allows a city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system is developed to transform data into actions. The system is formed by interfacing and integrating legacy metering and Internet of Things technologies in the building and applying big data techniques. It provides the operation and energy profile of a building together with actionable insights, which enable building performance to be optimized by raising awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking building performance, and prioritizing asset and energy management opportunities. The intelligent power quality and energy management system comprises four elements: the Integrated Building Performance Map, the Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides a predictive operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before system failure occurs. The actionable insights collected can also be used to enhance system design in future.
This paper illustrates how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance, along with actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into a smart city.

Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance

Procedia PDF Downloads 158
1287 Vascularized Adipose Tissue Engineering by Using Adipose ECM/Fibroin Hydrogel

Authors: Alisan Kayabolen, Dilek Keskin, Ferit Avcu, Andac Aykan, Fatih Zor, Aysen Tezcaner

Abstract:

Adipose tissue engineering is a promising field for the regeneration of soft tissue defects. However, only very thin implants can currently be used in vivo, since vascularization remains a problem for thick implants. Another problem is finding a biocompatible scaffold with good mechanical properties. In this study, the aim is to develop a thick vascularized adipose tissue that will integrate with the host, and to perform its in vitro and in vivo characterization. For this purpose, a hydrogel of decellularized adipose tissue (DAT) and fibroin was produced, and both endothelial cells and adipocytes differentiated from adipose-derived stem cells were encapsulated in this hydrogel. Mixing DAT with fibroin allowed rapid gel formation by vortexing. It also made it possible to adjust the mechanical strength by changing the fibroin-to-DAT ratio. Based on compression tests, the gel with the DAT/fibroin ratio giving mechanical properties similar to adipose tissue was selected for cell culture experiments. In vitro characterization showed that DAT is not cytotoxic; on the contrary, it contains many natural ECM components that provide biocompatibility and bioactivity. Subcutaneous implantation of the hydrogels resulted in no immunogenic reaction or infection. Moreover, localized empty hydrogels gelled successfully around the host vessel with the required shape. Implantation of cell-encapsulated hydrogels and histological analyses are under study. It is expected that the endothelial cells inside the hydrogel will form a capillary network and bind to the host vessel passing through the hydrogel.

Keywords: adipose tissue engineering, decellularization, encapsulation, hydrogel, vascularization

Procedia PDF Downloads 529
1286 Binderless Naturally-extracted Metal-free Electrocatalyst for Efficient NOₓ Reduction

Authors: Hafiz Muhammad Adeel Sharif, Tian Li, Changping Li

Abstract:

Recently, the emission of nitrogen and sulphur oxides (NOₓ, SO₂) has become a global issue, posing serious threats to health and the environment. Catalytic reduction of NOₓ and SOₓ into environmentally benign gases is considered one of the best approaches. However, regeneration of the catalyst, the high bond-dissociation energy of NOₓ (150.7 kcal/mol), the escape of an intermediate gas (N₂O, a greenhouse gas) with the treated flue gas, and the limited activity of the catalyst remain great challenges. Here, a cheap, binderless, naturally extracted bass-wood thin carbon electrode (TCE) is presented, which shows excellent catalytic activity towards NOₓ reduction. The bass wood was carbonized at 900 °C and then thermally activated in the presence of CO₂ gas at 750 °C. The thermal activation increased the epoxy groups on the surface of the TCE and enhanced the surface area as well as the degree of graphitization. The TCE has a unique, strongly interconnected 3D network of hierarchical micro/meso/macropores that allows a large electrode/electrolyte interface. Owing to these characteristics, the TCE exhibited excellent catalytic efficiency towards NOₓ (~83.3%) under ambient conditions, an enhanced catalytic response under pH variation and sulphite exposure, and excellent stability for up to 168 hours. Moreover, a temperature-dependent activity trend was found, with the highest catalytic activity achieved at 80 °C, beyond which the electrolyte became evaporative and performance decreased. The designed electrocatalyst shows great potential for effective NOₓ reduction and is highly cost-effective, green, and sustainable.

Keywords: electrocatalyst, NOx-reduction, bass-wood electrode, integrated wet-scrubbing, sustainable

Procedia PDF Downloads 77
1285 Linux Security Management: Research and Discussion on Problems Caused by Different Aspects

Authors: Ma Yuzhe, Burra Venkata Durga Kumar

Abstract:

The computer is a great invention. As people use computers more and more frequently, the demand for PCs is growing, and the performance of computer hardware is rising to handle more complex processing and operation. However, the operating system, which provides the soul of the computer, stalled in its development for a time. Faced with the high price of UNIX (Uniplexed Information and Computing System), batch after batch of personal computer owners could only give up. The Disk Operating System was too simple and offered little room for innovation, so it was not a good choice. And macOS is a special operating system for Apple computers that cannot be widely used on other personal computers. In this environment Linux, based on the UNIX system, was born. Linux combines the advantages of earlier operating systems and is relatively powerful in its core architecture. The Linux system supports all Internet protocols, so it has very good network functionality. Linux supports multiple users, and each user's files are protected from the others. Linux can also multitask, running different programs independently at the same time. Linux is a completely open-source operating system, and users can obtain and modify the source code for free. Because of these advantages, Linux has attracted a large number of users and programmers. The Linux system is also constantly upgraded and improved, and many different versions have been issued, suitable for both community and commercial use. The Linux system has good security partly because it relies on a file permission system. However, as vulnerabilities and hazards are constantly discovered, the security of the operating system in use also needs greater attention. This article focuses on the analysis and discussion of Linux security issues.

Keywords: Linux, operating system, system management, security

Procedia PDF Downloads 109
1284 Stem Cell Fate Decision Depending on TiO2 Nanotubular Geometry

Authors: Jung Park, Anca Mazare, Klaus Von Der Mark, Patrik Schmuki

Abstract:

In clinical applications of TiO2 implants for tooth and hip replacement, the migration, adhesion, and differentiation of neighboring mesenchymal stem cells onto implant surfaces are critical steps for successful bone regeneration. Over the past decade, increasing attention has been paid to nanoscale electrochemical surface modification of the TiO2 layer to improve bone-TiO2 surface integration. We generated, on titanium surfaces, self-assembled layers of vertically oriented TiO2 nanotubes with defined diameters between 15 and 100 nm, and here we show that mesenchymal stem cells finely sense the TiO2 nanotubular geometry and quickly decide their fate, either differentiating into osteoblasts or undergoing programmed cell death (apoptosis) on the TiO2 nanotube layers. These cell fate decisions depend critically on the nanotube diameter (15-100 nm), sensed via integrin clustering. We further demonstrate that nanoscale topography sensing is feasible not only in mesenchymal stem cells but appears to be a generalized nanoscale microenvironment-cell interaction mechanism in several cell types composing the bone tissue network, including osteoblasts, osteoclasts, endothelial cells, and hematopoietic stem cells. Additionally, we discuss the synergistic effect of simultaneous stimulation by nanotube-bound growth factors and nanoscale topographic cues on enhanced bone regeneration.

Keywords: TiO2 nanotube, stem cell fate decision, nano-scale microenvironment, bone regeneration

Procedia PDF Downloads 432
1283 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day, and the growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data, as they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data volume of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
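Vertical partitioning of the kind described above groups attributes that are accessed by the same queries, often by comparing 0/1 attribute-usage vectors with a distance such as the Hamming distance listed in the keywords. The greedy toy sketch below illustrates only that idea; it is not the paper's Matching algorithm or cost model, and all names are illustrative.

```python
def hamming(a, b):
    """Hamming distance between two equal-length 0/1 usage vectors."""
    return sum(x != y for x, y in zip(a, b))

def group_attributes(usage, max_dist=1):
    """Greedy vertical partitioning sketch: each attribute carries a 0/1
    vector marking which queries use it; attributes whose vectors fall
    within max_dist of a partition's seed vector are stored together."""
    partitions = []  # list of (seed_vector, [attribute_names])
    for attr, vec in usage.items():
        for seed, members in partitions:
            if hamming(seed, vec) <= max_dist:
                members.append(attr)
                break
        else:
            partitions.append((vec, [attr]))
    return [members for _, members in partitions]
```

With max_dist = 0, attributes land in the same partition only when exactly the same queries touch them, which is the extreme case of minimizing cross-partition query access.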

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 157
1282 Subway Stray Current Effects on Gas Pipelines in the City of Tehran

Authors: Mohammad Derakhshani, Saeed Reza Allahkarama, Michael Isakhani-Zakaria, Masoud Samadian, Hojjat Sharifi Rasaey

Abstract:

In order to investigate the effects of stray current from DC traction systems (subway) on cathodically protected gas pipelines, the subway and gas network maps of the city of Tehran were superimposed and a comprehensive map was prepared. 213 intersections and about 100150 meters of parallel sections of gas pipelines were found with respect to the railway right of way and specified for field measurements. The potential measurement data were logged for one hour at each test point, and 24-hour potential monitoring was carried out at selected test points as well. Results showed that dynamic stray current from the subway appears as fluctuations in the pipeline's static potential, visible in the diagrams during night periods. These fluctuations can cause the pipeline potential to exit the safe zone and lead to corrosion or overprotection. In this study, a maximum shift of 100 mV in the pipe-to-soil potential was taken as the criterion for an effective dynamic stray current presence. Results showed that potential fluctuations ranging from 100 mV to 3 V exist at the measured points on the pipelines, which exceeds the proposed criterion and needs to be investigated. Corrosion rates influenced by stray currents were calculated using coupons. The coupon connected to the pipeline at one location in region 1 of the city of Tehran showed a corrosion rate of 4.2 mpy (with cathodic protection and under the influence of stray currents), about 1.5 times the free corrosion rate of 2.6 mpy.
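Coupon corrosion rates in mils per year (mpy), such as the 4.2 mpy and 2.6 mpy quoted above, are conventionally computed from coupon mass loss with the standard formula mpy = 534·W / (D·A·T), where W is the mass loss in mg, D the metal density in g/cm³, A the exposed area in in², and T the exposure time in hours. The sketch below evaluates that formula; the example numbers are illustrative, not the paper's measurements.

```python
def corrosion_rate_mpy(mass_loss_mg, density_g_cm3, area_in2, hours):
    """Coupon corrosion rate in mils per year (mpy):
    mpy = 534 * W / (D * A * T), with W in mg, D in g/cm3,
    A in square inches, and T in hours."""
    return 534 * mass_loss_mg / (density_g_cm3 * area_in2 * hours)

# e.g. a carbon-steel coupon (D ~ 7.86 g/cm3), 5 in^2, exposed 720 h,
# losing 100 mg corresponds to roughly 1.9 mpy
```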

Keywords: stray current, DC traction, subway, buried pipelines, cathodic protection

Procedia PDF Downloads 822
1281 Monitoring Synthesis of Biodiesel through Online Density Measurements

Authors: Arnaldo G. de Oliveira, Jr, Matthieu Tubino

Abstract:

The transesterification of triglycerides with alcohols that occurs during biodiesel synthesis causes continuous changes in several physical properties of the reaction mixture, such as refractive index, viscosity, and density. Among them, density can be a useful parameter to monitor the reaction, in order to predict the composition of the reacting mixture and to verify the conversion of the oil into biodiesel. In this context, a system was constructed to continuously determine changes in the density of a reacting mixture containing soybean oil, methanol, and sodium methoxide (30% w/w solution in methanol), stirred at 620 rpm at room temperature (about 27 °C). A polyethylene pipe network connected to a peristaltic pump was used to collect the mixture and pump it through a coil fixed on the plate of an analytical balance. The collected mass values were used to trace a curve correlating the mass of the system with the reaction time. The density variation profile over time clearly shows three different steps: 1) the dispersion of methanol in oil causes a decrease in the system mass, due to the lower density of the alcohol, followed by stabilization; 2) the addition of the catalyst (sodium methoxide) causes a larger decrease in mass than the first step because of the conversion of the oil into biodiesel; 3) the final stabilization denotes the end of the reaction. This density variation profile provides information that was used to predict the composition of the mixture over time and the reaction rate. Precise knowledge of the duration of the synthesis saves time and resources in a production-scale system. This kind of monitoring provides several attractive features, such as continuous measurement without collecting aliquots.
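Detecting the "final stabilization" step above amounts to finding the first point at which the logged mass readings stop changing. A minimal sketch of such an end-point detector is shown below; the window size and tolerance are illustrative assumptions, not values from the paper.

```python
def stabilization_index(masses, window=5, tol=0.01):
    """Return the index at which the first run of `window` consecutive
    mass readings varies by less than `tol` (same units as the readings)
    begins, i.e. where the reaction appears complete; None if never."""
    for i in range(window, len(masses) + 1):
        w = masses[i - window:i]
        if max(w) - min(w) < tol:
            return i - window
    return None
```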

Keywords: biodiesel, density measurements, online continuous monitoring, synthesis

Procedia PDF Downloads 576
1280 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework

Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin

Abstract:

During COVID-19, the depression rate increased dramatically, and young adults are the most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher rate of depression diagnoses than the general population, but less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy through an interdisciplinary approach that incorporates clinical practices defined by the American Psychiatric Association (APA) as well as a multimodal AI framework. The proposed approach detected the nine depression symptoms with Natural Language Processing sentiment analysis and a symptom-based lexicon uniquely designed for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with the facial emotional cues analyzed by a Convolutional Neural Network on the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach the 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection, raising awareness among adolescents and young adults, and revealing complementary cues to assist clinical depression diagnosis.
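A symptom-based lexicon of the kind described above maps cue words to symptom categories and flags which categories appear in a text. The sketch below is a hypothetical mini-lexicon with naive whole-word matching; the paper's actual lexicon for young adults, its sentiment analysis, and the CNN facial-cue model are not reproduced here.

```python
# Hypothetical mini-lexicon keyed by symptom category (illustrative only)
SYMPTOM_LEXICON = {
    "depressed_mood": {"sad", "hopeless", "empty"},
    "sleep": {"insomnia", "sleepless", "oversleeping"},
    "fatigue": {"exhausted", "tired", "drained"},
}

def symptoms_present(text):
    """Return the set of symptom categories whose cue words appear
    in the text (naive lowercase whole-word matching)."""
    tokens = set(text.lower().split())
    return {s for s, cues in SYMPTOM_LEXICON.items() if tokens & cues}
```

In the full framework, category hits like these would be aggregated with sentiment scores and facial-cue predictions rather than used alone.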

Keywords: artificial intelligence, COVID-19, depression detection, psychiatric disorder

Procedia PDF Downloads 131
1279 Chemical Synthesis, Characterization and Dose Optimization of Chitosan-Based Nanoparticles of MCPA for Management of Broad-Leaved Weeds (Chenopodium album, Lathyrus aphaca, Angalis arvensis and Melilotus indica) of Wheat

Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Tasawer Abbas

Abstract:

Nanoherbicides utilize nanotechnology to enhance the delivery of biological or chemical herbicides using combinations of nanomaterials. The aim of this research was to examine the efficacy of chitosan nanoparticles containing the herbicide MCPA as a potential eco-friendly alternative for weed control in wheat crops. Scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), and ultraviolet absorbance were used to analyze the developed nanoparticles. The SEM analysis indicated that the average particle size was 35 nm, with the particles forming clusters with a porous structure. The fluroxypyr + MCPA nanoparticles exhibited maximal absorption peaks at a wavelength of 320 nm, and the compound has a strong peak at a 2θ value of 30.55°, which corresponds to the 78 plane of the anatase phase. The weeds, including Chenopodium album, Lathyrus aphaca, Angalis arvensis, and Melilotus indica, were sprayed with the nanoparticles at the third- or fourth-leaf stage. Seven distinct doses were used: D0 (untreated check), D1 (recommended dose of the conventional herbicide), D2 (recommended dose of the nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). The chitosan-based MCPA nanoparticles at the recommended dose of the conventional herbicide resulted in complete visual injury, with a 100% mortality rate. The 5-fold lower dose produced the lowest plant height (3.95 cm), chlorophyll content (5.63%), dry biomass (0.10 g), and fresh biomass (0.33 g) in the broad-leaved weeds of wheat. The herbicide nanoparticles at a dose 10-fold lower than that of the conventional herbicide had an impact comparable to the recommended dose. Nano-herbicides thus have the potential to improve the efficiency of standard herbicides by increasing stability and lowering toxicity.

Keywords: mortality, visual injury, chlorophyll content, chitosan-based nanoparticles

Procedia PDF Downloads 65
1278 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks GAB and GBA and two discrimination networks DA and DB, connected by spatial transformation layers. GAB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator DB, which learns to judge the quality of alignment of the registered image B. GBA and DA learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â and B̂, which are obtained by registering the images B̃ and Ã that were in turn registered from the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
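The Dice score reported above measures the overlap of segmentation masks after registration: Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch on flat binary masks is shown below; the evaluation in the abstract would apply this to anatomical label masks of the registered CT and MRI volumes.

```python
def dice(a, b):
    """Dice coefficient between two equal-length binary masks
    (sequences of 0/1): 2*|intersection| / (|a| + |b|)."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))
```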

Keywords: cycle consistency, deformable multimodal image registration, deep learning, GAN

Procedia PDF Downloads 131