Search results for: Network Time Protocol
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21713

17213 Characterization of the Intestinal Microbiota: A Signature in Fecal Samples from Patients with Irritable Bowel Syndrome

Authors: Mina Hojat Ansari, Kamran Bagheri Lankarani, Mohammad Reza Fattahi, Ali Reza Safarpour

Abstract:

Irritable bowel syndrome (IBS) is a common bowel disorder that is usually diagnosed based on abdominal pain, fecal irregularities, and bloating. Alterations in intestinal microbial composition have been implicated in inflammatory and functional bowel disorders and have recently also been noted as a feature of IBS. Owing to the potential importance of the microbiota for both treatment efficacy and disease prevention, we examined the association between the intestinal microbiota and different bowel patterns in a cohort of subjects with IBS and healthy controls. Fresh fecal samples were collected from a total of 50 subjects, 30 of whom met the Rome IV criteria for IBS and 20 of whom were healthy controls. Total DNA was extracted, and library preparation was conducted following the standard protocol for small whole genome sequencing. The pooled libraries were sequenced on an Illumina NextSeq platform with a 2 × 150 paired-end read length, and the obtained sequences were analyzed using several bioinformatics programs. The majority of sequences obtained in the current study were assigned to bacteria. However, our findings highlighted significant variation in microbial taxa among the studied groups. The results therefore suggest a significant association of the microbiota with symptoms and bowel characteristics in patients with IBS. These alterations in the fecal microbiota could be exploited as a biomarker for IBS or its subtypes, and suggest that modification of the microbiota might be integrated into prevention and treatment strategies for IBS.

Keywords: irritable bowel syndrome, intestinal microbiota, small whole genome sequencing, fecal samples, Illumina

Procedia PDF Downloads 137
17212 Changes in Some Bioactive Content and Antioxidant Capacity of Different Brassica Herbals after Pretreatment and Herbal Infusion

Authors: Evren C. Eroglu, Ridvan Arslan

Abstract:

Over the course of herbal production, various pretreatments are performed, some of which have a serious effect on bioactive properties. Especially in the production of herbal tea from fresh herbs, the time elapsed from blending to the final product may affect the bioactive properties and antioxidant content. A herbal infusion is basically prepared by mixing herbs with hot water for 10-20 min, and a significant decrease in antioxidant and phenolic content is expected during the brewing of these herbs. The first aim of this study was to evaluate the changes in vitamin C (VitC), total phenolic content (TPC), and antioxidant content (AO) of two brassica varieties (Brussels sprouts and white head cabbage) with different holding times after blending. The second aim was to understand the effect of herbal infusion on VitC, TPC, and AO contents. In this study, fresh samples were subjected to holding times of 0-30 min after blending, then immediately taken to -80 °C and freeze-dried. Herbal infusion was performed for 20 minutes. According to the results, the VitC content of Brussels sprouts did not change significantly (p=0.12); however, there was a significant decrease in the VitC content of the cabbage samples (p=0.034). Twenty minutes of brewing caused a significant decrease in the VitC of Brussels sprouts of approximately 76% (1071 ppm dw), while the decline in cabbage VitC content was 87% (531 ppm dw). The AO and TPC values of the unprocessed cabbage control sample (13791.87 ppm FeSO4·7H2O eq. dw and 5301.85 ppm gallic acid eq. dw) were higher than those of the Brussels sprouts control samples (11571.75 ppm FeSO4·7H2O eq. dw and 5202.76 ppm, respectively). The changes in AO and TPC of both Brussels sprouts and cabbage samples were not statistically significant at the end of the 30-minute holding time (p=0.24 and p=0.38). After 20 minutes of brewing, the AO content in Brussels sprouts decreased significantly, by 44% (p < 0.05). Although the decrease in AO in white head cabbage was statistically significant (p=0.034), it was only 8%. TPC values decreased by 54% in cabbage and by 35% in Brussels sprouts after herbal infusion. The 30-minute holding time had no statistically significant effect on the TPC values of either cabbage or Brussels sprouts. In conclusion, herbal infusion affects the VitC, TPC, and AO contents of the samples to a greater or lesser extent, so it is important to reduce the brewing time. In addition, there were no significant differences in the TPC and AO content of either sample when samples were held outside for 30 min after blending; however, this process had a significant effect on the VitC content of white head cabbage.

Keywords: antioxidant content, Brussels sprouts, herbal infusion, total phenolic content, white head cabbage, vitamin C

Procedia PDF Downloads 114
17211 The Canaanite Trade Network between the Shores of the Mediterranean Sea

Authors: Doaa El-Shereef

Abstract:

The Canaanite civilization was one of the early great civilizations of the Near East; it influenced and was influenced by the civilizations of the ancient world, especially the Egyptian and Mesopotamian civilizations. The development of Canaanite trade extended from the Chalcolithic Age to the Iron Age along the oldest trade route in the Middle East. This paper focuses on defining the Canaanites, where they came from, the meaning of the term Canaan, and how ancient manuscripts define the borders of the land of Canaan, and it describes the Canaanite trade route and exported goods such as cedar wood and pottery.

Keywords: archaeology, bronze age, Canaanite, colonies, Massilia, pottery, shipwreck, vineyards

Procedia PDF Downloads 188
17210 A Feasibility Study on Producing Bio-Coal from Orange Peel Residue by Using Torrefaction

Authors: Huashan Tai, Chien-Hui Lung

Abstract:

Nowadays, people consume massive amounts of fossil fuels, which not only causes environmental impacts and global climate change but also depletes non-renewable energy sources such as coal and oil. Bioenergy is currently the most widely used renewable energy, and agricultural waste is one of its main raw materials. In this study, we use orange peel residue, an agricultural waste that is easy to collect, to produce bio-coal by torrefaction. The orange peel residue (with 25 to 30% moisture) was treated by torrefaction, and the experiments were conducted starting at room temperature (approximately 25 °C), with heating rates of 10, 30, and 50 °C/min, terminal temperatures of 150, 200, 250, 300, and 350 °C, and residence times of 10, 20, and 30 minutes. The results revealed that the heating value, ash content, and energy densification ratio of the solid products after torrefaction are directly proportional to terminal temperature and residence time, and inversely proportional to heating rate. The moisture content, solid mass yield, energy yield, and volumetric energy density of the solid products after torrefaction are inversely proportional to terminal temperature and residence time, and directly proportional to heating rate. In conclusion, we found that the heating values of the solid products were 1.3 times those of the raw orange peels before torrefaction, and the volumetric energy densities increased by a factor of 1.45, under operating parameters of a 250 °C terminal temperature, a 10-minute residence time, and a 10 °C/min heating rate. The results indicated that orange peel residue treated by torrefaction has improved energy density and fuel properties and becomes more suitable for bio-fuel applications.
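The quantities discussed above relate through the standard torrefaction metric definitions (mass yield, energy densification ratio, energy yield). A minimal sketch, using the abstract's reported 1.3x heating-value increase together with an assumed mass yield and heating values that are illustrative only, not taken from the study:

```python
def torrefaction_metrics(m_raw, m_torr, hhv_raw, hhv_torr):
    """Standard torrefaction solid-product metrics.

    m_* are (dry-basis) masses and hhv_* are higher heating values.
    These are the usual definitions from the torrefaction literature,
    not formulas or data taken from this particular study.
    """
    mass_yield = m_torr / m_raw
    energy_densification = hhv_torr / hhv_raw  # ratio of heating values
    energy_yield = mass_yield * energy_densification
    return mass_yield, energy_densification, energy_yield

# Illustrative numbers: the reported 1.3x heating-value increase at
# 250 °C / 10 min / 10 °C/min, combined with an *assumed* 60% mass yield.
my_, ed, ey = torrefaction_metrics(1.0, 0.60, 18.0, 23.4)
```

With these assumed inputs, the energy densification ratio is 1.3 and the energy yield is the product of mass yield and densification ratio.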

Keywords: biomass energy, orange, torrefaction

Procedia PDF Downloads 271
17209 Requirements to Establish a Taxi Sharing System in an Urban Area

Authors: Morteza Ahmadpur, Ilgin Gokasar, Saman Ghaffarian

Abstract:

That the transportation system plays an important role in the management of societies is an undeniable fact, and it is one of the most challenging issues in daily human life. As urban populations increase, so does the demand for transportation modes. Accordingly, a more flexible and dynamic transportation system is required to satisfy people's requirements. Nowadays, there is a significant increase in environmental issues all over the world caused by human activities. New technological achievements bring new horizons for humans and have changed every aspect of their lives; transportation is no exception. By using new technology, societies can modernize their transportation systems and increase their feasibility. Real-time taxi sharing is one of the most novel and modern such systems worldwide. Establishing this kind of system in an urban area requires the most advanced technologies in a transportation system: GPS navigation devices, computers, and social networks are just some of its components. Like carpooling, real-time taxi sharing is one of the best ways to better utilize the empty seats in most cars and taxis, thus decreasing energy consumption and transport costs. It can serve areas not covered by a public transit system and act as a transit feeder service. Taxi sharing is also capable of serving one-time trips, not only recurrent commute trips or scheduled trips. In this study, we describe the requirements and parameters needed to establish a useful real-time ride sharing system for an urban area. The parameters and requirements of this study can be used in any urban area.
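As a toy illustration of one ingredient of such a system, the sketch below greedily assigns each rider to the nearest free taxi by straight-line distance. The identifiers and coordinates are invented; a real system would use GPS positions, road-network distances, seat capacity, and detour constraints:

```python
import math

def match_riders_to_taxis(riders, taxis):
    """Greedy nearest-available-taxi assignment (illustrative only).

    riders and taxis map an id to an (x, y) coordinate pair. Each rider,
    in order, takes the closest taxi that is still free.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    free = dict(taxis)
    assignment = {}
    for rider_id, pos in riders.items():
        if not free:
            break  # more riders than taxis
        best = min(free, key=lambda t: dist(free[t], pos))
        assignment[rider_id] = best
        del free[best]  # taxi is no longer available
    return assignment

# Hypothetical positions on a flat grid.
pairs = match_riders_to_taxis(
    {"r1": (0, 0), "r2": (5, 5)},
    {"t1": (1, 1), "t2": (6, 5)},
)
```

A production matcher would solve this as an optimization over shared routes rather than greedily, but the data flow (positions in, assignments out) is the same.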

Keywords: transportation, intelligent transportation systems, ride-sharing, taxi sharing

Procedia PDF Downloads 405
17208 American Sign Language Recognition System

Authors: Rishabh Nagpal, Riya Uchagaonkar, Venkata Naga Narasimha Ashish Mernedi, Ahmed Hambaba

Abstract:

The rapid evolution of technology in the communication sector continually seeks to bridge the gap between different communities, notably between the deaf community and the hearing world. This project develops a comprehensive American Sign Language (ASL) recognition system, leveraging the advanced capabilities of convolutional neural networks (CNNs) and vision transformers (ViTs) to interpret and translate ASL in real time. The primary objective of this system is to provide an effective communication tool that enables seamless interaction through accurate sign language interpretation. The architecture of the proposed system integrates dual networks: VGG16 for precise spatial feature extraction and vision transformers for contextual understanding of sign language gestures. The system processes live input, extracting critical features through these sophisticated neural network models, and combines them to enhance gesture recognition accuracy. This integration facilitates a robust understanding of ASL by capturing detailed nuances and broader gesture dynamics. The system is evaluated through a series of tests that measure its efficiency and accuracy in real-world scenarios. Results indicate a high level of precision in recognizing diverse ASL signs, substantiating the potential of this technology in practical applications. Challenges, such as enhancing the system's ability to operate in varied environmental conditions and further expanding the training dataset, were identified and discussed. Future work will refine the model's adaptability and incorporate haptic feedback to enhance the interactivity and richness of the user experience. This project demonstrates the feasibility of an advanced ASL recognition system and lays the groundwork for future innovations in assistive communication technologies.

Keywords: sign language, computer vision, vision transformer, VGG16, CNN

Procedia PDF Downloads 17
17207 Dental Ethics versus Malpractice, as Phenomenon with a Growing Trend

Authors: Saimir Heta, Kers Kapaj, Rialda Xhizdari, Ilma Robo

Abstract:

Dealing with emerging cases of dental malpractice that are justified by appeal to the clear rules of dental ethics is a phenomenon with an increasing trend in today's dental practice. Dentists should clearly understand where the limit of malpractice lies, with or without minimal or major consequences for the affected patient, and when harm can legitimately be considered a complication of dental treatment under the rules of dental ethics in the dental office. Indeed, malpractice can occur in cases of lack of professionalism, but it can also arise from anatomical and physiological limitations in the implementation of the dental protocols predetermined and indicated for the patient in the treatment plan section of his or her personal record. This study is a review of the latest findings published in the literature on this problem. Keywords were combined so as to give the necessary scope for collecting the relevant information from publication networks in this field, always from the point of view of the dentist rather than that of the lawyer or jurist. The findings included in this article show that the diversity of approaches to the phenomenon depends on the legal basis of the different countries involved. Few articles touch on this topic, and those that do present a limited amount of data. Conclusions: Dental malpractice should not be hidden under the guise of various dental complications justified by the strict rules of ethics for patients treated in the dental chair. Individual experiences of dental malpractice should be published so as to serve as a source of experience for future generations of dentists.

Keywords: dental ethics, malpractice, professional protocol, random deviation

Procedia PDF Downloads 71
17206 Predictive Analysis of the Stock Price Market Trends with Deep Learning

Authors: Suraj Mehrotra

Abstract:

The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It defines whether companies are successful or in a downward spiral. A thorough understanding of it is important: many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially the stock market, has been a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions of financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do, and machine learning makes this task much easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among others. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Comparing the accuracy on the testing set of four different models (linear regression, a neural network, a decision tree, and naïve Bayes) on the stocks of Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, JPMorgan Chase, and Johnson & Johnson, the naïve Bayes and linear regression models worked best. For the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set had similar results, except that the decision tree model achieved complete accuracy in its predictions, which makes sense: the decision tree model likely overfitted the training set.
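As a rough sketch of the simplest of the four models, the snippet below fits an ordinary least-squares line to a short series of closing prices and extrapolates one day ahead. The prices are invented for illustration and are not the paper's data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x (pure-Python sketch)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope = covariance / variance; intercept from the means.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

# Hypothetical closing prices over five trading days.
days = [0, 1, 2, 3, 4]
close = [100.0, 101.5, 103.0, 104.5, 106.0]
a, b = fit_line(days, close)
next_day = a + b * 5  # naive one-step-ahead forecast
```

Real stock series are far noisier than this toy trend, which is why the paper weighs such linear baselines against neural networks and other learners.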

Keywords: machine learning, testing set, artificial intelligence, stock analysis

Procedia PDF Downloads 76
17205 Prediction of Wind Speed by Artificial Neural Networks for Energy Application

Authors: S. Adjiri-Bailiche, S. M. Boudia, H. Daaou, S. Hadouche, A. Benzaoui

Abstract:

In this work, changes in wind speed with altitude are modeled using artificial neural networks. Measured data (wind speed and direction, temperature, and humidity at 10 m) are used as inputs, with the wind speed at 50 m above ground level as the target. The predicted wind speeds are compared with values extrapolated to 50 m. The results show that prediction by artificial neural networks is very accurate.
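The "power law" keyword refers to the standard vertical extrapolation formula, v2 = v1 * (h2/h1)^alpha, against which ANN predictions of this kind are typically compared. A minimal sketch, assuming the common open-terrain shear exponent alpha ≈ 1/7 (the measured speed below is invented):

```python
def power_law_extrapolate(v_ref, h_ref, h_target, alpha=0.143):
    """Extrapolate wind speed vertically with the power law.

    v2 = v1 * (h2 / h1) ** alpha, where alpha is the wind shear
    exponent; 1/7 ≈ 0.143 is a common default for open terrain, but in
    practice alpha is fitted to the site.
    """
    return v_ref * (h_target / h_ref) ** alpha

# Hypothetical 6.0 m/s measured at 10 m, extrapolated to 50 m.
v50 = power_law_extrapolate(v_ref=6.0, h_ref=10.0, h_target=50.0)
```

The ANN approach in the paper replaces this single-exponent formula with a learned mapping from several 10 m measurements to the 50 m speed.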

Keywords: MATLAB, neural network, power law, vertical extrapolation, wind energy, wind speed

Procedia PDF Downloads 666
17204 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text

Authors: Duncan Wallace, M-Tahar Kechadi

Abstract:

In recent years, machine learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as such data is widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC provides ad hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and comprised mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify, upon initial contact, patient cases which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To this end, we compare the performance of randomly generated regression trees and support vector machines, and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network. The output of our neural network is also used to help determine the most significant lexemes in the corpus for identifying high-risk patients. By combining the confidence of our classification program for lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
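The inverse-document-frequency component mentioned above can be sketched as follows. The tokenized case notes are invented for illustration, and the weighting by classifier confidence that the paper combines with IDF is omitted:

```python
import math

def idf_scores(documents):
    """Plain inverse document frequency: log(N / df) per lexeme.

    documents is a list of tokenized case notes. Rare lexemes score
    high; lexemes appearing in every note score zero.
    """
    n = len(documents)
    vocab = {token for doc in documents for token in doc}
    return {
        term: math.log(n / sum(1 for doc in documents if term in doc))
        for term in vocab
    }

# Hypothetical tokenized OOHC notes.
notes = [
    ["chest", "pain", "recurrent"],
    ["fever", "pain"],
    ["recurrent", "attender", "pain"],
]
idf = idf_scores(notes)
```

A lexeme like "pain" that appears in every note gets IDF 0 and so cannot distinguish frequent attenders, regardless of how confident the classifier is on cases containing it.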

Keywords: artificial neural networks, data-mining, machine learning, medical informatics

Procedia PDF Downloads 110
17203 Artificial Neural Network Based Model for Detecting Attacks in Smart Grid Cloud

Authors: Sandeep Mehmi, Harsh Verma, A. L. Sangal

Abstract:

Ever since the idea was floated of delivering computing services as a commodity, like other utilities such as electricity and telephony, the scientific fraternity has directed its research towards a new area called utility computing. New paradigms like cluster computing and grid computing came into existence while edging closer to utility computing. With the advent of the internet, the demand for anytime, anywhere access to resources that could be provisioned dynamically as a service gave rise to the next-generation computing paradigm known as cloud computing. Today, cloud computing has become one of the most aggressively growing computing paradigms, resulting in a growing rate of applications in the area of IT outsourcing. Besides catering to computational and storage demands, cloud computing has economically benefitted almost all fields: education, research, entertainment, medicine, banking, military operations, weather forecasting, business, and finance, to name a few. The smart grid is another discipline that urgently needs to benefit from the advantages of cloud computing. The smart grid is a new technology that has revolutionized the power sector by automating the transmission and distribution system and integrating smart devices. A cloud-based smart grid can fulfill the storage requirements of the unstructured and uncorrelated data generated by smart sensors, as well as the computational needs of self-healing, load balancing, and demand response features. However, security issues such as confidentiality, integrity, availability, accountability, and privacy need to be resolved for the development of the smart grid cloud. In recent years, a number of intrusion prevention techniques have been proposed in the cloud, but hackers and intruders still manage to bypass cloud security. Therefore, precise intrusion detection systems need to be developed in order to secure critical information infrastructure like the smart grid cloud. Considering the success of artificial neural networks in building robust intrusion detection, this research proposes an artificial neural network based model for detecting attacks in the smart grid cloud.

Keywords: artificial neural networks, cloud computing, intrusion detection systems, security issues, smart grid

Procedia PDF Downloads 301
17202 Thermodynamic Attainable Region for Direct Synthesis of Dimethyl Ether from Synthesis Gas

Authors: Thulane Paepae, Tumisang Seodigeng

Abstract:

This paper demonstrates a method of synthesizing process flowsheets using a graphical tool called the GH-plot and, in particular, looks at how it can be used to compare the reactions of a combined simultaneous process with regard to their thermodynamics. The technique uses fundamental thermodynamic principles, allowing the mass, energy, and work balances to locate the attainable region for chemical processes in a reactor. This provides guidance on which design decisions would be best suited to developing new processes that are more effective and make lower demands on raw material and energy usage.

Keywords: attainable regions, dimethyl ether, optimal reaction network, GH Space

Procedia PDF Downloads 222
17201 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and, eventually, to users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format that needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical, and it is not subject to the consistency and redundancy measures that most other data usually is. Most important to users is that the data be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it, and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period that the processing takes place, which decreases the overall performance of the database server and therefore of the system. This paper follows on from a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact this may have on CPU performance, storage, and processing time.
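The three-step pull-process-push idea can be sketched as follows, with in-memory lists standing in for the actual database calls and a hypothetical semicolon-separated row format. The point is that the decoding work in step 2 happens while no database locks are held:

```python
# Step 1 (pull): read all raw rows in one short transaction into an
# in-memory list (the array-list buffer), so the table is not locked
# while decoding happens. The list below stands in for a SELECT.
raw_rows = ["47.1;8.2;1012", "47.2;8.3;1013"]

# Step 2 (process): decode outside the database; no locks are held here,
# so other clients can keep using the table. The row format is invented.
def decode(row):
    lat, lon, pressure = row.split(";")
    return {"lat": float(lat), "lon": float(lon), "pressure": int(pressure)}

processed = [decode(r) for r in raw_rows]

# Step 3 (push): write the decoded records back in a second short
# transaction. The list below stands in for a batched INSERT/UPDATE.
db_table = list(processed)
```

In the single-step variant, steps 1-3 would run inside one long transaction, holding locks for the full duration of the decoding work.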

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 226
17200 HTML5 Online Learning Application with Offline Web, Location Based, Animated Web, Multithread, and Real-Time Features

Authors: Sheetal R. Jadhwani, Daisy Sang, Chang-Shyh Peng

Abstract:

Web applications are an integral part of modern life. They are mostly based upon the HyperText Markup Language (HTML). While HTML meets basic needs, there are some shortcomings: for example, applications can cease to work once the user goes offline, real-time updates may lag, and the user interface can freeze on computationally intensive tasks. The latest language specification, HTML5, attempts to rectify the situation with new tools and protocols. This paper studies the new Web Storage, Geolocation, Web Worker, Canvas, and WebSocket APIs, and presents applications to test their features and efficiency.

Keywords: HTML5, web worker, canvas, web socket

Procedia PDF Downloads 281
17199 Fabrication of Miniature Gear of Hastelloy X by WEDM Process

Authors: Bhupinder Singh, Joy Prakash Misra

Abstract:

This article provides information regarding the machining of Hastelloy X by wire electrical discharge machining (WEDM). An experimental investigation was carried out by varying pulse-on time (TON), pulse-off time (TOFF), peak current (IP), and spark gap voltage (SV), and the effect of these parameters on material removal rate (MRR) was studied. Experiments were designed as per the Box-Behnken design (BBD) technique of response surface methodology (RSM). Analysis of variance (ANOVA) results indicates that TON, TOFF, IP, SV, and TON × IP are significant parameters influencing the MRR, and that the MRR is higher at high discharge energy (HDE) and lower at low discharge energy (LDE). Furthermore, a miniature impeller and a miniature gear (OD ≤ 10 mm) were fabricated by WEDM at the optimized condition.

Keywords: advanced manufacturing, WEDM, super alloy, gear

Procedia PDF Downloads 210
17198 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers

Authors: Onur Kaya, Halit Bayer

Abstract:

Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy, and meat constitute the biggest section of these markets. Due to their deterioration over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreased grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since billions of dollars' worth of food expires and is wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically reviewed and continuously reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision scenario has a monotone structure and that the optimal price decreases over time; however, the optimal price changes non-monotonically with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness, and extract managerial insights. This study gives valuable insights into the management of perishable products in order to decrease wastage and increase profits.

Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing

Procedia PDF Downloads 235
17197 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of unmanned aerial vehicles (UAVs) has significantly increased. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and reaction time can be severely affected. Flying birds show velocity, radar cross-section, and, in general, characteristics similar to those of UAVs. Building on the absence of any single feature able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field, related to different UAVs and birds performing various trajectories, were used to extract specifically designed target movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.) both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with a suitable classification accuracy (higher than 95%).

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 197
17196 Integrated Formulation of Project Scheduling and Material Procurement Considering Different Discount Options

Authors: Babak H. Tabrizi, Seyed Farid Ghaderi

Abstract:

On-time availability of materials at construction sites plays an outstanding role in the successful achievement of a project's deliverables. Thus, this paper investigates the simultaneous formulation of project scheduling and material procurement in a mixed-integer programming model, aiming to minimize the penalty (or maximize the reward) associated with project delivery time while minimizing material holding, ordering, and procurement costs. We take both all-units and incremental discount options into consideration to provide more flexibility on the procurement side with regard to real-world conditions. Finally, the applicability and efficiency of the mathematical model are tested on different numerical examples.
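The two discount schemes the model considers can be illustrated with a minimal cost function for each; the price breaks below are hypothetical, not taken from the paper's numerical examples:

```python
def all_units_cost(qty, breaks):
    """All-units discount: the whole order is priced at the unit price of
    the highest quantity break reached. `breaks` is a list of
    (min_qty, unit_price) pairs sorted by min_qty."""
    price = breaks[0][1]
    for min_qty, unit_price in breaks:
        if qty >= min_qty:
            price = unit_price
    return qty * price

def incremental_cost(qty, breaks):
    """Incremental discount: each unit price applies only to the units
    that fall inside its own quantity interval."""
    total = 0.0
    for i, (min_qty, unit_price) in enumerate(breaks):
        upper = breaks[i + 1][0] if i + 1 < len(breaks) else float("inf")
        if qty > min_qty:
            total += (min(qty, upper) - min_qty) * unit_price
    return total

breaks = [(0, 10.0), (100, 9.0), (500, 8.0)]  # hypothetical price schedule
print(all_units_cost(600, breaks))    # 600 * 8.0 = 4800.0
print(incremental_cost(600, breaks))  # 100*10 + 400*9 + 100*8 = 5400.0
```

For the same schedule, the all-units scheme is cheaper once a break is reached, which is why the choice of discount option changes the optimal ordering quantities in the integrated model.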

Keywords: discount strategies, material purchasing, project planning, project scheduling

Procedia PDF Downloads 241
17195 Perfectly Keyless Commercial Vehicle

Authors: Shubha T., Latha H. K. E., Yogananth Karuppiah

Abstract:

Accessing and sharing automobiles will become much simpler thanks to the wide range of automotive use cases made possible by digital keys. This study aims to provide digital keys to car owners and drivers so that they can lock or unlock their automobiles and start the engine using a smartphone or other Bluetooth Low Energy-enabled mobile device. Private automobile owners can digitally lend their car keys to family members or friends without having to meet them physically, possibly for a limited period of time. Owners of company automobile fleets can electronically distribute car keys to staff members, possibly granting access for a given day or length of time. Customers no longer need to physically pick up car keys at a rental desk, because automobile owners can digitally transfer keys to them.

Keywords: NFC, BLE, CCC, digital key, OEM

Procedia PDF Downloads 128
17194 Towards Reliable Mobile Cloud Computing

Authors: Khaled Darwish, Islam El Madahh, Hoda Mohamed, Hadia El Hennawy

Abstract:

Cloud computing has been one of the fastest-growing parts of the IT industry, mainly in the context of the future of the web, where computing, communication, and storage are the main services provided to Internet users. Mobile Cloud Computing (MCC) is gaining steam; it extends cloud computing functions, services and results to the world of future mobile applications and enables delivery of a large variety of cloud applications to billions of smartphones and wearable devices. This paper addresses reliability for MCC by determining the ability of a system or component to function correctly under stated conditions for a specified period of time, in order to handle the estimation and management of high levels of lifetime engineering uncertainty and risk of failure. The assessment procedure consists of determining the Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and availability percentages for the main components of both cloud computing and MCC structures, applied to a single-node OpenStack installation to analyze its performance under different settings governing the behavior of participants. Additionally, we present several factors that have a significant impact on overall cloud system reliability and should be taken into account in order to deliver highly available cloud computing services to mobile consumers.
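The MTBF/MTTR/availability arithmetic that such an assessment relies on can be sketched as follows; the observation window, failure count and repair time are invented for illustration, not the paper's OpenStack measurements:

```python
# Hypothetical observation window for one cloud node.
operating_hours = 8760.0   # one year of scheduled service
n_failures = 12
total_repair_hours = 36.0

mtbf = (operating_hours - total_repair_hours) / n_failures  # mean time between failures
mttr = total_repair_hours / n_failures                      # mean time to repair
availability = mtbf / (mtbf + mttr)                         # steady-state availability

print("MTBF = %.1f h" % mtbf)    # 727.0 h
print("MTTR = %.1f h" % mttr)    # 3.0 h
print("Availability = %.4f" % availability)
```

The same three quantities computed per component (compute, network, storage) can then be combined in series or parallel to estimate system-level availability for the cloud and MCC configurations under study.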

Keywords: cloud computing, mobile cloud computing, reliability, availability, OpenStack

Procedia PDF Downloads 379
17193 Aerodynamic Brake Study of Reducing Braking Distance for High-Speed Trains

Authors: Phatthara Surachon, Tosaphol Ratniyomchai, Thanatchai Kulworawanichpong

Abstract:

This paper presents a study of reducing braking distance for high-speed trains (HST) using aerodynamic brakes, inspired by their application on commercial aircraft wings. In an emergency, both braking distance and stopping time are longer than in normal operation; therefore, passenger safety and HST driving control management benefit from reducing the time and distance of train braking in emergency situations. Because studies and implementations of aerodynamic brakes in HSTs are limited, the feasibility and the effectiveness of aerodynamic brakes with respect to the train's dynamic behavior during braking are analyzed. As with aircraft flaps applied to the HST, the areas of the aerodynamic brake panels, which act as an additional drag force during braking, can be varied depending on the operating angle and the required dynamic braking force. An HST with a speed varying from 200 km/h to 350 km/h is taken as the case study of this paper. The results show that stopping time and braking distance are effectively reduced by the aerodynamic brakes. The mechanical brakes also benefit, since reduced wear extends their service life.
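As a rough illustration of why a deployable drag panel shortens braking, one can integrate the longitudinal equation of motion with and without extra drag area; all parameter values (mass, brake force, drag areas) are assumptions for the sketch, not the paper's train data:

```python
def stopping_distance(v0_kmh, cd_area, f_brake=400e3, mass=400e3,
                      rho=1.225, dt=0.01):
    """Integrate m*dv/dt = -(F_brake + 0.5*rho*CdA*v^2) until the train
    stops; returns the distance travelled in metres."""
    v = v0_kmh / 3.6  # km/h -> m/s
    x = 0.0
    while v > 0.0:
        drag = 0.5 * rho * cd_area * v * v
        a = (f_brake + drag) / mass
        x += v * dt
        v -= a * dt
    return x

base = stopping_distance(300.0, cd_area=15.0)                 # baseline train drag
with_panels = stopping_distance(300.0, cd_area=15.0 + 60.0)   # panels deployed
print("baseline: %.0f m, with aero brakes: %.0f m" % (base, with_panels))
```

Because the quadratic drag term dominates at high speed and fades as the train slows, the aerodynamic contribution is largest exactly where mechanical brakes are most stressed, which is the effect the paper exploits.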

Keywords: high-speed train, aerodynamic brake, brake distance, drag force

Procedia PDF Downloads 175
17192 SFO-ECRSEP: Sensor Field Optimization Based ECRSEP for Heterogeneous WSNs

Authors: Gagandeep Singh

Abstract:

Sensor field optimization is a serious issue in WSNs that has been ignored by many researchers. In numerous real-world sensing fields, the sensor nodes at the corners, i.e., on the segment boundaries, die early because no extra protection is provided for them. Accordingly, the central objective of this research work is segment-based optimization, separating the sensor field into advanced and normal segments. The motivation behind this sensor field optimization is to extend the time span before the first sensor node dies. Normal sensor nodes located on the borders tend to die early because their distance to the base station is greater, so they consume more power and fail sooner.
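The distance-driven energy drain of boundary nodes can be illustrated with the standard first-order radio model used in SEP-style protocols; the constants are typical textbook values, not parameters from this work:

```python
E_ELEC = 50e-9     # J/bit, transceiver electronics energy (typical value)
EPS_AMP = 100e-12  # J/bit/m^2, free-space amplifier energy (typical value)

def tx_energy(bits, distance_m):
    """First-order radio model: transmission cost grows with the square
    of the distance to the receiver, so border nodes far from the base
    station drain their batteries first."""
    return bits * (E_ELEC + EPS_AMP * distance_m ** 2)

packet = 4000  # bits per report
print("node 30 m from BS :", tx_energy(packet, 30))
print("node 120 m from BS:", tx_energy(packet, 120))
```

With these constants, quadrupling the distance roughly increases per-packet energy by an order of magnitude, which is why placing advanced (higher-energy) nodes in the boundary segments delays the first node death.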

Keywords: WSNs, ECRSEP, SEP, field optimization, energy

Procedia PDF Downloads 279
17191 Time Pressure and Its Effect at Tactical Level of Disaster Management

Authors: Agoston Restas

Abstract:

Introduction: When managing disasters, decision makers often face situations in which any early sign of a drastic change is missing, so improvised decision making is required. The complexity, ambiguity, uncertainty or volatility of the situation can frequently demand improvisation in decision making. It can occur at any level of management (strategic, operational and tactical), but at the tactical level the main driver of improvisation is time pressure, which is certainly the biggest problem during disaster management. Methods: The author used different tools and methods to achieve his goals: one was the study of the relevant literature, another his own experience as a firefighting manager. Further results come from two surveys: an essay analysis and a word association test specially created for this research. Results and discussion: This article shows that, in certain situations, multi-criteria evaluative decision-making processes simply cannot be used, or only in a limited manner. Managers, directors and commanders often find themselves in situations that cannot be ignored and in which decisions must be made in a short time. The functional background of decisions made under time pressure, whose mechanism differs from the conventional one, has been studied recently, and this special decision procedure was named recognition-primed decision making. In the article, the author illustrates the limits of analytical decision-making, presents the general operating mechanism of recognition-primed decision making, elaborates on a model of it relevant to managers at the tactical level, and explores and systematizes the factors that facilitate (catalyze) these processes, with an example involving fire managers.

Keywords: decision making, disaster managers, recognition primed decision, model for making decisions in emergencies

Procedia PDF Downloads 239
17190 Influence of Various Disaster Scenarios Assumption to the Advance Creation of Wide-Area Evacuation Plan Confronting Natural Disasters

Authors: Nemat Mohammadi, Yuki Nakayama

Abstract:

The Great East Japan earthquake and the extremely large tsunami that followed obliged many local governments to take these kinds of issues seriously. The poor preparation of local governments to deal with such disasters at that time, and the consequent lack of assistance for local residents, caused thousands of civilian casualties as well as billions of dollars of economic damage. Local governments responsible for coastal areas have to consider countermeasures against these natural disasters, prepare a comprehensive evacuation plan and contrive feasible emergency plans to reduce the number of victims as much as possible. Under such an evacuation plan, the local government should consider traffic congestion during a wide-area evacuation operation and estimate the minimum time essential to evacuate the whole city. The challenge becomes more complicated when the people affected by the disaster are not only ordinary, informed citizens but also pregnant women, physically handicapped persons, elderly citizens and foreigners or tourists who are unfamiliar with the conditions and the local language. The first issue is how to inform these people so that they take proper action as soon as they learn a tsunami is coming. The next, even more considerable, challenge is to evacuate all residents from the threatened area to safer shelters in a short period of time. In fact, most citizens will use their own vehicles to evacuate to the designated shelters, while some will use shuttle buses provided by local governments.
The problem arises when all residents try to escape from the threatened area simultaneously, creating traffic jams on evacuation routes that prolong the evacuation time. Hence, this research mainly aims to calculate the minimum time essential to evacuate each region inside the threatened area and to find the evacuation start point for each region separately. The results will help local governments visualize the situations and conditions during disasters, reduce possible traffic jams on evacuation routes and thereby develop a comprehensive wide-area evacuation plan for natural disasters.
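The BPR formula named in the keywords relates a link's travel time to its congestion level and can be sketched as follows; the link values below are illustrative, not data from the study:

```python
def bpr_travel_time(free_flow_min, volume, capacity, alpha=0.15, beta=4.0):
    """Bureau of Public Roads link-performance function:
    t = t0 * (1 + alpha * (v/c)**beta), with the standard default
    coefficients alpha=0.15, beta=4."""
    return free_flow_min * (1.0 + alpha * (volume / capacity) ** beta)

t0 = 10.0     # free-flow travel time of an evacuation link (minutes)
cap = 1800.0  # link capacity (vehicles/hour)
for vol in (900.0, 1800.0, 2700.0):
    print("volume %4.0f veh/h -> %.2f min" % (vol, bpr_travel_time(t0, vol, cap)))
```

Because travel time grows with the fourth power of the volume-to-capacity ratio, staggering evacuation start times across regions, so that route volumes stay below capacity, shortens total evacuation completion time, which is the mechanism the research exploits.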

Keywords: BPR formula, disaster scenarios, evacuation completion time, wide-area evacuation

Procedia PDF Downloads 195
17189 Coupled Spacecraft Orbital and Attitude Modeling and Simulation in Multi-Complex Modes

Authors: Amr Abdel Azim Ali, G. A. Elsheikh, Moutaz Hegazy

Abstract:

This paper presents the verification of a modeling and simulation framework for a spacecraft (SC) attitude and orbit control system. A detailed formulation of the coupled SC orbital and attitude equations of motion is performed in order to achieve the accuracy required by multi-target tracking and orbit correction complex modes. Correction of the target parameters based on the estimated state vector during shooting time, to enhance pointing accuracy, is considered. A time-optimal nonlinear feedback control technique is used in order to take full advantage of the maximum torques that the controller can deliver. The simulation provides options for visualizing the SC trajectory and attitude in a 3D environment through an interface with V-Realm Builder and VR Sink in Simulink/MATLAB. Verification data confirm the simulation results, ensuring that the model and the proposed control law can be used successfully for large and fast tracking and are robust enough to keep the pointing accuracy within the desired limits under considerable uncertainty in inertia and control torque.
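A heavily simplified flavor of propagating translational and rotational states together can be sketched with an RK4 integrator over two-body motion plus torque-free Euler rotational dynamics; the inertia matrix and orbit are invented for illustration, and the paper's control law, coupling terms and attitude kinematics are omitted:

```python
import numpy as np

MU = 398600.4418  # km^3/s^2, Earth gravitational parameter

# Illustrative spacecraft inertia tensor (kg m^2), not the paper's vehicle.
I = np.diag([1200.0, 1000.0, 800.0])
I_inv = np.linalg.inv(I)

def deriv(t, s):
    """State s = [r(3), v(3), w(3)]: two-body translational motion plus
    torque-free Euler rotational dynamics."""
    r, v, w = s[:3], s[3:6], s[6:9]
    a = -MU * r / np.linalg.norm(r) ** 3
    wdot = I_inv @ (-np.cross(w, I @ w))
    return np.concatenate([v, a, wdot])

def rk4_step(f, t, s, h):
    k1 = f(t, s)
    k2 = f(t + h / 2, s + h / 2 * k1)
    k3 = f(t + h / 2, s + h / 2 * k2)
    k4 = f(t + h, s + h * k3)
    return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# 700-km circular orbit with small initial body rates.
r0 = np.array([7078.0, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / 7078.0), 0.0])
w0 = np.array([0.01, 0.02, 0.0])  # rad/s
s = np.concatenate([r0, v0, w0])

h = 1.0  # s
for step in range(3600):
    s = rk4_step(deriv, step * h, s, h)

print("radius after 1 h: %.1f km" % np.linalg.norm(s[:3]))
```

A full coupled model would add the gravity-gradient and control torques to `wdot`, attitude kinematics (e.g. quaternions), and the attitude-dependent perturbing accelerations; the sketch only shows the shared-integrator structure.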

Keywords: attitude and orbit control, time-optimal nonlinear feedback control, modeling and simulation, pointing accuracy, maximum torques

Procedia PDF Downloads 305
17188 Flexible Furniture in Urban Open Spaces: A Tool to Achieve Social Sustainability

Authors: Mahsa Ghafouri, Guita Farivarsadri

Abstract:

In urban open spaces, furniture plays a crucial role in meeting the various needs of users over time. Furniture consists of elements that not only facilitate physical needs individually but also fulfill social, psychological, and cultural demands on an urban scale. Creating adjustable urban spaces and using flexible furniture can make urban spaces usable for a wide range of activities and allow the engagement of users with distinct abilities and limitations in these activities. Flexibility in urban furniture can be seen as designing modular components that are movable, expandable, adjustable, and changeable to accommodate various functions. Although there is a great deal of research on flexibility and its distinct insights into achieving spaces that can cope with changing demands, this fundamental issue is often neglected in the design of urban furniture. However, in the long term, to address changing public needs over time, it is logical to bring this quality into the design process to make spaces that can be sustained for a long time. This study aims first to introduce the diverse kinds of flexible furniture that can be designed for urban public spaces, and then to understand how such flexible furniture can improve the quality of public open spaces and social interaction, make them more adaptable over time and, as a result, achieve social sustainability. This research is descriptive and is mainly based on an extensive literature review and the analysis and classification of existing examples from around the world. It aims to illustrate the various approaches that can help designers create flexible furniture to enhance the sustainability and quality of urban open spaces and, in this way, act as a guide for urban designers in this respect.

Keywords: flexible furniture, flexible design, urban open spaces, adaptability, moveability, social sustainability

Procedia PDF Downloads 36
17187 Identification of Knee Dynamic Profiles in High Performance Athletes with the Use of Motion Tracking

Authors: G. Espriú-Pérez, F. A. Vargas-Oviedo, I. Zenteno-Aguirrezábal, M. D. Moya-Bencomo

Abstract:

One of the injuries with the highest incidence among university-level athletes in the north of Mexico occurs in the knee. This injury causes absence from training and competition for at least 8 weeks. There is no active quantitative methodology, or protocol, that directly supports the clinical evaluation performed by medical personnel given the prevalence of knee injuries. The main objective is to contribute a quantitative tool that allows further development of preventive and corrective measures for these injuries. The study analyzed 55 athletes over 6 weeks, belonging to the disciplines of basketball, volleyball, soccer and swimming. Using a motion capture system (Nexus®, Vicon®), a three-dimensional analysis was developed that allows the measurement of the range of movement of the joint. To focus on the performance of the lower limb, eleven different movements were chosen from the Functional Performance Test, the Functional Movement Screen, and the Cincinnati Jump Test. The research identifies the profile of the natural movement of a healthy knee, under medical guidance, and its differences between sports. The data recovered from the single-leg crossover hop differentiated the type of knee movement among athletes. A maximum offset of 60° was found in the adduction movement between male and female athletes of the same discipline. The research also seeks to serve as a guideline for the implementation of protocols that help identify the recovery level after such injuries.
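A basic range-of-movement quantity extracted from motion-capture markers is the knee flexion angle between the thigh and shank vectors. A minimal sketch, with made-up marker coordinates rather than captured data:

```python
import numpy as np

def joint_angle_deg(hip, knee, ankle):
    """Knee angle from three 3D marker positions: the angle at the knee
    between the thigh vector (knee->hip) and the shank vector
    (knee->ankle). A straight leg gives ~180 degrees."""
    thigh = np.asarray(hip) - np.asarray(knee)
    shank = np.asarray(ankle) - np.asarray(knee)
    cosang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical frames: a straight leg, then a flexed-knee landing.
print(joint_angle_deg([0, 0, 1.0], [0, 0, 0.55], [0, 0, 0.1]))  # ~180.0
print(joint_angle_deg([0, 0.2, 1.0], [0, 0, 0.55], [0, 0.25, 0.15]))
```

Tracking this angle frame by frame across the eleven test movements yields the per-discipline motion profiles the study compares; adduction/abduction offsets are computed analogously in the frontal plane.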

Keywords: Cincinnati jump test, functional movement screen, functional performance test, knee, motion capture system

Procedia PDF Downloads 114
17186 Efficacy of Opicapone and Levodopa with Different Levodopa Daily Doses in Parkinson’s Disease Patients with Early Motor Fluctuations: Findings from the Korean ADOPTION Study

Authors: Jee-Young Lee, Joaquim J. Ferreira, Hyeo-il Ma, José-Francisco Rocha, Beomseok Jeon

Abstract:

The effective management of wearing-off is a key driver of medication changes for patients with Parkinson's disease (PD) treated with levodopa (L-DOPA). While L-DOPA is well tolerated and efficacious, its clinical utility over time is often limited by the development of complications such as dyskinesia. Still, a common first-line option is adjusting the daily L-DOPA dose, followed by adjunctive therapies, usually accounted for through the L-DOPA equivalent daily dose (LEDD). The LEDD conversion formulae are a tool used to compare the equivalence of anti-PD medications. The aim of this work is to compare the effects of opicapone (OPC) 50 mg, a catechol-O-methyltransferase (COMT) inhibitor, and an additional 100 mg dose of L-DOPA in reducing off time in PD patients with early motor fluctuations receiving different daily L-DOPA doses. OPC has been found to be well tolerated and efficacious in the advanced PD population. This work utilized patients' home diary data from a 4-week Phase 2 pharmacokinetics clinical study. The Korean ADOPTION study randomized (1:1) patients with PD and early motor fluctuations treated with up to 600 mg of L-DOPA given 3–4 times daily. The main endpoint was the change from baseline in off time in the subgroup of patients receiving 300–400 mg/day L-DOPA at baseline plus OPC 50 mg and in the subgroup receiving >300 mg/day L-DOPA at baseline plus an additional dose of L-DOPA 100 mg. Of the 86 patients included in this subgroup analysis, 39 received OPC 50 mg and 47 received L-DOPA 100 mg. At baseline, both the L-DOPA total daily dose and the LEDD were lower in the L-DOPA 300–400 mg/day plus OPC 50 mg group than in the L-DOPA >300 mg/day plus L-DOPA 100 mg group. However, at Week 4, LEDD was similar between the two groups. The mean (±standard error) reduction in off time was approximately three-fold greater for the OPC 50 mg group than for the L-DOPA 100 mg group: -63.0 (14.6) minutes for patients treated with L-DOPA 300–400 mg/day plus OPC 50 mg, versus -22.1 (9.3) minutes for those receiving L-DOPA >300 mg/day plus L-DOPA 100 mg. In conclusion, despite a similar LEDD, OPC demonstrated a significantly greater reduction in off time than an additional 100 mg L-DOPA dose. The effect of OPC appears to be LEDD-independent, suggesting that caution should be exercised when employing LEDD to guide treatment decisions, as it does not take into account the timing of each dose, the onset and duration of the therapeutic effect, or individual responsiveness. Additionally, OPC could be used to keep the L-DOPA dose as low as possible for as long as possible, to avoid the development of motor complications, which are a significant source of disability.
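As a hedged illustration of how an LEDD figure aggregates a regimen, here is a toy calculator; the conversion factors follow commonly cited tables, but published values vary between studies, so both the factor set and the example regimens are illustrative only, not the ADOPTION study's calculations:

```python
# Illustrative LEDD conversion factors (mg of drug -> L-DOPA-equivalent mg).
# Published tables differ; treat these as placeholders, not clinical guidance.
LEDD_FACTORS = {
    "levodopa": 1.0,        # immediate-release L-DOPA
    "levodopa_comt": 1.33,  # L-DOPA taken with a COMT inhibitor
    "pramipexole": 100.0,
    "ropinirole": 20.0,
}

def ledd(regimen_mg):
    """Sum each drug's daily dose (mg) multiplied by its conversion factor."""
    return sum(LEDD_FACTORS[drug] * dose for drug, dose in regimen_mg.items())

# 400 mg/day L-DOPA plus a COMT inhibitor vs. 500 mg/day plain L-DOPA:
print(round(ledd({"levodopa_comt": 400.0}), 2))  # 532.0
print(round(ledd({"levodopa": 500.0}), 2))       # 500.0
```

The point the abstract makes is visible even in this toy: two regimens with similar LEDD totals can differ in dose timing and effect duration, so LEDD alone is a coarse guide for treatment decisions.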

Keywords: opicapone, levodopa, pharmacokinetics, off-time

Procedia PDF Downloads 43
17185 Performance of Heifer Camels (Camelus dromedarius) on Native Range Supplemented with Different Energy Levels

Authors: Shehu, B., Muhammad, B. F., Madigawa, I. L., H. A. Alkali

Abstract:

The study was conducted to assess heifer camel behavior and live weight changes on native range supplemented with different energy levels. A total of nine camels aged between 2 and 3 years were randomly allotted into three groups supplemented with 3400, 3600 and 3800 kcal, designated A, B and C, respectively. The data obtained were subjected to analysis of variance in a Completely Randomized Design. The heifers spent an average of 371.70 min/day (64% of daylight time) browsing native pasture and 2.30 min/day (6%) sand bathing. Significantly more time was spent by heifers browsing Leptadenia hastata (P<0.001), Dichrostachys cinerea (P<0.01), Acacia nilotica (P<0.001) and Ziziphus spina-christi (P<0.05) in the early dry season (January). No significant difference was recorded in browsing time on Tamarindus indica, Adansonia digitata, Piliostigma reticulatum, Parkia biglobosa and Azadirachta indica. No significant (P>0.05) live weight change was recorded in the heifers across the three energy levels. It was concluded that the nutritive browse species in the study area could meet camel nutrient requirements, including energy. Further research on the effect of period on camel nutrient requirements under different physiological conditions is recommended.

Keywords: heifer, camel, grazing, pasture

Procedia PDF Downloads 529
17184 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver

Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto

Abstract:

The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground and is composed of ions and electrons, i.e., plasma. This plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observations from the top and bottom of the ionosphere have long been used to investigate this plasma. Recently, continuous monitoring of TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are basically built for land surveying, has been conducted in several countries. However, these stations use multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting to pseudorange data obtained at a known location, under a thin-layer ionosphere assumption. The validity of the method was evaluated against measurements from the Japanese GNSS observation network GEONET, and the performance of the single-frequency GPS measurements was compared with the results of dual-frequency measurements.
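The least-squares fit of a polynomial TEC surface can be sketched on synthetic data; the second-order basis, the coefficient values and the noise level below are assumptions for illustration, not the paper's processing chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic vertical TEC surface over a small lat/lon patch around the
# station, plus least-squares recovery of its polynomial coefficients.
true_coef = np.array([20.0, 1.5, -0.8, 0.3, -0.2, 0.1])  # TECU, invented

def design_matrix(lat, lon):
    """Second-order polynomial basis in latitude and longitude offsets."""
    return np.column_stack([np.ones_like(lat), lat, lon,
                            lat * lon, lat**2, lon**2])

lat = rng.uniform(-2.0, 2.0, 200)  # degrees from station latitude
lon = rng.uniform(-2.0, 2.0, 200)
A = design_matrix(lat, lon)
# Pretend these are TEC values derived from pseudorange residuals, with noise.
vtec = A @ true_coef + rng.normal(0.0, 0.5, 200)

coef, *_ = np.linalg.lstsq(A, vtec, rcond=None)
print("recovered coefficients:", np.round(coef, 2))
```

In the real method the observations come from single-frequency pseudoranges mapped through ionospheric pierce points under the thin-layer assumption, but the normal-equation structure of the fit is the same as in this sketch.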

Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC

Procedia PDF Downloads 117