Search results for: e-content producing algorithm
1391 Metamorphic Approach in Architecture Studio to Re-Imagine Drawings in Acknowledgement of Architectural/Artistic Identity
Authors: Hassan Wajid, Syed T. Ahmed, Syed G. Haider Jr., Razia Latif, Ahsan Ali, Maira Anam
Abstract:
The phenomenon of metamorphosis can be associated with any object, organism, or structure gradually and progressively going through a change of systemic or morphological form. This phenomenon can be integrated into the teaching of drawing to architecture students. In architectural drawings, the main focus and purpose of metamorphosis is not to completely imitate any object. In the process of drawing, changes in systemic or morphological form happen until the process is complete, and the visuals of the complete process change the drawing, opening up possibilities for the imagination of the perceivers. Metamorphosis in architectural drawings begins with an initial form and, through various noticeable stages, ends in a final form or manifestation. How much of the initial form is manifested in the final form, and progressively among the various intermediate stages, becomes an indication of the nature of metamorphosis as a phenomenon. It is important at this stage to clarify that the term metamorphosis is presently being co-opted from its original domain, usually the life sciences. In the current exercise, the architectural drawings are to act as an operative analog process transforming one image of art and/or architecture in its broadest sense. That composition is claimed to have come from one source (an individual work, a cultural artifact, a civilizational remain). It dialectically meets, opposes, or confronts some carefully chosen alien opposite from a different domain. As an example, the layers of a detailed drawing of a Turkish prayer rug of 5:7 ratio over a detailed architectural plan of a religious, historical complex can be observed such that the two drawings, though at markedly different scales, dialectically converse with one another through their mutual congruencies.
In the final stage, the idea resolves contradictions across the scales to initiate the analogous roles of a metamorphosed third reality, which suggests a previously unacknowledged architectural or artistic identity. The proposed paper explores the trajectory of reproduction by analyzing drawings through detailed drawing stages and analyzes challenges as well as opportunities in the discovered realm of imagination. This description further aims at identifying factors influencing creativity and innovation in producing architectural drawings through the process of observing drawings from inception to the concluding stage.
Keywords: architectural drawings, metamorphosis, perceptions, discovery
Procedia PDF Downloads 106
1390 Detect Cable Force of Cable-Stayed Bridge from Accelerometer Data of SHM in Real Time
Authors: Nguyen Lan, Le Tan Kien, Nguyen Pham Gia Bao
Abstract:
The cable-stayed bridge belongs to the combined system, in which the cables are major structural elements. Cable-stayed bridges with large spans are often equipped with structural health monitoring (SHM) systems to collect data for bridge health diagnosis. Cable tension monitoring is one part of structural monitoring. It is common to measure cable tension either with a direct force sensor or with a cable vibration accelerometer, inferring the tension indirectly from the cable vibration frequency. Translating cable vibration acceleration data to real-time tension requires some necessary calculations and programming. This paper introduces the algorithm and LabVIEW program that convert cable vibration acceleration data to real-time tension. The research results are applied to the monitoring systems of the Tran Thi Ly and Song Hieu cable-stayed bridges in Vietnam.
Keywords: cable-stayed bridge, cable force, structural health monitoring (SHM), fast Fourier transform (FFT), real time, vibrations
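As an illustrative sketch (not the authors' LabVIEW code), the chain described here is: pick the dominant vibration frequency from the acceleration record, then invert the taut-string relation f_n = (n/2L)·sqrt(T/m) for tension. The cable mass per unit length (60 kg/m) and length (120 m) below are assumed example values.

```python
import math

def dominant_frequency(samples, dt):
    """Pick the dominant frequency (Hz) by a naive DFT magnitude scan."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(samples[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(samples[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k / (n * dt)  # bin index -> frequency in Hz

def cable_tension(mass_per_len, length, f_n, n=1):
    """Taut-string model: f_n = (n / 2L) * sqrt(T / m)  =>  T = 4 m L^2 (f_n / n)^2."""
    return 4.0 * mass_per_len * length ** 2 * (f_n / n) ** 2

# Synthetic 2 Hz acceleration record sampled at 50 Hz (assumed test signal)
dt = 0.02
acc = [math.sin(2 * math.pi * 2.0 * i * dt) for i in range(500)]
f1 = dominant_frequency(acc, dt)          # recovers ~2.0 Hz
tension = cable_tension(60.0, 120.0, f1)  # N, for an assumed 120 m cable at 60 kg/m
```

A production system would use an FFT rather than this O(n²) scan, and real cables need corrections for sag and bending stiffness; the taut-string formula is only the first-order approximation.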
Procedia PDF Downloads 74
1389 Metrics and Methods for Improving Resilience in Agribusiness Supply Chains
Authors: Golnar Behzadi, Michael O'Sullivan, Tava Olsen, Abraham Zhang
Abstract:
By definition, increasing supply chain resilience improves the supply chain’s ability to return to normal, or to an even more desirable situation, quickly and efficiently after being hit by a disruption. This is especially critical in agribusiness supply chains, where the products are perishable and have a short life-cycle. In this paper, we propose a resilience metric to capture and improve the recovery process, in terms of both performance and time, of an agribusiness supply chain following either a supply- or a demand-side disruption. We build a model that determines optimal supply chain recovery planning decisions and selects the best resilient strategies that minimize the loss of profit during the recovery time window. The model is formulated as a two-stage stochastic mixed-integer linear programming problem and solved with a branch-and-cut algorithm. The results show that the optimal recovery schedule is highly dependent on the duration of the time window allowed for recovery. In addition, the profit loss during recovery is reduced by utilizing the proposed resilient actions.
Keywords: agribusiness supply chain, recovery, resilience metric, risk management
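One common way to quantify resilience over a recovery window, sketched below, is the normalized area between the baseline performance and the disrupted performance curve; this is a generic illustration, not necessarily the authors' exact metric.

```python
def recovery_loss(performance, baseline, dt=1.0):
    """Total performance (e.g. profit) lost versus baseline over the recovery window."""
    return sum(max(baseline - p, 0.0) * dt for p in performance)

def resilience(performance, baseline, dt=1.0):
    """1.0 means no loss over the window; 0.0 means total loss."""
    window = baseline * dt * len(performance)
    return 1.0 - recovery_loss(performance, baseline, dt) / window

# Assumed recovery profile: performance climbs back to a baseline of 100
profile = [40.0, 60.0, 80.0, 100.0, 100.0]
r = resilience(profile, baseline=100.0)
```

A faster recovery schedule raises the curve earlier, shrinking the loss area and pushing the metric toward 1.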
Procedia PDF Downloads 398
1388 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip
Authors: Sina Saadati
Abstract:
Multiprocessor Systems-on-Chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct aims. Most of the proposed task schedulers attempt to improve energy consumption. In some schedulers, the processor's temperature is considered in order to increase the system's reliability and performance. In this research, we propose a new method for thermal-aware task scheduling based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process. Factors such as ambient temperature, season (which is important for some embedded systems), processor speed, and the computing type of tasks have a complex relationship with the final temperature of the system. This issue can be solved using a machine learning algorithm. Another point is that our solution makes the system intelligent, so that it can be adaptive. We also show that the computational complexity of the proposed method is low; as a consequence, it is also suitable for battery-powered systems.
Keywords: task scheduling, MPSoC, artificial neural network, machine learning, computer architecture, artificial intelligence
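The core of such a scheduler is a small network mapping the listed factors to a predicted temperature. A minimal forward pass is sketched below; the feature set and all weights are purely illustrative stand-ins for what training would produce, not values from the study.

```python
import math

def relu(x):
    return max(0.0, x)

def mlp_predict(features, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network (weights would come from training)."""
    hidden = [relu(sum(w * f for w, f in zip(ws, features)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Assumed features: ambient temp (C), clock speed (GHz), task compute intensity (0-1)
features = [25.0, 2.4, 0.8]
w_hidden = [[0.5, 2.0, 10.0], [0.1, 1.0, 5.0]]   # illustrative weights only
b_hidden = [0.0, 1.0]
w_out = [1.0, 0.5]
b_out = 5.0
predicted_temp = mlp_predict(features, w_hidden, b_hidden, w_out, b_out)
```

A scheduler would evaluate this prediction for each candidate (task, core) pairing and dispatch to the placement with the lowest predicted temperature.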
Procedia PDF Downloads 103
1387 Lowering Error Floors by Concatenation of Low-Density Parity-Check and Array Code
Authors: Cinna Soltanpur, Mohammad Ghamari, Behzad Momahed Heravi, Fatemeh Zare
Abstract:
Low-density parity-check (LDPC) codes have been shown to deliver capacity-approaching performance; however, problematic graphical structures (e.g., trapping sets) in the Tanner graph of some LDPC codes can cause high error floors in bit-error-ratio (BER) performance under the conventional sum-product algorithm (SPA). This paper presents a serial concatenation scheme to avoid the trapping sets and to lower the error floors of the LDPC code. The outer code in the proposed concatenation is the LDPC code, and the inner code is a high-rate array code. This approach applies an iterative hybrid process between BCJR decoding for the array code and the SPA for the LDPC code, together with bit-pinning and bit-flipping techniques. The Margulis code of size (2640, 1320) has been used for the simulation, and it has been shown that the proposed concatenation and decoding scheme can considerably improve the error floor performance with minimal rate loss.
Keywords: concatenated coding, low-density parity-check codes, array code, error floors
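To make the bit-flipping idea concrete, here is a minimal hard-decision bit-flipping decoder on a tiny (7,4) parity-check matrix; real LDPC decoding uses the soft SPA on much larger sparse matrices, so this is only a toy sketch of the flipping step the abstract mentions.

```python
def bit_flip_decode(H, bits, max_iters=10):
    """Hard-decision bit-flipping decoding for a binary parity-check matrix H."""
    bits = list(bits)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]
        if not any(syndrome):
            return bits  # all parity checks satisfied
        # flip the bit involved in the most unsatisfied checks
        votes = [sum(s for row, s in zip(H, syndrome) if row[j])
                 for j in range(len(bits))]
        bits[votes.index(max(votes))] ^= 1
    return bits

# (7,4) Hamming-style parity-check matrix; a single bit error gets corrected
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
sent = [1, 0, 1, 1, 0, 1, 0]       # a valid codeword (H @ sent = 0 mod 2)
recv = sent[:]
recv[2] ^= 1                       # channel flips one bit
decoded = bit_flip_decode(H, recv)
```

Trapping sets are exactly the configurations where iterations like this get stuck oscillating instead of reaching a zero syndrome, which is what the proposed concatenation is designed to avoid.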
Procedia PDF Downloads 357
1386 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data
Authors: Mohammed Abdullah Alshahrani
Abstract:
The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
Keywords: partial least squares regression, genetics data, negative filter factors, high-dimensional data, highly correlated data
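The latent-variable construction at the heart of PLS can be sketched for the first component of PLS1 (univariate response): the weight vector is proportional to the covariance X^T y, and the latent score is the projection of centered X onto it. This is the standard NIPALS first step, shown here on assumed toy data, not the sparse Cauchy-penalized variant the abstract proposes.

```python
import math

def pls1_first_component(X, y):
    """First PLS component for univariate y: weights w ∝ Xc^T yc, scores t = Xc w."""
    n, p = len(X), len(X[0])
    xm = [sum(X[i][j] for i in range(n)) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[X[i][j] - xm[j] for j in range(p)] for i in range(n)]  # center columns
    yc = [yi - ym for yi in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = math.sqrt(sum(wj * wj for wj in w))
    w = [wj / norm for wj in w]                                    # unit weight vector
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]  # latent scores
    return w, t

# Toy data: first predictor drives y, second is near-noise (assumed values)
X = [[1.0, 0.2], [2.0, 0.1], [3.0, 0.4], [4.0, 0.3]]
y = [1.0, 2.0, 3.0, 4.0]
w, t = pls1_first_component(X, y)   # w puts almost all its weight on column 0
```

Subsequent components repeat this after deflating X by the extracted score; the sparse variant in the paper would additionally shrink small weights toward zero.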
Procedia PDF Downloads 51
1385 Relation between Physical and Mechanical Properties of Concrete Paving Stones Using Neuro-Fuzzy Approach
Authors: Erion Luga, Aksel Seitllari, Kemal Pervanqe
Abstract:
This study investigates the relation between the physical and mechanical properties of concrete paving stones using a neuro-fuzzy approach. For this purpose, 200 samples of concrete paving stones were selected randomly from different sources. The first phase included the determination of physical properties of the samples, such as water absorption capacity, porosity, and unit weight. After that, the indirect tensile strength test and compressive strength test of the samples were performed. In the second phase, an adaptive neuro-fuzzy approach was employed to simulate the nonlinear mapping between the above-mentioned physical properties and the mechanical properties of paving stones. The neuro-fuzzy models use a Sugeno-type fuzzy inference system. The model parameters were adapted using a hybrid learning algorithm, and the input space was fuzzified by considering grid partitioning. Based on the observed data and the data estimated through the ANFIS models, it is concluded that the neuro-fuzzy system exhibits a satisfactory performance.
Keywords: paving stones, physical properties, mechanical properties, ANFIS
Procedia PDF Downloads 345
1384 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital videos and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one of the mechanisms used to generate a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most of the existing approaches heuristically select key frames; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content as well as minimizing the redundancy among selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces a video summary with better coverage of the entire video content and less redundancy among key frames compared to the state-of-the-art approaches.
Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information
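The coverage-minus-redundancy objective can be illustrated with a greedy selector. For readability the sketch below substitutes a trivial distance-based similarity for quadratic mutual information, and the 1-D "frame features" are assumed toy values; the paper's actual estimator and optimizer differ.

```python
def similarity(a, b):
    """Toy frame similarity: inverse of mean absolute feature difference."""
    d = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + d)

def select_key_frames(frames, k):
    """Greedily pick k frames maximizing coverage of all frames minus redundancy."""
    selected = []
    for _ in range(k):
        best, best_score = None, float("-inf")
        for i in range(len(frames)):
            if i in selected:
                continue
            cand = selected + [i]
            coverage = sum(max(similarity(frames[j], frames[c]) for c in cand)
                           for j in range(len(frames)))
            redundancy = sum(similarity(frames[a], frames[b])
                             for ai, a in enumerate(cand) for b in cand[ai + 1:])
            if coverage - redundancy > best_score:
                best, best_score = i, coverage - redundancy
        selected.append(best)
    return sorted(selected)

# Two visually distinct "scenes" of three frames each (1-D feature vectors)
frames = [[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]]
keys = select_key_frames(frames, 2)   # picks one frame from each scene
```

The redundancy term is what prevents the selector from picking two near-identical frames from the same scene even though each would individually score well on coverage.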
Procedia PDF Downloads 267
1383 Cost Effective Real-Time Image Processing Based Optical Mark Reader
Authors: Amit Kumar, Himanshu Singal, Arnav Bhavsar
Abstract:
In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQ). The responses to these MCQ-based exams are recorded on Optical Mark Reader (OMR) sheets. Evaluation of an OMR sheet requires separate specialized machines for scanning and marking. The sheets used by these machines are special and cost more than a normal sheet. The available process is uneconomical and dependent on paper thickness, scanning quality, paper orientation, special hardware, and customized software. This study tries to tackle the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image processing based algorithm which can be used to read and evaluate scanned OMR sheets with no special hardware required. It eliminates the use of special OMR sheets; responses recorded on a normal sheet are enough for evaluation. The proposed system takes care of color, brightness, rotation, and small imperfections in the OMR sheet images.
Keywords: OMR, image processing, Hough circle transform, interpolation, detection, binary thresholding
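Once bubble locations are known (e.g. via a Hough circle transform), the actual reading step reduces to binary thresholding and a fill-ratio test per bubble. A minimal sketch on an assumed toy grayscale grid:

```python
def read_omr_row(image, bubbles, fill_threshold=0.5):
    """Return the index of the marked bubble in one question row, or None.

    `image` is a 2-D grid of grayscale pixels (0 = dark, 255 = white); each
    bubble is (top, left, height, width) in pixel coordinates.
    """
    best, best_ratio = None, 0.0
    for idx, (top, left, h, w) in enumerate(bubbles):
        dark = sum(1 for r in range(top, top + h)
                     for c in range(left, left + w)
                     if image[r][c] < 128)        # binary thresholding
        ratio = dark / (h * w)
        if ratio > fill_threshold and ratio > best_ratio:
            best, best_ratio = idx, ratio
    return best

# Toy 4x12 "scan": three 4x4 bubble cells, the middle one filled in
image = [[255] * 4 + [0] * 4 + [255] * 4 for _ in range(4)]
bubbles = [(0, 0, 4, 4), (0, 4, 4, 4), (0, 8, 4, 4)]
answer = read_omr_row(image, bubbles)
```

The fill-ratio threshold is what gives robustness to brightness variation and small imperfections: a stray pen dot darkens only a few pixels and stays below it.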
Procedia PDF Downloads 175
1382 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm
Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine
Abstract:
In this paper, numerical simulations have been carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to detect the optimal configuration, improving the power coefficient and/or the efficiency. Radial basis functions (RBF) and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀, and the flapping frequency f. ANSYS FLUENT software has been used to solve the governing equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil has been realized using a user-defined function (UDF). The results reveal an average power coefficient and efficiency of 0.78 and 0.338 with an inexpensive low-fidelity model and a total relative error of 4.1% versus the simulation. The performance of the simulated RBF-NSGA-II optimum has been improved by 1.2% compared with the validated model.
Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II
Procedia PDF Downloads 46
1381 Kuwait Environmental Remediation Program: Waste Management Data Analytics for Planning and Optimization of Waste Collection
Authors: Aisha Al-Baroud
Abstract:
The United Nations Compensation Commission (UNCC), the Kuwait National Focal Point (KNFP), and the Kuwait Oil Company (KOC) cooperated in a joint project to undertake comprehensive and collaborative efforts to remediate 26 million m³ of crude-oil-contaminated soil that had resulted from the Gulf War in 1990/1991. These efforts are referred to as the Kuwait Environmental Remediation Program (KERP). KOC has developed a Total Remediation Solution (TRS) for KERP, which will guide the remediation projects. It comprises alternative remedial solutions with treatment techniques, inclusive of limited landfills for the disposal of non-treatable soil materials, and relies on treating certain ranges of Total Petroleum Hydrocarbon (TPH) contamination with the most appropriate remediation techniques. The KERP remediation projects will be implemented within KOC's oilfields in North and South East Kuwait. The objective of this remediation project is to clear land for field development and treat all the oil-contaminated features (dry oil lakes, wet oil lakes, and oil-contaminated piles) through the TRS plan, in order to optimize the treatment processes and minimize the volume of contaminated materials to be placed into landfills. The treatment strategy will comprise Excavation and Transportation (E&T) of oil-contaminated soils from contaminated land to remote treatment areas and the use of appropriate remediation technologies, or a combination of treatment technologies, to achieve the remediation target criteria (RTC). KOC has awarded five mega projects to achieve the same and is currently in the execution phase. As a part of the company's commitment to the environment and in fulfillment of the mandatory HSSEMS procedures, all the remediation contractors need to report waste generation data from the various project activities on a monthly basis. Data on waste generation are collected in order to implement cost-efficient and sustainable waste management operations.
Data analytics approaches can be built on top of the data to produce more detailed and timely waste generation information as the basis for waste management and collection. The results obtained highlight the potential of advanced data analytics approaches in producing more detailed waste generation information for planning and optimization of waste collection and recycling.
Keywords: waste, technologies, KERP, data, soil
Procedia PDF Downloads 113
1380 Vortices Structure in Internal Laminar and Turbulent Flows
Authors: Farid Gaci, Zoubir Nemouchi
Abstract:
A numerical study of laminar and turbulent fluid flows in a 90° bend of square section was carried out. Three-dimensional meshes, based on hexahedral cells, were generated. The QUICK scheme was employed to discretize the convective term in the transport equations. The SIMPLE algorithm was adopted to treat the velocity-pressure coupling. The flow structure obtained showed interesting features, such as recirculation zones and counter-rotating pairs of vortices. The performance of three different turbulence models was evaluated: the standard k-ω model, the SST k-ω model, and the Reynolds Stress Model (RSM). Overall, it was found that the multi-equation model performed better than the two-equation models. In fact, the existence of four pairs of counter-rotating cells in the straight duct upstream of the bend was predicted by the RSM closure, but not by the standard eddy-viscosity model nor by the SST k-ω model. The analysis of the results led to a better understanding of the induced three-dimensional secondary flows and of the behavior of the local pressure coefficient and the friction coefficient.
Keywords: curved duct, counter-rotating cells, secondary flow, laminar, turbulent
Procedia PDF Downloads 336
1379 Hierarchical Cluster Analysis of Raw Milk Samples Obtained from Organic and Conventional Dairy Farming in Autonomous Province of Vojvodina, Serbia
Authors: Lidija Jevrić, Denis Kučević, Sanja Podunavac-Kuzmanović, Strahinja Kovačević, Milica Karadžić
Abstract:
In the present study, Hierarchical Cluster Analysis (HCA) was applied in order to determine the differences between milk samples originating from a conventional dairy farm (CF) and an organic dairy farm (OF) in AP Vojvodina, Republic of Serbia. The clustering was based on the average values of the saturated fatty acid (SFA) content and the unsaturated fatty acid (UFA) content obtained for every season. Therefore, the HCA included the annual SFA and UFA content values. The clustering procedure was carried out on the basis of Euclidean distances and the single linkage algorithm. The obtained dendrograms indicated that the clustering of UFA in OF was much more uniform compared to the clustering of UFA in CF. In OF, spring stands out from the other months of the year. The same can be noticed for CF, where winter is separated from the other months. The results could be expected, because the composition of the fatty acid content is greatly influenced by the season and the nutrition of dairy cows during the year.
Keywords: chemometrics, clustering, food engineering, milk quality
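The Euclidean-distance, single-linkage procedure used here can be sketched directly: repeatedly merge the two clusters whose closest members are nearest. The SFA/UFA-style percentages below are assumed toy values, not the study's measurements.

```python
import math

def single_linkage(points, n_clusters):
    """Agglomerative clustering with Euclidean distance and single linkage."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between closest members
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]   # merge the two nearest clusters
        del clusters[j]
    return [sorted(c) for c in clusters]

# Assumed (SFA%, UFA%) seasonal profiles: two farms should separate cleanly
data = [(70.1, 29.9), (69.8, 30.2), (70.4, 29.6),   # conventional-like
        (64.0, 36.0), (63.5, 36.5), (64.2, 35.8)]   # organic-like
groups = single_linkage(data, 2)
```

Cutting the merge sequence at different heights yields the dendrogram levels the abstract refers to.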
Procedia PDF Downloads 281
1378 Indexing and Incremental Approach Using Map Reduce Bipartite Graph (MRBG) for Mining Evolving Big Data
Authors: Adarsh Shroff
Abstract:
Big data is a collection of datasets so large and complex that it becomes difficult to process using database management tools. Operations like search, analysis, and visualization on big data are performed using data mining, which is the process of extracting patterns or knowledge from large datasets. Over time, however, data mining results become stale and obsolete. Incremental processing is a promising approach to refreshing mining results; it utilizes previously saved states to avoid the expense of re-computation from scratch. This project uses i2MapReduce, an incremental processing extension to MapReduce, the most widely used framework for mining big data. i2MapReduce performs key-value pair level incremental processing rather than task-level re-computation, supports not only one-step computation but also more sophisticated iterative computation, which is widely used in data mining applications, and incorporates a set of novel techniques to reduce I/O overhead for accessing preserved fine-grain computation states. To evaluate the mining results, i2MapReduce is assessed using a one-step algorithm and three iterative algorithms with diverse computation characteristics for efficient mining.
Keywords: big data, MapReduce, incremental processing, iterative computation
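Key-value pair level incremental processing can be illustrated with a word count: when one document changes, only its retracted and new pairs touch the preserved state, instead of re-running the whole job. This is a single-process sketch of the idea, not the i2MapReduce API.

```python
def map_words(doc_id, text):
    """Map phase of a word count: one (word, 1) pair per token."""
    return [(word, 1) for word in text.split()]

def incremental_update(state, old_pairs, new_pairs):
    """Key-value level refresh: subtract retracted pairs, add new ones."""
    for key, value in old_pairs:
        state[key] = state.get(key, 0) - value
        if state[key] == 0:
            del state[key]
    for key, value in new_pairs:
        state[key] = state.get(key, 0) + value
    return state

# Initial run over two documents
state = {}
incremental_update(state, [], map_words(1, "big data mining"))
incremental_update(state, [], map_words(2, "big data tools"))
# Document 2 changes: only its delta is reprocessed, not the whole corpus
state = incremental_update(state, map_words(2, "big data tools"),
                           map_words(2, "big data analytics"))
```

The preserved `state` plays the role of the fine-grain computation states the abstract describes; the I/O savings come from reading and writing only the keys in the delta.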
Procedia PDF Downloads 354
1377 Artificial Neural Network-Based Short-Term Load Forecasting for Mymensingh Area of Bangladesh
Authors: S. M. Anowarul Haque, Md. Asiful Islam
Abstract:
Electrical load forecasting is considered to be one of the most indispensable parts of a modern-day electrical power system. To ensure a reliable and efficient supply of electric energy, special emphasis should be put on the predictive capability of the electricity supply. Artificial Neural Network-based approaches have emerged as a significant area of interest in electric load forecasting research. This paper proposes an Artificial Neural Network model based on the particle swarm optimization algorithm for improved electric load forecasting for Mymensingh, Bangladesh. The forecasting model is developed and simulated in the MATLAB environment with a large number of training datasets. The model is trained on eight input parameters, including historical load and weather data. The predicted load data are then compared with an available dataset for validation. The proposed neural network model proves to be more reliable in terms of day-wise load forecasting for Mymensingh, Bangladesh.
Keywords: load forecasting, artificial neural network, particle swarm optimization
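In this setup, PSO searches the network's weight space by minimizing training error. The mechanics can be sketched with a minimal swarm minimizing an assumed toy "training loss" surface (the real objective would be the ANN's error over the eight-input training set):

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-10.0, hi=10.0, seed=7):
    """Minimal particle swarm optimization minimizing f over [lo, hi]^dim."""
    rnd = random.Random(seed)
    pos = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    gbest = min(pbest, key=f)[:]              # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy "training error" surface with its minimum at (3, -2); assumed for illustration
def loss(x):
    return (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2

best = pso(loss, dim=2)
```

Replacing `loss` with the network's mean squared error over the training data turns this into the PSO-trained ANN the abstract describes; the swarm then avoids some of the local minima gradient descent can get stuck in.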
Procedia PDF Downloads 172
1376 An Alternative Method for Computing Clothoids
Authors: Gerardo Casal, Miguel E. Vázquez-Méndez
Abstract:
The clothoid (also known as the Cornu spiral or Euler spiral) is a curve characterized by the fact that its curvature is proportional to its length. This property makes it widely used as a transition curve in designing the layout of roads and railway tracks. In this work, starting from the geometrical property characterizing the clothoid, its parametric equations are obtained, and two algorithms to compute it are compared. The first (classical) is widely used in surveying schools and is based on the use of explicit formulas obtained from Taylor expansions of the sine and cosine functions. The second (alternative) is a very simple algorithm based on the numerical solution of the initial value problem giving the clothoid parameterization. Both methods are compared on some typical surveying problems. The alternative method does not use complex formulas, so it is conceptually very simple and easy to apply. It gives good results even when the classical method fails (when the quotient between length and radius of curvature is high), needs no subsequent translations or rotations and, consequently, seems an efficient tool for designing the layout of roads and railway tracks.
Keywords: transition curves, railroad and highway engineering, Runge-Kutta methods
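The two approaches can be sketched side by side for a clothoid with curvature κ(s) = s/A², so that θ(s) = s²/(2A²): the classical route truncates the Taylor series of the Fresnel-type integrals, while the alternative integrates x' = cos θ, y' = sin θ numerically (here with RK4; the choice of A = 100 and arc length 80 is an assumed example).

```python
import math

def clothoid_series(s, A, terms=6):
    """Classical approach: truncated Taylor series of the clothoid integrals."""
    x = y = 0.0
    for n in range(terms):
        t = s ** (4 * n + 1) / (2 * A * A) ** (2 * n)
        x += (-1) ** n * t / (math.factorial(2 * n) * (4 * n + 1))
        t = s ** (4 * n + 3) / (2 * A * A) ** (2 * n + 1)
        y += (-1) ** n * t / (math.factorial(2 * n + 1) * (4 * n + 3))
    return x, y

def clothoid_rk4(s_end, A, steps=1000):
    """Alternative approach: RK4 on x' = cos(theta(s)), y' = sin(theta(s))."""
    theta = lambda u: u * u / (2 * A * A)
    h = s_end / steps
    x = y = s = 0.0
    for _ in range(steps):
        k = [(math.cos(theta(s)), math.sin(theta(s))),
             (math.cos(theta(s + h / 2)), math.sin(theta(s + h / 2))),
             (math.cos(theta(s + h / 2)), math.sin(theta(s + h / 2))),
             (math.cos(theta(s + h)), math.sin(theta(s + h)))]
        x += h * (k[0][0] + 2 * k[1][0] + 2 * k[2][0] + k[3][0]) / 6
        y += h * (k[0][1] + 2 * k[1][1] + 2 * k[2][1] + k[3][1]) / 6
        s += h
    return x, y

xs, ys = clothoid_series(80.0, 100.0)   # the two methods agree here,
xr, yr = clothoid_rk4(80.0, 100.0)      # since s^2/(2A^2) is still small
```

For large s/A the series needs ever more terms and eventually loses accuracy, while the step-by-step integration keeps working: that is the failure mode of the classical method the abstract refers to.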
Procedia PDF Downloads 284
1375 Hierarchical Queue-Based Task Scheduling with CloudSim
Authors: Wanqing You, Kai Qian, Ying Qian
Abstract:
The concepts of Cloud Computing provide users with infrastructure, platform, and software as a service, which makes those services more accessible to people via the Internet. To better analyze the performance of Cloud Computing provisioning policies as well as resource allocation strategies, a toolkit named CloudSim was proposed. With CloudSim, the Cloud Computing environment can be easily constructed by modelling and simulating cloud computing components, such as datacenters, hosts, and virtual machines. A good scheduling strategy is the key to achieving load balancing among different machines and to improving the utilization of basic resources. The existing scheduling algorithms may work well in some presumptive cases on a single machine; however, they are unable to make the best decision for the unforeseen future. In a real-world scenario, there would be numerous tasks as well as several virtual machines working in parallel. Based on the concept of multiple queues, this paper presents a new scheduling algorithm to schedule tasks with CloudSim by taking into account several parameters: the machines' capacity, the priority of tasks, and the history log.
Keywords: hierarchical queue, load balancing, CloudSim, information technology
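The queue hierarchy can be sketched outside CloudSim (which is Java) with a minimal Python model: tasks drain by priority level, and within each level every task goes to the currently least-loaded VM. Task names, lengths, and priorities below are assumed example values.

```python
import heapq

def schedule(tasks, n_vms):
    """Priority tasks drain first; each task goes to the least-loaded VM."""
    queues = {}  # priority level -> FIFO list of (name, length)
    for name, length, priority in tasks:
        queues.setdefault(priority, []).append((name, length))
    loads = [(0.0, vm) for vm in range(n_vms)]  # (accumulated load, vm id) min-heap
    heapq.heapify(loads)
    placement = {}
    for priority in sorted(queues):             # lower number = higher priority
        for name, length in queues[priority]:
            load, vm = heapq.heappop(loads)     # least-loaded machine right now
            placement[name] = vm
            heapq.heappush(loads, (load + length, vm))
    return placement

tasks = [("t1", 4.0, 0), ("t2", 2.0, 1), ("t3", 3.0, 0), ("t4", 1.0, 1)]
plan = schedule(tasks, n_vms=2)
```

In the CloudSim version, the `loads` heap would be replaced by the broker's view of each VM's pending cloudlets, and the history log would adjust the load estimates.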
Procedia PDF Downloads 424
1374 Data Collection with Bounded-Sized Messages in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In this paper, we study the data collection problem in Wireless Sensor Networks (WSNs) under two interference models: the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR). The main issue of the problem is to compute schedules with the minimum number of timeslots, that is, to compute minimum-latency schedules, such that data from every node can be collected at a sink node without any collision or interference. While existing works studied the problem with unit-sized and unbounded-sized message models, we investigate the problem with the bounded-sized message model and introduce a constant-factor approximation algorithm. To the best of our knowledge, our result is the first for the data collection problem with the bounded-sized message model in both interference models.
Keywords: data collection, collision-free, interference-free, physical interference model, SINR, approximation, bounded-sized message model, wireless sensor networks
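Under the graph model, assigning timeslots reduces to coloring a conflict relation over transmissions. The sketch below greedily gives each node the earliest slot free of its conflicts; it is a one-shot toy (no multi-hop relaying rounds, conflict pairs supplied by hand for a 4-node line toward sink 0), not the paper's approximation algorithm.

```python
def schedule_collection(parent, conflicts):
    """Give each node's transmission the earliest timeslot that conflicts
    with no already-scheduled transmission (graph interference model)."""
    slot = {}
    for node in sorted(parent):                 # deterministic order
        used = {slot[o] for o in conflicts.get(node, []) if o in slot}
        t = 0
        while t in used:
            t += 1
        slot[node] = t
    return slot

# Line network 1-2-3-4 relaying toward sink 0. Assumed conflict relation:
# transmissions within two hops of each other interfere.
parent = {1: 0, 2: 1, 3: 2, 4: 3}
conflicts = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [2, 3]}
slots = schedule_collection(parent, conflicts)
```

Under SINR the binary conflict set is replaced by a cumulative interference check at each receiver, which is why schedules valid in the graph model can still fail there.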
Procedia PDF Downloads 224
1373 Robust Numerical Scheme for Pricing American Options under Jump Diffusion Models
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns, and control their financial future by theoretically valuing their options. However, most option pricing models have no analytical solution. Furthermore, not all numerical methods are efficient for solving these models, because the models have non-smooth payoffs or discontinuous derivatives at the exercise price. In this paper, we solve the American option under jump diffusion models by using efficient time-dependent numerical methods. Several techniques are integrated to overcome the computational complexity. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction decomposition technique is applied to rational approximation schemes to overcome the complexity of inverting polynomials of matrices. The proposed method is easy to implement in serial or parallel versions. Numerical results are presented to prove the accuracy and efficiency of the proposed method.
Keywords: integral differential equations, jump-diffusion model, American options, rational approximation
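The FFT trick applies because the jump integral discretizes to a matrix with circulant/Toeplitz structure: a circulant matrix is diagonalized by the DFT, so multiplying by it costs O(M log M) instead of O(M²). A self-contained sketch on a tiny assumed circulant (a radix-2 FFT, so the size must be a power of two):

```python
import cmath

def fft(a, invert=False):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if invert else -1
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def circulant_multiply(col, x):
    """y = C x for circulant C (given by its first column) in O(M log M):
    transform both, multiply pointwise, transform back."""
    n = len(x)
    fc = fft([complex(v) for v in col])
    fx = fft([complex(v) for v in x])
    y = fft([a * b for a, b in zip(fc, fx)], invert=True)
    return [v.real / n for v in y]   # unnormalized inverse FFT needs the 1/n

col = [4.0, 1.0, 0.0, 1.0]        # first column of an assumed circulant matrix
x = [1.0, 2.0, 3.0, 4.0]
y = circulant_multiply(col, x)     # matches the direct O(M^2) product
```

For the Toeplitz matrices that actually arise from the jump kernel, the standard step is to embed them in a circulant of twice the size and apply the same routine.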
Procedia PDF Downloads 123
1372 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi
Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur
Abstract:
Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste is a wide terminology used for the waste material produced during the diagnosis, treatment, or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which itself is a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharp injuries, which may lead to HIV, Hepatitis B virus, and Hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and sites with maximum biomedical waste generation were identified. All the data were cross-checked at the common collection site. Results: The total amount of waste generated in the hospital from January 2014 to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%), and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery, and Medicine wards with 21.2%, 4.6%, and 4.3%, respectively.
The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward with 0.7 kg/bed/day, followed by the Paediatrics, Surgery, and Medicine wards with 0.29, 0.28, and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW to avoid improper segregation of biomedical waste. Also, induction and refresher training programs on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.
Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi
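The reported fractions can be turned back into tonnages with simple arithmetic, which is also a quick internal consistency check (12.2% + 16.3% + 1% = 29.5%). The figures come from the abstract; the 40-bed ward size used in the last line is an assumed value for illustration only.

```python
total_kg = 639_547           # annual waste, Jan-Dec 2014 (from the abstract)
bmw_share = 0.295            # biomedical fraction of the total

bmw_kg = total_kg * bmw_share            # ~188.7 tonnes of BMW
general_kg = total_kg - bmw_kg

# BMW sub-streams, expressed as shares of the grand total (per the abstract)
infectious_kg = total_kg * 0.122
plastic_kg = total_kg * 0.163
sharps_kg = total_kg * 0.010

# Converting a per-bed rate back to an annual figure: 0.7 kg/bed/day for an
# assumed 40-bed Obstetrics and Gynaecology ward
obgyn_annual_kg = 0.7 * 40 * 365
```

Checks like these make it easy to spot reporting errors when ward-level logs are reconciled against the common collection site totals.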
Procedia PDF Downloads 246
1371 Characteristics of Acute Bacterial Prostatitis in Elderly Patients Attended in the Emergency Department
Authors: Carles Ferré, Ferran Llopis, Javier Jacob, Jordi Giol, Xavier Palom, Ignasi Bardés
Abstract:
Objective: To analyze the characteristics of acute bacterial prostatitis (ABP) in elderly patients attended in the emergency department (ED). Methods: Observational cohort study with prospective follow-up including patients with ABP presenting to the ED from January to December 2012. Data were collected for demographic variables, comorbidities, clinical and microbiological findings, treatment, outcome, and reconsultation at 30 days of follow-up. Findings were compared between patients ≥ 75 years (study group) and < 75 years (control group). Results: During the study period, 241 episodes of ABP were included for analysis. Mean age was 62.9 ± 16 years, and 64 patients (26.5%) were ≥ 75 years old. A history of prostate adenoma was reported in 54 cases (22.4%), diabetes mellitus in 47 patients (19.5%) and prior manipulation of the lower urinary tract in 40 (17%). Mean symptom duration was 3.38 ± 4.04 days; voiding symptoms were present in 176 cases (73%) and fever in 154 (64%). Of 216 urine cultures, 128 (59%) were positive, as were 24 (17.6%) of 136 blood cultures. Escherichia coli was the main pathogen, in 58.6% of urine cultures and 64% of blood cultures (with strains resistant to fluoroquinolones in 27.7%, cotrimoxazole in 22.9% and amoxicillin/clavulanate in 27.7% of cases). Seventy patients (29%) were admitted to the hospital, and 3 died. At 30-day follow-up, 29 patients (12%) returned to the ED. In the bivariate analysis, previous manipulation of the urinary tract, history of cancer, previous antibiotic treatment, E. coli strains resistant to amoxicillin-clavulanate and ciprofloxacin or producing extended-spectrum beta-lactamase (ESBL), renal impairment, and admission to the hospital were significantly more frequent (p < 0.05) among patients ≥ 75 years compared to those younger than 75 years. 
Conclusions: Ciprofloxacin and amoxicillin-clavulanate appear not to be good options for the empiric treatment of ABP in patients ≥ 75 years, given the drug-resistance pattern in our series, and the proportion of ESBL-producing strains of E. coli should be taken into account. While awaiting bacterial identification and antibiogram results from urine and/or blood cultures, treatment on an inpatient basis should be considered in older patients with ABP.
Keywords: acute bacterial prostatitis, antibiotic resistance, elderly patients, emergency
Procedia PDF Downloads 380
1370 [Keynote]: No-Trust-Zone Architecture for Securing Supervisory Control and Data Acquisition
Authors: Michael Okeke, Andrew Blyth
Abstract:
Supervisory Control and Data Acquisition (SCADA) systems, the state of the art in Industrial Control Systems (ICS), are used in many different critical infrastructures, from smart homes to energy systems and from locomotive train systems to planes. Security of SCADA systems is vital, since many lives depend on them for daily activities, and deviation from normal operation could be disastrous to the environment as well as to lives. This paper describes how a No-Trust-Zone (NTZ) architecture can be incorporated into SCADA systems in order to reduce the chances of malicious intent succeeding. The architecture is made up of two distinctive parts. The first consists of the field devices, such as sensors, PLCs, pumps, and actuators. The second part is designed following the lambda architecture and is made up of a detection algorithm based on Particle Swarm Optimization (PSO) together with the Hadoop framework for data processing and storage. Apache Spark forms part of the lambda architecture for real-time analysis of packets for anomaly detection.
Keywords: industrial control system (ICS), no-trust-zone (NTZ), particle swarm optimisation (PSO), supervisory control and data acquisition (SCADA), swarm intelligence (SI)
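The abstract names PSO as the detection algorithm but gives no details; the core particle-update loop can be sketched as follows. Everything here is illustrative: the fitness function (a hypothetical anomaly score measuring the distance of a packet-feature vector from a nominal traffic profile), the search bounds, and the parameters `w`, `c1`, `c2` are assumptions, not the authors' configuration.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimizes `fitness` over R^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # velocity = inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical anomaly score: squared distance of a packet-feature vector
# from a nominal traffic profile (values invented for illustration).
nominal = [1.0, 2.0, 3.0]
anomaly_score = lambda x: sum((a - b) ** 2 for a, b in zip(x, nominal))
best, best_val = pso(anomaly_score, dim=3)
```

In a deployment the fitness would instead score candidate detector parameters against labelled traffic, with the evaluation distributed over the Hadoop/Spark layer.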
Procedia PDF Downloads 346
1369 Autonomous Strategic Aircraft Deconfliction in a Multi-Vehicle Low Altitude Urban Environment
Authors: Loyd R. Hook, Maryam Moharek
Abstract:
With the envisioned future growth of low altitude urban aircraft operations for airborne delivery services and advanced air mobility, strategies to coordinate and deconflict aircraft flight paths must be prioritized. Autonomous coordination and planning of flight trajectories is the preferred approach to this future vision, in order to increase safety, density, and efficiency over the manual methods employed today. Difficulties arise because any conflict resolution must be constrained by all other aircraft, all airspace restrictions, and all ground-based obstacles in the vicinity. These considerations make pair-wise tactical deconfliction difficult at best and unlikely to find a suitable solution for the entire system of vehicles. In addition, more traditional methods, which rely on long time scales and large protected zones, artificially limit vehicle density and drastically decrease efficiency. Instead, strategic planning, which is able to respond to highly dynamic conditions while still accounting for high density operations, will be required to coordinate multiple vehicles in the highly constrained low altitude urban environment. This paper develops and evaluates such a planning algorithm, which can be implemented autonomously across multiple aircraft and situations. Data from this evaluation provide promising results, with simulations showing up to 10 aircraft deconflicted through a relatively narrow low-altitude urban canyon without any vehicle-to-vehicle or obstacle conflict. The algorithm achieves this level of coordination beginning with the assumption that each vehicle is controlled to follow an independently constructed flight path, which is itself free of obstacle conflict and restricted airspace. Then, by preferring speed-change deconfliction maneuvers constrained by the vehicle's flight envelope, vehicles can remain as close as possible to the original planned path and prevent cascading vehicle-to-vehicle conflicts. 
Performing the search for a set of commands which can simultaneously ensure separation for each pair-wise aircraft interaction and optimize the total velocities of all the aircraft is further complicated by the fact that each aircraft's flight plan could contain multiple segments. This means that relative velocities will change when any aircraft achieves a waypoint and changes course. Additionally, the timing of when that aircraft will achieve a waypoint (or, more directly, the order in which all of the aircraft will achieve their respective waypoints) will change with the commanded speed. Taken together, the continuous relative velocity of each vehicle pair and the discretized change in relative velocity at waypoints resemble a hybrid reachability problem, a form of control reachability. This paper proposes two methods for finding solutions to these multi-body problems. First, an analytical formulation of the continuous problem is developed with an exhaustive search of the combined state space. However, because of computational complexity, this technique is only computable for pairwise interactions. For more complicated scenarios, including the proposed 10 vehicle example, a discretized search space is used, and a depth-first search with early stopping is employed to find the first solution that satisfies the constraints.
Keywords: strategic planning, autonomous, aircraft, deconfliction
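The discretized search described above (one speed command per aircraft, depth-first assignment, early stopping on the first pairwise-separation violation) can be sketched under strong simplifying assumptions: straight-line single-segment paths, separation checked at sampled times, and invented speed options and separation minimum. The paper's multi-segment plans and waypoint-order effects are deliberately omitted here.

```python
SPEEDS = [0.8, 1.0, 1.2]                  # assumed discretized speed multipliers
SEP = 1.0                                 # assumed required minimum separation
T_SAMPLES = [t * 0.5 for t in range(21)]  # sampled times along the plan

def position(start, heading, speed, t):
    """Point reached at time t along a straight-line path."""
    return (start[0] + speed * heading[0] * t,
            start[1] + speed * heading[1] * t)

def separated(a, b, sa, sb):
    """Check sampled pairwise separation of two (start, unit-heading) paths."""
    for t in T_SAMPLES:
        pa = position(a[0], a[1], sa, t)
        pb = position(b[0], b[1], sb, t)
        if (pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2 < SEP ** 2:
            return False
    return True

def dfs(aircraft, assigned=()):
    """Depth-first search over speed assignments; returns the first feasible set."""
    i = len(assigned)
    if i == len(aircraft):
        return list(assigned)
    for s in SPEEDS:
        # early stopping: prune as soon as the partial assignment conflicts
        if all(separated(aircraft[j], aircraft[i], assigned[j], s) for j in range(i)):
            sol = dfs(aircraft, assigned + (s,))
            if sol is not None:
                return sol
    return None

# Two aircraft on crossing paths: each entry is (start, unit heading).
fleet = [((0.0, 0.0), (1.0, 0.0)),
         ((5.0, -5.0), (0.0, 1.0))]
speeds = dfs(fleet)  # equal speeds conflict near (5, 0); DFS finds a speed split
```

Here both aircraft pass near (5, 0) at equal speeds, so the search rejects the matched assignments and returns one in which the second aircraft speeds up.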
Procedia PDF Downloads 97
1368 Discrete Group Search Optimizer for the Travelling Salesman Problem
Authors: Raed Alnajjar, Mohd Zakree, Ahmad Nazri
Abstract:
In this study, we apply the Discrete Group Search Optimizer (DGSO) to solving the Traveling Salesman Problem (TSP). The DGSO is a nature-inspired optimization algorithm that imitates animal behavior, especially animal searching behavior. The proposed DGSO uses a vector representation and several discrete operators, such as destruction, construction, differential evolution, swap and insert. The TSP is a well-known hard combinatorial optimization problem, which seeks the shortest path among a number of cities. The performance of the proposed DGSO is evaluated and tested on benchmark instances listed in the TSPLIB dataset. The experimental results show that the performance of the proposed DGSO is comparable with other state-of-the-art methods on some instances. The results show that DGSO outperforms Ant Colony System (ACS) on some instances and outperforms other metaheuristics on most instances. In addition, the new results include a number of optimal solutions and some best known results. DGSO was able to obtain feasible, good-quality solutions across all datasets. 
Procedia PDF Downloads 324
1367 Educational Sport and Quality of Life for Children and Teenagers from Brazilian Northeast
Authors: Ricardo Hugo Gonzalez, Amanda Figueiredo Vasconcelos, Francisco Loureiro Neto Monteiro, Yara Luiza Freitas Silva, Ana Cristina Lindsay, Márcia Maria Tavares Machado
Abstract:
The use of sport as a means of integration is a very important tool for the social involvement of children and teenagers in situations of vulnerability. This study aims to report the experiences of a multidisciplinary program that intends to improve the quality of life of children and teenagers in Fortaleza, in the Northeast of Brazil. More than 400 children and teenagers aged 11 to 16 years participated in this study. Poor communities in urban centers experience many particular difficulties, such as violence, poor housing conditions, unemployment, lack of health care and deficient physical education in school. Physical education, physiotherapy, odontology, medicine and pharmacy students are responsible for the activities in the project, supervised by a general coordinator and a counselor teacher from each academic unit. There are classes on team sports such as basketball and soccer. Lectures on sexual behavior and sexually transmitted diseases are delivered, alongside those on oral health education, basic life support, first aid, the use of and care with pharmaceuticals, and healthy nutrition. In order to bring the children's families closer, monthly informative lectures are given. There is also a concern with reflecting on the actions and producing academic work such as graduation final projects and books. The number of participants has oscillated lately, one of the causes being a lack of the habit of practicing physical activities and sports regularly. However, 250 teenagers have participated regularly for at least two years. These teenagers have shown a healthier lifestyle and a better physical fitness profile. The resources for maintaining the project come from the Pro-Reitoria of Extension, Federal University of Ceara, as well as from PROEXT/MEC, Federal Government. Actions of this nature need to be planned for long periods so that their effects can become lasting. 
Public and private investment is needed, since families of low socioeconomic status are the most vulnerable and have fewer opportunities to access preventive health services.
Keywords: children and teenagers, health, multidisciplinary program, quality of life
Procedia PDF Downloads 244
1366 Characteristics of Sorghum (Sorghum bicolor L. Moench) Flour on the Soaking Time of Peeled Grains and Particle Size Treatment
Authors: Sri Satya Antarlina, Elok Zubaidah, Teti Istiana, Harijono
Abstract:
Sorghum (Sorghum bicolor L. Moench) has potential as a flour for gluten-free food products. Sorghum flour production needs a grain soaking treatment. Soaking can reduce the tannin content, which is an anti-nutrient, and thereby increase protein digestibility. A fine particle size decreases the yield of flour, so various particle sizes must be studied to increase the yield. This study aims to determine the characteristics of sorghum flour under different peeled-grain soaking times and particle sizes. The material was the white sorghum variety KD-4 from farmers in East Java, Indonesia. A randomized factorial design (two factors) with three replications was used: factor I was grain soaking time (five levels: 0, 12, 24, 36, and 48 hours); factor II was the particle size of the flour, sieved to fineness levels of 40, 60, 80, and 100 mesh. The method of making sorghum flour comprised grain peeling, soaking of the peeled grain, drying in an oven at 60ᵒC, milling, and sieving. Physicochemical analyses of the sorghum flour were performed. The results show an interaction between grain soaking time and flour particle size. The interaction affects yield of flour, color L* (brightness level), whiteness index, paste properties, amylose content, protein content, bulk density, and protein digestibility. The method of making sorghum flour through soaking of the peeled grain and variation of particle size plays an important role in producing the specific physicochemical properties of the flour. Based on the characteristics of the sorghum flour produced, the selected method is soaking the sorghum grain for 24 hours with a flour particle size of 80 mesh. 
The resulting sorghum flour had the following characteristics: 24.88% yield of flour, 88.60 color L* (brightness level), 69.95 whiteness index, 3615 cP viscosity, 584.10 g/l bulk density, 24.27% db protein digestibility, 90.02% db starch content, 23.4% db amylose content, 67.45% db amylopectin content, 0.22% db crude fiber content, 0.037% db tannin content, 5.30% db protein content, 0.18% db ash content, 92.88% db carbohydrate content, and 1.94% db fat content. This sorghum flour is recommended for cookie products.
Keywords: characteristic, sorghum (Sorghum bicolor L. Moench) flour, grain soaking, particle size, physicochemical properties
Procedia PDF Downloads 163
1365 A Critical Case Study of Women Police in Ranchi, Jharkhand, India: An Analysis of Work Life Balance of Women in Jharkhand, India
Authors: Swati Minz, Pradeep Munda, Ranchi Jharkhand
Abstract:
Women of today’s era are well educated, and they are proficient in the skills that are key to success anywhere. Government has played a major role in uplifting women in Indian society. Through all these efforts, Indian women decided to move forward and started choosing career paths, which was itself a challenge in their lives. Society often harbored resentment toward women who chose a career and moved forward. Women today have achieved a lot, but in reality they still have a long way to travel. Women started leaving the secure domains of their homes and moved out, but a harsh, cruel, exploitative world awaited them, where women have to prove their talent against a world that sees them merely as vessels for producing children. In spite of all modernisation, a woman has her limits, yet she emerges to claim traditionally male space, juggling many family problems and multiple roles to excel at a level that would have been perceived as impossible a generation ago. Women in India are still storming traditionally male fields. Even occupations that were once male monopolies, such as the defense services, merchant navy, and administrative or police services, now offer some of the best examples of women's achievement. These women never made an issue of fighting a battle or of trying to encroach into the men's world; rather, they adapted themselves to the situation, justified their roles and proved themselves. Over the last few decades there has been enormous growth in the levels of education, confidence and, most importantly, ambition among women, who are striving for their rights and claiming a dignified place in society. Previously, women were educated only for the sake of getting married and starting a family, but nowadays they utilize their skills productively. 
Since independence, women in India in general, and women in Jharkhand in particular, have played a very prominent role in all walks of life, including the professions. Success and achievement in any organisation depend on their contribution as well. Consequently, there has always been a need to study and throw light on the issues affecting women professionals, their empowerment, and their work-life balance.
Keywords: women, work life balance, work empowerment, career, struggle, society, challenges, family, achievement
Procedia PDF Downloads 388
1364 A Stokes Optimal Control Model of Determining Cellular Interaction Forces during Gastrulation
Authors: Yuanhao Gao, Ping Lin, Kees Weijer
Abstract:
An optimal control system model is proposed in this paper for the cell flow in the process of chick embryo gastrulation. The target is to determine the cellular interaction forces, which are hard to measure. The paper approaches the forces through the idea of an inverse problem. By choosing the forces as the control variable and regarding the cell flow as a Stokes fluid, an objective functional is established to match the numerically computed cell velocity with the experimental data, so that the forces can be determined by minimizing the objective functional. The Lagrange multiplier method is utilized to derive the state and adjoint equations constituting the optimal control system, which specifies the first-order necessary conditions. The finite element method is used to discretize and approximate the equations. A conjugate gradient algorithm is given for computing the minimizer of the system and determining the forces.
Keywords: optimal control model, Stokes equation, conjugate gradient method, finite element method, chick embryo gastrulation
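The abstract names a conjugate gradient algorithm as the minimization workhorse. A minimal CG iteration for a symmetric positive definite linear system illustrates the inner solver; the 2×2 matrix below is a toy stand-in, not the actual discretized optimality system.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Minimal conjugate gradient for a symmetric positive definite system Ax = b."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]                      # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))   # step length
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # conjugate direction
        rs = rs_new
    return x

# Toy SPD system standing in for the discretized optimality system.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # exact solution is (1/11, 7/11)
```

In the paper's setting, each CG step would additionally involve a state solve and an adjoint solve to evaluate the gradient of the objective functional with respect to the forces.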
Procedia PDF Downloads 260
1363 Inferring Human Mobility in India Using Machine Learning
Authors: Asra Yousuf, Ajaykumar Tannirkulum
Abstract:
Inferring rural-urban migration trends can help design effective policies that promote better urban planning and rural development. In this paper, we describe how machine learning algorithms can be applied to predict the internal migration decisions of people. We consider data collected from household surveys in Tamil Nadu to train our model. To measure the performance of the model, we use data on past migration from the National Sample Survey Organisation of India. The features used to train the model include socioeconomic characteristics of each individual, such as age, gender, place of residence, outstanding loans, strength of the household, etc., together with the individual's past migration history. We perform a comparative analysis of the performance of a number of machine learning algorithms to determine their prediction accuracy. Our results show that machine learning algorithms provide stronger prediction accuracy compared to statistical models. Our goal through this research is to propose the use of data science techniques in understanding human decisions and behaviour in developing countries.
Keywords: development, migration, internal migration, machine learning, prediction
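The survey data and the paper's exact models are not reproduced here; a sketch with synthetic data and a hand-rolled logistic regression (one of many classifiers such a comparison might include) shows the prediction setup. The feature names and the labeling rule below are invented for illustration only.

```python
import math
import random

def train_logistic(X, y, lr=0.1, epochs=500):
    """Logistic regression fitted by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # predicted migration probability
            err = p - yi                          # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Predict 1 (migrates) when the decision function is positive."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Synthetic stand-ins for survey features: [age (scaled), loan flag, household size (scaled)].
rng = random.Random(1)
X, y = [], []
for _ in range(200):
    age, loan, hh = rng.random(), rng.randint(0, 1), rng.random()
    X.append([age, loan, hh])
    # Hypothetical rule: younger individuals with outstanding loans migrate.
    y.append(1 if loan == 1 and age < 0.5 else 0)

w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

A real comparison along the paper's lines would hold out test data (e.g. the NSSO records) rather than score on the training set, and would repeat this loop for several model families.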
Procedia PDF Downloads 271
1362 Numerical Simulation of Flow Past Inline Tandem Cylinders in Uniform Shear Flow
Authors: Rajesh Bhatt, Dilip Kumar Maiti
Abstract:
The incompressible shear flow past a square cylinder of side length A placed parallel to a plane wall, in the presence of an upstream rectangular cylinder of height 0.5A and width 0.25A in an inline tandem arrangement, is numerically investigated using the finite volume method. The discretized equations are solved by an implicit, time-marching, pressure-correction-based SIMPLE algorithm. This study provides qualitative insight into the dependency of the basic structure of the flow (i.e. vortex shedding or suppression) over the downstream square cylinder and the upstream rectangular cylinder (and hence the aerodynamic characteristics) on the inter-cylinder spacing (S) and the Reynolds number (Re). The spacing between the cylinders is varied systematically from S = 0.5A to S = 7.0A so that the sensitivity of the flow structure between the cylinders can be inspected. A sudden jump in the Strouhal number is observed, which shows the transition of the flow pattern in the wake of the cylinders. The results are presented at Re = 100 and 200 in terms of the Strouhal number, the RMS and mean of the lift and drag coefficients, and contour plots for different spacings.
Keywords: square cylinder, vortex shedding, isolated, tandem arrangement, spacing distance
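The Strouhal number reported above is St = f·A/U, where f is the vortex-shedding frequency, commonly extracted from the oscillation of the lift coefficient. A sketch with a synthetic C_L signal shows the extraction; the 0.15 Hz shedding frequency and the unit side length and velocity are assumed values, not the paper's results.

```python
import math

def shedding_frequency(cl, dt):
    """Estimate the dominant frequency of a lift-coefficient time series
    from the upward zero crossings of its mean-removed signal."""
    mean = sum(cl) / len(cl)
    sig = [c - mean for c in cl]
    crossings = [i for i in range(1, len(sig)) if sig[i - 1] < 0 <= sig[i]]
    if len(crossings) < 2:
        return 0.0
    # average spacing between crossings = one shedding period
    period = (crossings[-1] - crossings[0]) * dt / (len(crossings) - 1)
    return 1.0 / period

def strouhal(freq, side, velocity):
    """St = f A / U with cylinder side A and free-stream velocity U."""
    return freq * side / velocity

# Synthetic C_L oscillating at 0.15 Hz (hypothetical shedding frequency), A = U = 1.
dt = 0.05
cl = [0.3 * math.sin(2 * math.pi * 0.15 * i * dt) for i in range(2000)]
St = strouhal(shedding_frequency(cl, dt), side=1.0, velocity=1.0)
```

The "sudden jump" in St with spacing would show up here as a discontinuous change in the estimated frequency between neighbouring values of S.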
Procedia PDF Downloads 550