Search results for: real size nozzle
2886 Automation of the Maritime UAV Command, Control, Navigation Operations, Simulated in Real-Time Using Kinect Sensor: A Feasibility Study
Authors: Regius Asiimwe, Amir Anvar
Abstract:
This paper describes the process used to automate maritime UAV commands using the Kinect sensor. The AR Drone is a quadrocopter manufactured by Parrot [1], designed to be controlled from Apple devices such as iPhones and iPads. This project instead uses the Microsoft Kinect SDK and Microsoft Visual Studio C# (C sharp), which are compatible with the Windows operating system, to automate the navigation and control of the AR Drone. The navigation and control software for the quadrocopter runs on a Windows 7 computer. The project is divided into two sections: the quadrocopter control system and the Kinect sensor control system. The Kinect sensor is connected to the computer by a USB cable, over which commands are exchanged with the sensor. The AR Drone has Wi-Fi capability, through which it connects to the computer for the transfer of commands to and from the quadrocopter. The project was implemented in C#, a programming language commonly used in automation systems; it was chosen because established libraries already exist in C# for both the AR Drone and the Kinect sensor. The study will contribute toward research in the automation of systems using the quadrocopter and the Kinect sensor for navigation with a human operator in the loop. The prototype created has numerous applications, including the inspection of vessels such as ships and airplanes, and of areas that are not accessible to human operators.
Keywords: UAV, AR drone, Kinect Sensors, Automation, Real time, C sharp, Microsoft Kinect SDK.
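The abstract gives no implementation details beyond the language choice, but the core of such a system is a mapping from tracked skeleton joints to flight commands. Below is a minimal sketch of that idea, assuming a simple dead-zone mapping; it is written in Python for brevity (the authors' system is C# on the Kinect SDK), and all names (Joint, hand_to_command) are hypothetical.

```python
from collections import namedtuple

# Hypothetical joint record standing in for a Kinect skeleton joint (metres).
Joint = namedtuple("Joint", "x y z")

DEADZONE = 0.10  # hand travel around the neutral point that is ignored

def hand_to_command(hand, shoulder):
    """Map the hand offset from the shoulder to normalised (pitch, roll)."""
    clamp = lambda v: max(-1.0, min(1.0, v))
    dx = hand.x - shoulder.x            # lateral offset -> roll
    dz = hand.z - shoulder.z            # depth offset -> pitch
    roll = 0.0 if abs(dx) < DEADZONE else clamp(dx)
    pitch = 0.0 if abs(dz) < DEADZONE else clamp(-dz)   # push = fly forward
    return pitch, roll

# Example: hand 0.2 m right of and 0.3 m in front of the shoulder.
print(hand_to_command(Joint(0.2, 0.0, -0.3), Joint(0.0, 0.0, 0.0)))
```

The normalised pitch/roll pair would then be serialised into the AR Drone's command stream over the Wi-Fi link.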
2885 Optimal Placement of Capacitors for Achieve the Best Total Generation Cost by Genetic Algorithm
Authors: Mohammad Reza Tabatabaei, Mohammad Bagher Haddadi, Mojtaba Saeedimoghadam, Ali Vaseghi Ardekani
Abstract:
Economic Dispatch (ED) is one of the most challenging problems in power systems, since it is difficult to determine the optimum generation schedule that meets a particular load demand at minimum fuel cost while satisfying all constraints. The objective of the Economic Dispatch Problem (EDP) in electric power generation is to schedule the outputs of the committed generating units so as to meet the required load demand at minimum operating cost while satisfying all unit and system equality and inequality constraints. In this paper, an efficient and practical steady-state genetic algorithm (SSGA) is proposed for solving the economic dispatch problem. The objective is to minimize the total generation fuel cost while keeping the power flows within the security limits. To achieve this, the present work determines the optimal location and size of capacitors in the transmission power system: a Participation Factor Algorithm is proposed to select the best locations for the capacitors, and the Steady State Genetic Algorithm determines their optimal size.
Keywords: Economic Dispatch, Lagrange, Capacitors Placement, Losses Reduction, Genetic Algorithm.
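As a concrete illustration of the optimisation core, the sketch below shows a steady-state genetic algorithm minimising a quadratic fuel-cost function under a power-balance penalty. The cost coefficients and demand are invented for the example and are not the paper's test system; the paper's actual formulation also folds in capacitor placement via the Participation Factor Algorithm.

```python
import random

# Quadratic fuel-cost coefficients (a, b, c) per unit and its output limits.
# Values are made up for illustration; the paper's test system will differ.
UNITS = [(0.004, 5.3, 500, (200, 450)),
         (0.006, 5.5, 400, (150, 350)),
         (0.009, 5.8, 200, (100, 225))]
DEMAND = 800.0  # MW

def fitness(p):
    """Total fuel cost plus a penalty for violating the power balance."""
    cost = sum(a*x*x + b*x + c for (a, b, c, _), x in zip(UNITS, p))
    return cost + 1e4 * abs(sum(p) - DEMAND)

def random_solution():
    return [random.uniform(lo, hi) for *_, (lo, hi) in UNITS]

# Steady-state GA: replace the worst member with a mutated crossover child.
pop = [random_solution() for _ in range(30)]
for _ in range(2000):
    m, f = random.sample(pop, 2)
    child = [random.choice(g) + random.gauss(0, 2) for g in zip(m, f)]
    child = [min(max(x, lo), hi) for x, (*_, (lo, hi)) in zip(child, UNITS)]
    pop.sort(key=fitness)
    pop[-1] = min(pop[-1], child, key=fitness)

print(round(fitness(pop[0]), 1), [round(x) for x in pop[0]])
```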
2884 Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications
Authors: Atef. A. Ata, Sohair F. Rezeka, Ahmed El-Shenawy, Mohammed Diab
Abstract:
Image processing in today's world attracts massive attention, as it opens possibilities for broad application in many fields of high technology. The real challenge is how to improve existing sorting system applications, which consist of two integrated stations of processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution based on image processing. A 5-DOF robot arm with pick-and-place operation is designed and developed to act as the main part of the color sorting system. The image processing procedure senses circular objects in an image captured in real time by a webcam fixed at the end-effector, then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis proves that this color-based object sorting system works accurately under ideal conditions in terms of adequate illumination, circular object shape and color. The circular objects tested for sorting are red, green and blue. For non-ideal conditions, such as an unspecified color, the accuracy drops to 80%.
Keywords: Robotics manipulator, 5-DOF manipulator, image processing, Color sorting, Pick-and-place.
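For the image-processing stage, a typical approach (assumed here; the abstract does not name a library) is HSV thresholding followed by a circularity check. The sketch below uses OpenCV and NumPy; the threshold values are illustrative and would need tuning to the workcell lighting.

```python
import cv2
import numpy as np

# Hypothetical HSV ranges for the three target colors; real thresholds
# must be tuned to the camera and the illumination of the workcell.
RANGES = {"red":   ((0, 120, 70),  (10, 255, 255)),
          "green": ((40, 60, 60),  (80, 255, 255)),
          "blue":  ((100, 60, 60), (130, 255, 255))}

def find_colored_circles(frame):
    """Return (color, cx, cy) for each circular blob found in the frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hits = []
    for color, (lo, hi) in RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            area = cv2.contourArea(c)
            if area < 500:                      # ignore specks
                continue
            (cx, cy), r = cv2.minEnclosingCircle(c)
            if area / (np.pi * r * r) > 0.8:    # close to a filled circle
                hits.append((color, int(cx), int(cy)))
    return hits
```

Each returned (color, cx, cy) triple would then be translated into a pick-and-place command for the 5-DOF arm.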
2883 Flexible Wormhole-Switched Network-on-chip with Two-Level Priority Data Delivery Service
Authors: Faizal A. Samman, Thomas Hollstein, Manfred Glesner
Abstract:
A synchronous network-on-chip using wormhole packet switching and supporting guaranteed-completion best-effort service with low-priority (LP) and high-priority (HP) wormhole packet delivery is presented in this paper. Both the proposed LP and HP message services deliver a good quality of service in terms of lossless packet completion and in-order message data delivery. However, the LP message service does not guarantee a minimal completion bound. HP packets will use 100% of the bandwidth of their reserved links if they are injected from the source node at the maximum injection rate. Hence, the HP service is suitable for small messages (less than a hundred bytes); otherwise, other HP and LP messages that also require those links will experience relatively high latency, depending on the size of the HP message. LP packets are routed using a minimal adaptive routing algorithm, while HP packets are routed using a non-minimal adaptive routing algorithm. An additional 3-bit field identifying the packet type is therefore introduced in the packet headers to classify and determine the type of service committed to the packet. Our NoC prototypes have also been synthesized using a 180-nm CMOS standard-cell technology to evaluate the cost of implementing the combination of both services.
Keywords: Network-on-Chip, Parallel Pipeline Router Architecture, Wormhole Switching, Two-Level Priority Service.
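The abstract specifies only that the header carries a 3-bit packet-type field; the surrounding layout is not given. The sketch below assumes a 16-bit header flit to show how such a field is packed and recovered; every field width other than the 3-bit type is an assumption.

```python
# Assumed 16-bit header flit layout: [type:3 | dest_x:6 | dest_y:6 | tail:1].
# Only the 3-bit type field is taken from the abstract.

def pack_header(ptype, dest_x, dest_y, tail=0):
    assert 0 <= ptype < 8 and 0 <= dest_x < 64 and 0 <= dest_y < 64
    return (ptype << 13) | (dest_x << 7) | (dest_y << 1) | tail

def unpack_header(flit):
    return {"type":   (flit >> 13) & 0x7,
            "dest_x": (flit >> 7) & 0x3F,
            "dest_y": (flit >> 1) & 0x3F,
            "tail":   flit & 0x1}

HP, LP = 0b001, 0b000      # hypothetical codes for the two service levels
flit = pack_header(HP, dest_x=5, dest_y=12)
print(unpack_header(flit))
```

The router would dispatch on the decoded type field: HP packets to the non-minimal adaptive routing path, LP packets to the minimal adaptive one.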
2882 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation
Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya
Abstract:
The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0-400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation in the measurement circuit. The system uses a quad two-input NAND Schmitt trigger circuit, the CD4093B, with hysteresis for the measurement, and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software on the PC through a USB connection, and an attractive graphical user interface (GUI) on the PC provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, which effectively compensates lead capacitances, a notorious source of error in low-capacitance measurements. The hysteresis provided by the Schmitt trigger circuits enables reliable operation by greatly reducing the possibility of false triggering from stray interference, usually regarded as another source of significant error. Real-life testing showed that the system produces highly accurate capacitance measurements when compared to cutting-edge, high-end digital capacitance meters.
Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.
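The differential idea can be shown numerically. In an RC Schmitt-trigger astable the period is proportional to RC, so with a calibration constant k (set by the hysteresis thresholds) the unknown follows from two measured periods, and the lead capacitance, common to both branches, cancels in the difference. A minimal sketch, with an assumed calibration constant:

```python
# Assumes the Schmitt-trigger astable period follows T = k * R * C, with k
# found by calibration. Lead capacitance appears in both the unknown and
# the reference branch, so it cancels in the difference.

K = 1.38          # hypothetical calibration constant of the CD4093B stage
R = 1.0e6         # timing resistor, ohms

def capacitance_pf(t_meas, t_ref):
    """Unknown capacitance from the two oscillation periods (seconds)."""
    c_meas = t_meas / (K * R)   # unknown branch: C_x + C_lead
    c_ref = t_ref / (K * R)     # reference trimmer branch: C_trim + C_lead
    return (c_meas - c_ref) * 1e12

# Example: a 200 pF unknown against a trimmer nulled to the lead capacitance.
print(round(capacitance_pf(3.105e-4, 0.345e-4), 1))
```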
2881 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution
Authors: Saleem Z. Ramadan
Abstract:
This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing using a Bayesian approach for Weibull life products, under the assumption of the cumulative exposure model. The optimization criterion used in this paper is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, the results show that using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test with little effect on its precision. The results also show that the choice of direct or indirect priors affects the precision of the test.
Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.
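For reference, a standard statement of the cumulative exposure model assumed in the paper, for a simple (two-step) step-stress test with Weibull life, is the following; theta_1 and theta_2 are the scale parameters at the two stress levels, beta the common shape parameter, and tau the stress-change time.

```latex
F(t) =
\begin{cases}
1 - \exp\!\left[-\left(\frac{t}{\theta_1}\right)^{\beta}\right], & 0 \le t < \tau, \\[1ex]
1 - \exp\!\left[-\left(\frac{t-\tau+s}{\theta_2}\right)^{\beta}\right], & t \ge \tau,
\end{cases}
\qquad s = \tau\,\frac{\theta_2}{\theta_1},
```

where s is the time at the second stress level that would have consumed the same fraction of life as tau did at the first.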
2880 A New Fast Skin Color Detection Technique
Authors: Tarek M. Mahmoud
Abstract:
Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, and people retrieval in databases and on the Internet. The major problem with such skin color detection algorithms is that they are time-consuming and hence cannot be applied in real-time systems. To overcome this problem, we introduce a new fast technique for skin detection that can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The rationale for skipping is the high probability that neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the detection process. Since many fast detection techniques are based on image resizing, we also apply the proposed pixel-skipping technique together with image resizing to obtain better results. The performance of the proposed skipping and hybrid techniques is evaluated in terms of measured CPU time. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
Keywords: Adult images filtering, image resizing, skin color detection, YCbCr color space.
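A minimal sketch of the pixel-skipping idea, assuming commonly cited Cb/Cr skin thresholds (the paper's exact thresholds are not given in the abstract): classify every step-th pixel, then propagate each label to its neighborhood.

```python
import numpy as np

def skin_mask_skipping(ycbcr, step=4):
    """Label every `step`-th pixel, then fill each block with that label.

    `ycbcr` is an (H, W, 3) uint8 array. The Cb/Cr ranges below are
    commonly cited skin thresholds and are an assumption, not the paper's.
    """
    cb = ycbcr[::step, ::step, 1].astype(int)
    cr = ycbcr[::step, ::step, 2].astype(int)
    sparse = (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
    # Spatial coherence: a tested pixel's label stands in for its whole
    # step x step neighborhood, so only 1/step^2 of the pixels are tested.
    full = sparse.repeat(step, axis=0).repeat(step, axis=1)
    return full[:ycbcr.shape[0], :ycbcr.shape[1]]

# Example on a random frame; a real frame would come from an RGB-to-YCbCr
# conversion of the input image.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(skin_mask_skipping(frame).mean())
```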
2879 An Analysis of Economic Capital Allocation of Global Banks
Authors: Petr Teply, Ondrej Vejdovec
Abstract:
There are three main ways of categorizing capital in banking operations: accounting, regulatory and economic capital. However, the 2008-2009 global crisis showed that none of these categories adequately reflects the real risks of bank operations, especially in light of the failures of Bear Stearns, Lehman Brothers and Northern Rock. This paper deals with the economic capital allocation of global banks. In theory, economic capital should reflect the real risks of a bank and should be publicly available. Yet, as discovered during the global financial crisis, even when economic capital information was publicly disclosed, the underlying assumptions rendered the information useless. Specifically, some global banks that reported relatively high levels of economic capital before the crisis went bankrupt or had to be bailed out by their governments. Moreover, only 15 out of 50 global banks reported their economic capital during the 2007-2010 period. In this paper, we analyze the changes in reported bank economic capital disclosure during this period. We conclude that the relative shares of credit and business risks increased in 2010 compared to 2007, while the shares of both operational and market risks in the total economic capital of top-rated global banks decreased. Generally speaking, higher levels of disclosure and transparency of bank operations are required to obtain more confidence from stakeholders. Moreover, additional risks such as liquidity risk should be included in these disclosures.
Keywords: global crisis, economic capital, risk management, risk allocation, bank
2878 An Educational Data Mining System for Advising Higher Education Students
Authors: Heba Mohammed Nagy, Walid Mohamed Aly, Osama Fathy Hegazy
Abstract:
Educational data mining is a specific data mining field applied to data originating from educational environments; it relies on different approaches to discover hidden knowledge from the available data. Among these approaches are machine learning techniques, which are used to build systems that learn from previous data. Machine learning can be applied to solve different regression, classification, clustering and optimization problems.

In our research, we propose a “Student Advisory Framework” that utilizes classification and clustering to build an intelligent system. This system can be used to advise first-year university students on the education track in which they are likely to succeed, aiming to decrease the high rate of academic failure among these students. A real case study at the Cairo Higher Institute for Engineering, Computer Science and Management is presented, using a real dataset collected from 2000-2012. The dataset has two main components: a pre-higher-education dataset and a first-year course results dataset. The results demonstrate the effectiveness of the suggested framework.
Keywords: Classification, Clustering, Educational Data Mining (EDM), Machine Learning.
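As an illustration of how clustering and classification can be combined in such an advisory system, the sketch below clusters student profiles with k-means and feeds the cluster label to a decision tree as an extra feature. The data, features and the specific combination scheme are assumptions for the example; the abstract does not detail the framework's internals.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for the pre-higher-education component of the dataset:
# rows = students, columns = hypothetical admission-exam scores.
rng = np.random.default_rng(0)
X = rng.normal(70, 12, size=(300, 4))
y = (X[:, :2].mean(axis=1) > 70).astype(int)   # 1 = succeeded in track

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Clustering groups students with similar profiles; the cluster id is then
# given to the classifier as an extra feature, one plausible way to combine
# the two techniques the framework uses.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)
X_tr_aug = np.column_stack([X_tr, km.predict(X_tr)])
X_te_aug = np.column_stack([X_te, km.predict(X_te)])

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr_aug, y_tr)
print("advisory accuracy:", clf.score(X_te_aug, y_te))
```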
2877 Probability and Instruction Effects in Syllogistic Conditional Reasoning
Authors: Olimpia Matarazzo, Ivana Baldassarre
Abstract:
The main aim of this study was to examine whether people understand indicative conditionals on the basis of syntactic factors or on the basis of subjective conditional probability. The second aim was to investigate whether the conditional probability of q given p depends on the antecedent and consequent sizes or derives from inductive processes that establish a link of plausible co-occurrence between events semantically or experientially associated. These competing hypotheses were tested through a 3 x 2 x 2 x 2 mixed design involving the manipulation of four variables: type of instructions (“Consider the following statement to be true”, “Read the following statement”, and a condition with no conditional statement); antecedent size (high/low); consequent size (high/low); and statement probability (high/low). The first variable was between-subjects; the others were within-subjects. The inferences investigated were Modus Ponens and Modus Tollens. Ninety undergraduates of the Second University of Naples, without any prior knowledge of logic or conditional reasoning, participated in this study. Results suggest that people understand conditionals in a syntactic rather than a probabilistic way, even though the perception of the conditional probability of q given p is at least partially involved in the comprehension of conditionals. They also showed that, in the presence of a conditional syllogism, inferences are not affected by antecedent or consequent size. From a theoretical point of view, these findings suggest that it would be inappropriate to abandon the idea that conditionals are naturally understood in a syntactic way in favor of the idea that they are understood in a probabilistic way.
Keywords: Conditionals, conditional probability, conditional syllogism, inferential task.
2876 Identification of Microbial Community in an Anaerobic Reactor Treating Brewery Wastewater
Authors: Abimbola M. Enitan, John O. Odiyo, Feroz M. Swalaha
Abstract:
The study of microbial ecology and function in anaerobic digestion processes is essential for controlling the biological processes and for understanding the symbiotic relationships between the microorganisms involved in converting the complex organic matter in industrial wastewater into simple molecules. In this study, the diversity and quantity of the bacterial community in granular sludge taken from different compartments of a full-scale upflow anaerobic sludge blanket (UASB) reactor treating brewery wastewater were investigated using polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR). Phylogenetic analysis showed three major eubacterial phyla, belonging to Proteobacteria, Firmicutes and Chloroflexi, in the full-scale UASB reactor, with different groups populating different compartments. The qPCR assay showed a high abundance of eubacteria, with concentration increasing along the reactor's compartments. This study extends our understanding of the diversity, topological distribution and concentration shifts of microbial communities in the different compartments of a full-scale UASB reactor treating brewery wastewater. The colonization of, and trophic interactions among, these microbial populations in reducing and transforming complex organic matter within UASB reactors were established.
Keywords: Bacteria, brewery wastewater, real-time quantitative PCR, UASB reactor.
2875 Effect of Twin Cavities on the Axially Loaded Pile in Clay
Authors: Ali A. Al-Jazaairry, Tahsin T. Sabbagh
Abstract:
The presence of cavities in soil predictably induces ground deformation and changes in soil stress, which might influence adjacent existing pile foundations; however, the effect of twin cavities on a nearby pile still needs to be understood. This research is an attempt to identify the behaviour of piles subjected to axial load and embedded in cavitied clayey soil. A series of finite element models was built to investigate the performance of piled foundations located in such soils. The validity of the numerical simulation was evaluated by comparing it with an available field test and an alternative analytical model. The study examined parameters such as twin cavity size, depth, spacing between cavities, and eccentricity of the cavities from the pile axis, and their effect on pile performance under axial load. In each of the many cases studied, a critical value was found at which the presence of the cavities had minimal impact on the behaviour of the pile. Load-displacement relationships for the parameters affecting pile behaviour are presented to provide helpful information for designing piled foundations situated near twin underground cavities. It was concluded that the presence of cavities within the soil mass reduces the ultimate capacity of the pile, with the reduction depending on the size and location of the cavities.
Keywords: Axial load, clay, finite element, pile, twin cavities, ultimate capacity.
2874 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models
Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales
Abstract:
The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems in estimating the required transition probabilities when a small sample size is used. Often, reliable bridge data have not been collected over long periods, so large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach both provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method yielded slightly lower-than-expected bridge condition ratings compared with the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration. Condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, it is desirable to use more data for better results.
Keywords: Concrete bridges, deterioration, Markov chains, probability matrix.
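To make the Markov mechanics concrete, the sketch below propagates a condition-state distribution through a transition matrix and reports the expected condition rating over time. The five-state matrix is invented for illustration; the study estimates the actual probabilities from inspection data.

```python
import numpy as np

# Hypothetical 5-state transition matrix for a bridge group (first state =
# best condition); each row gives one inspection cycle's deterioration odds.
P = np.array([[0.90, 0.10, 0.00, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00, 0.00],
              [0.00, 0.00, 0.80, 0.20, 0.00],
              [0.00, 0.00, 0.00, 0.75, 0.25],
              [0.00, 0.00, 0.00, 0.00, 1.00]])

state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # new bridge: best condition
ratings = np.array([5, 4, 3, 2, 1])           # rating assigned to each state

# Expected condition rating over 20 inspection cycles, sampled every 5.
for year in range(0, 21, 5):
    print(year, round(float(state @ ratings), 2))
    for _ in range(5):
        state = state @ P
```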
2873 Automatic Detection of Defects in Ornamental Limestone Using Wavelets
Authors: Maria C. Proença, Marco Aniceto, Pedro N. Santos, José C. Freitas
Abstract:
A methodology based on wavelets is proposed for the automatic location and delimitation of defects in limestone plates. Natural defects include dark colored spots, crystal zones trapped in the stone, areas of abnormal contrast colors, cracks or fracture lines, and fossil patterns. Although some of these may or may not be considered defects according to the intended use of the plate, the goal is to pair each stone with a map of defects that can be overlaid on a computer display. These layers of defects constitute a database that allows the preliminary selection of matching tiles of a particular variety, with specific dimensions, for a requirement of N square meters, to be done on a desktop computer rather than by a two-hour search in the storage park, with human operators manipulating stone plates as large as 3 m x 2 m, weighing about one ton. Accident risks and work times are reduced, with a consequent increase in productivity. The basis of the algorithm is wavelet decomposition, executed on two instances of the original image to detect both hypotheses: dark and clear defects. The existence and/or size of these defects is the gauge used to classify the quality grade of the stone products. The tuning of parameters possible within the wavelet framework corresponds to different levels of accuracy in drawing the contours and selecting the defect size, which allows the map of defects to be used to cut a selected stone into tiles with minimum waste, according to the defect dimensions allowed.
Keywords: Automatic detection, wavelets, defects, fracture lines.
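A minimal sketch of the underlying operation, assuming PyWavelets and a simple k-sigma threshold on the detail coefficients (the paper tunes its own parameters per stone variety and runs the decomposition twice, for dark and clear defects):

```python
import numpy as np
import pywt

def defect_map(gray, wavelet="db2", level=2, k=3.0):
    """Flag pixels whose wavelet detail coefficients are k-sigma outliers."""
    coeffs = pywt.wavedec2(gray, wavelet, level=level)
    mask = np.zeros(gray.shape, dtype=bool)
    for detail in coeffs[1:]:                  # (cH, cV, cD) per level
        for band in detail:
            flags = np.abs(band) > k * band.std()
            fy = max(1, gray.shape[0] // band.shape[0])
            fx = max(1, gray.shape[1] // band.shape[1])
            up = flags.repeat(fy, axis=0).repeat(fx, axis=1)
            h = min(up.shape[0], mask.shape[0])
            w = min(up.shape[1], mask.shape[1])
            mask[:h, :w] |= up[:h, :w]
    return mask

# Example: a small dark spot stands in for a defect on an otherwise
# uniform plate; the map should flag a region around it.
img = np.zeros((256, 256))
img[100:104, 60:64] = 1.0
print(defect_map(img).any())
```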
2872 Application of Systems Engineering Tools and Methods to Improve Healthcare Delivery Inside the Emergency Department of a Mid-Size Hospital
Authors: Mohamed Elshal, Hazim El-Mounayri, Omar El-Mounayri
Abstract:
The emergency department (ED) can be considered a complex system of interacting entities: patients, human resources, software and hardware systems, interfaces, and other systems. This paper presents research on implementing a detailed Systems Engineering (SE) approach in a mid-size hospital in central Indiana. The methodology will be applied by “The Initiative for Product Lifecycle Innovation (IPLI)” institution at Indiana University to study and solve the crowding problem, with the aim of increasing patient throughput and enhancing the treatment experience; to that end, the nature of the crowding problem is investigated together with the other problems that lead to it. The SE methods presented are workflow analysis and systems modeling, where SE tools such as Microsoft Visio are used to construct a group of system-level diagrams that capture: patient workflow, documentation and communication flow, data systems, human resources workflow and requirements, the leadership involved, and the integration between the ED's different systems. The ultimate goal is to manage the process through an executable model implemented with commercial software tools, which will identify bottlenecks, improve documentation flow, and help make the process faster.
Keywords: Systems modeling, ED operation, workflow modeling, systems analysis.
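The "executable model" the abstract calls for is the kind of thing a discrete-event simulation captures. The sketch below, using the SimPy library, models a two-stage triage-then-doctor pathway and reports the mean door-to-doctor wait; all arrival and service rates are invented for the example and are not taken from the study.

```python
import random
import simpy

random.seed(1)
waits = []

def patient(env, triage, doctor):
    arrived = env.now
    with triage.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 5))    # ~5 min triage
    with doctor.request() as req:
        yield req
        waits.append(env.now - arrived)                 # door-to-doctor time
        yield env.timeout(random.expovariate(1 / 20))   # ~20 min treatment

def arrivals(env, triage, doctor):
    while True:
        yield env.timeout(random.expovariate(1 / 7))    # one patient ~7 min
        env.process(patient(env, triage, doctor))

env = simpy.Environment()
triage = simpy.Resource(env, capacity=1)
doctor = simpy.Resource(env, capacity=3)
env.process(arrivals(env, triage, doctor))
env.run(until=8 * 60)                                   # one 8-hour shift
print("mean door-to-doctor wait (min):", round(sum(waits) / len(waits), 1))
```

Varying the resource capacities in such a model is one way to expose which station is the bottleneck.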
2871 Enhancing Performance of Bluetooth Piconets Using Priority Scheduling and Exponential Back-Off Mechanism
Authors: Dharmendra Chourishi “Maitraya”, Sridevi Seshadri
Abstract:
Bluetooth is a personal wireless communication technology being applied in many scenarios, and an emerging standard for short-range, low-cost, low-power wireless access. Current MAC (Medium Access Control) scheduling schemes provide only best-effort service for all master-slave connections, and it is very challenging to provide QoS (Quality of Service) support for different connections because of the Master-Driven TDD (Time Division Duplex) feature. No solution is currently available that supports both the delay and bandwidth guarantees required by real-time applications. This paper addresses the issue of how to enhance QoS support in a Bluetooth piconet. The Bluetooth specification proposes a Round Robin scheduler as a possible solution for scheduling transmissions in a piconet. We propose an algorithm that reduces bandwidth waste and enhances the efficiency of the network. We define token counters to estimate the traffic of real-time slaves. To increase bandwidth utilization, a back-off mechanism is then presented for best-effort slaves to decrease the frequency of polling idle slaves. Simulation results demonstrate that our scheme achieves better performance than Round Robin scheduling.
Keywords: Piconet, Medium Access Control, Polling algorithm, Scheduling, QoS, Time Division Duplex (TDD).
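The sketch below illustrates the two mechanisms named in the abstract: token counters that give real-time slaves polling priority, and exponential back-off that polls idle best-effort slaves less often. The class layout and parameter values are assumptions for illustration, not the paper's simulation model.

```python
class Slave:
    def __init__(self, name, realtime):
        self.name, self.realtime = name, realtime
        self.tokens = 0       # credit earned from estimated real-time traffic
        self.backoff = 1      # current polling interval (slots) when idle
        self.next_poll = 0    # earliest slot at which to poll again

def pick_slave(slaves, now):
    """Choose the slave to poll in this TDD slot: real-time traffic first."""
    rt = [s for s in slaves if s.realtime and s.tokens > 0]
    if rt:
        chosen = max(rt, key=lambda s: s.tokens)
        chosen.tokens -= 1
        return chosen
    due = [s for s in slaves if not s.realtime and now >= s.next_poll]
    return min(due, key=lambda s: s.next_poll) if due else None

def after_poll(slave, had_data, now, max_backoff=16):
    """Exponential back-off: idle slaves are polled half as often each time."""
    slave.backoff = 1 if had_data else min(slave.backoff * 2, max_backoff)
    slave.next_poll = now + slave.backoff

slaves = [Slave("voice", True), Slave("file", False)]
slaves[0].tokens = 2          # token counter credited by a traffic estimator
print(pick_slave(slaves, now=0).name)   # -> voice (priority service)
```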
2870 A Case Study in Using the Can-Sized Satellite Platforms for Interdisciplinary Problem-Based Learning in Aeronautical and Electronic Engineering
Authors: Michael Johnson, Vincenzo Oliveri
Abstract:
This work considers an interdisciplinary Problem-Based Learning (PBL) project developed by lecturers from the Aeronautical and Electronic and Computer Engineering departments at the University of Limerick. This “CANSAT” project utilises the CanSat can-sized satellite platform to allow students from aeronautical and electronic engineering to engage in a mixed-format (online/face-to-face), interdisciplinary PBL assignment using a real-world platform and application. The project introduces students to the design, development, and construction of the CanSat system over the course of a single semester, enabling students to apply their aeronautical and technical skills to the realisation of a working CanSat system. In this case study, the CanSat kits are used to pivot the real-world, discipline-relevant PBL goal of designing, building, and testing the CanSat system with payload(s) from a traditional module-based setting to an online PBL setting. Feedback, impressions, benefits, and challenges identified through the semester are presented. Students found the project interesting and rewarding, with its interdisciplinary nature appealing to them. The challenges and difficulties encountered are also addressed, along with the solutions developed between the students and facilitators to overcome them.
Keywords: Problem-Based Learning, Online PBL, Electronic Engineering, Aeronautical Engineering, Interdisciplinary Project, CanSat.
2869 Long-term Irrigation with Dairy Factory Wastewater Influences Soil Quality
Authors: Yen-Yiu Liu, Richard J. Haynes
Abstract:
The effects of irrigation with dairy factory effluent (DFE) on soil properties were investigated at two sites that had received irrigation for > 60 years. Two adjoining paired sites that had never received DFE were also sampled, as well as another seven fields from a wider area around the factory. In comparison with the paired sites that had not received effluent, long-term wastewater irrigation resulted in an increase in pH, EC, extractable P, exchangeable Na and K, and ESP. These changes were related to the use of phosphoric acid, NaOH and KOH as cleaning agents in the factory. Soil organic C content was unaffected by DFE irrigation, but the size (microbial biomass C and N) and activity (basal respiration) of the soil microbial community were increased. These increases were attributed to regular inputs of soluble C (e.g. lactose) present as milk residues in the wastewater. Principal component analysis (PCA) of the soils data from all 11 sites confirmed that the main effects of DFE irrigation were an increase in exchangeable Na, extractable P and microbial biomass C, an accumulation of soluble salts, and a liming effect. PCA analysis of soil bacterial community structure, using PCR-DGGE of 16S rDNA fragments, generally separated individual sites from one another but did not group them according to irrigation history. Thus, whilst the size and activity of the soil microbial community were increased, the structure and diversity of the bacterial community remained unaffected.
Keywords: Dairy factory, wastewater, effluent, irrigation, soil quality.
2867 Development and in vitro Characterization of Self-nanoemulsifying Drug Delivery Systems of Valsartan
Authors: P. S. Rajinikanth, Yeoh Suyu, Sanjay Garg
Abstract:
The present study aims to prepare and evaluate a self-nanoemulsifying drug delivery system (SNEDDS) for the poorly water-soluble drug valsartan, in order to achieve a better dissolution rate, which would in turn help enhance oral bioavailability. The work describes a SNEDDS of valsartan using Labrafil M 1944 CS, Tween 80 and Transcutol HP. Pseudoternary phase diagrams in the presence and absence of drug were plotted to establish the emulsification range and to evaluate the effect of valsartan on the emulsification behavior of the phases. Mixtures of oil (Labrafil M 1944 CS), surfactant (Tween 80) and co-surfactant (Transcutol HP) were found to be the optimum formulations. The prepared formulations were evaluated for particle size distribution, nanoemulsifying properties, robustness to dilution, self-emulsification time, turbidity, drug content and in vitro dissolution. The optimized formulations were further subjected to heating-cooling cycles, centrifugation, freeze-thaw cycling, and particle size distribution and zeta potential measurements to confirm the stability of the formed SNEDDS. The prepared formulation revealed a significant improvement in drug solubility compared with the marketed tablet and the pure drug.
Keywords: Self Emulsifying Drug Delivery System, Valsartan, Bioavailability, poorly soluble drug.
2866 Estimation of Thermal Conductivity of Nanofluids Using MD-Stochastic Simulation Based Approach
Authors: Sujoy Das, M. M. Ghosh
Abstract:
The thermal conductivity of a fluid can be significantly enhanced by dispersing nano-sized particles in it; the resultant fluid is termed a "nanofluid". A theoretical model for estimating the thermal conductivity of a nanofluid is proposed here. It is based on the mechanism that evenly dispersed nanoparticles within a nanofluid undergo Brownian motion, in the course of which they repeatedly collide with the heat source. During each collision a rapid heat transfer occurs owing to the solid-solid contact. Molecular dynamics (MD) simulation of the collision of nanoparticles with the heat source has shown that there is a pulse-like pick-up of heat by the nanoparticles within 20-100 ps, the extent of which depends not only on the thermal conductivity of the nanoparticles, but also on their elastic and other physical properties. After the collision, the nanoparticles undergo Brownian motion in the base fluid and release the excess heat to the surrounding fluid within 2-10 ms. The Brownian motion and the associated temperature variation of the nanoparticles have been modeled by stochastic analysis. The repeated occurrence of these events by the suspended nanoparticles contributes significantly to the characteristic thermal conductivity of the nanofluid, which has been estimated by the present model for an ethylene glycol based nanofluid containing Cu nanoparticles of size ranging from 8 to 20 nm, with a Gaussian size distribution. The prediction of the present model shows reasonable agreement with the experimental data available in the literature.
Keywords: Brownian dynamics, Molecular dynamics, Nanofluid, Thermal conductivity.
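A toy version of the stochastic stage of this mechanism, a Brownian walk combined with exponential release of the collision-acquired heat, is sketched below. The Stokes-Einstein diffusivity is standard theory and the 2-10 ms release window comes from the abstract; every other number is an order-of-magnitude assumption, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

kB = 1.38e-23              # Boltzmann constant, J/K
T_fluid = 300.0            # base fluid temperature, K
d = 20e-9                  # particle diameter, m
mu = 1.6e-2                # ethylene glycol viscosity, Pa.s (approx.)
D = kB * T_fluid / (3 * np.pi * mu * d)   # Stokes-Einstein diffusivity

tau = 5e-3                 # heat-release time constant (2-10 ms, per paper)
dt = 1e-4                  # time step, s
T = 310.0                  # particle temperature just after a collision, K
pos = np.zeros(3)

for step in range(200):
    pos += rng.normal(0.0, np.sqrt(2 * D * dt), size=3)   # Brownian step
    T += -(T - T_fluid) / tau * dt                        # exponential decay
print("displacement (nm):", round(float(np.linalg.norm(pos)) * 1e9, 1),
      "| residual excess T (K):", round(T - T_fluid, 3))
```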
2865 Nanoparticles-Protein Hybrid Based Magnetic Liposome
Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek
Abstract:
Liposomes play an important role in medical and pharmaceutical science, e.g. as nanoscale drug carriers. Liposomes are vesicles of varying size, generated in vitro, consisting of a spherical lipid bilayer and an aqueous inner compartment. They are useful in terms of biocompatibility, biodegradability and low toxicity; their biodistribution can be controlled by changing the size, lipid composition and physical characteristics; and they can entrap both hydrophobic and hydrophilic drugs and continuously release the entrapped substrate, making them useful drug carriers. Magnet-driven liposomes are used for the targeted delivery of drugs to organs and tissues; these preparations contain encapsulated drug components and finely dispersed magnetic particles. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles and are applied as contrast agents for magnetic resonance imaging (MRI). The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology. A green-synthesized magnetite nanoparticles-protein hybrid was produced by treating Iron (III) / Iron (II) chloride with the leaf extract of Datura inoxia. The phytochemicals present in the leaf extract, which include flavonoids, phenolic compounds, cardiac glycosides, proteins and sugars, act as reducing as well as stabilizing agents, preventing agglomeration. The magnetite nanoparticles-protein hybrid was trapped inside the aqueous core of liposomes prepared by the reversed-phase evaporation (REV) method using oleic and linoleic acid, and was shown to be driven under a magnetic field, confirming the formation of magnetic liposomes (MLs). Chemical characterization of the magnetic liposomes was performed by breaking the liposomes and releasing the magnetic nanoparticles. The presence of iron was confirmed by colour complex formation with KSCN and by UV-Vis study using a Cary 60 (Agilent) spectrophotometer. This magnet-driven liposome based on a nanoparticles-protein hybrid can serve as a smart vesicle for targeted drug delivery.
Keywords: Nanoparticles-Protein Hybrid, Magnetic Liposome.
2864 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale scores on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences in post-test scores on Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have produced inaccurate equating parameters. Equipercentile equating aligned the test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, applying a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
Keywords: Effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating.
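Of the three equating methods compared, linear equating has the simplest closed form: a mean-slope adjustment mapping a Form X score onto the Form Y scale. A minimal sketch follows, with invented summary statistics (the study's values are not reported in the abstract).

```python
import numpy as np

def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    """Map a Form X score onto the Form Y scale (mean-slope adjustment)."""
    return mean_y + (sd_y / sd_x) * (x - mean_x)

# Hypothetical summary statistics for the two forms, for illustration only.
form_x = dict(mean=21.4, sd=5.2)
form_y = dict(mean=23.1, sd=4.8)

raw = np.array([12, 18, 21, 27, 33])
equated = linear_equate(raw, form_x["mean"], form_x["sd"],
                        form_y["mean"], form_y["sd"])
print(np.round(equated, 1))
```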
2863 Using Scanning Electron Microscope and Computed Tomography for Concrete Diagnostics of Airfield Pavements
Authors: M. Linek
Abstract:
This article compares selected evaluation methods for microstructure modification of hardened cement concrete intended for airfield pavements. Basic test results are presented for two lots of pavement-quality concrete: a standard concrete used for airfield pavements, and a modern material solution based on modification of the concrete composite. The reference grain-size distribution comprised cement CEM I 42.5 HSR NA, fine aggregate and coarse aggregate fractions in the form of granite chippings, water and admixtures. For the modified concrete, the use of a modern modifier as a substitute for the fine aggregate was proposed. The influence of the modification on the parameters of the internal concrete structure was determined using a scanning electron microscope. The obtained images were compared with the results obtained using computed tomography, and the opportunity to use this type of equipment for diagnostics of the internal structure of concrete, together with an attempt to evaluate its parameters, is presented. The test results led to the conclusion that both methods can be applied for diagnostics of pavement-quality concrete, in particular for airfield pavements.
Keywords: Scanning electron microscope, computed tomography, cement concrete, airfield pavements.
2862 Statistical Analysis of Parameters Effects on Maximum Strain and Torsion Angle of FRP Honeycomb Sandwich Panels Subjected to Torsion
Authors: Mehdi Modabberifar, Milad Roodi, Ehsan Souri
Abstract:
In recent years, honeycomb fiber-reinforced plastic (FRP) sandwich panels have been increasingly used in various industries. Low weight, low price and high mechanical strength are the benefits of these structures. However, their mechanical properties and behavior have not been fully explored. The objective of this study is to conduct a combined numerical-statistical investigation of honeycomb FRP sandwich beams subjected to torsion load. The paper investigates the effect of the geometric parameters of the sandwich panel on the maximum shear strain in both face and core, and on the angle of torsion, for honeycomb FRP sandwich structures in torsion. The effects of parameters including core thickness, face-skin thickness, cell shape, cell size, and cell thickness on the mechanical behavior of the structure were numerically investigated. Main effects of the factors were considered, and regression equations were derived. The Taguchi method was employed for the experimental design, and an optimum parameter combination for maximum structural stiffness was obtained. The results showed that cell size and face-skin thickness have the most significant impact on the torsion angle and on the maximum shear strain in face and core.
Keywords: Finite element, honeycomb FRP sandwich panel, torsion, civil engineering.
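For readers unfamiliar with the Taguchi step, the sketch below computes smaller-is-better signal-to-noise ratios and factor main effects for a toy two-factor, two-level array. The factor labels and response values are invented; the study's actual array and responses are not given in the abstract.

```python
import numpy as np

# Smaller-is-better S/N ratio, the usual Taguchi choice when minimising a
# response such as maximum shear strain: S/N = -10 log10(mean(y^2)).
def sn_smaller_is_better(y):
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical strain responses over two two-level factors
# (A = cell size, B = face-skin thickness); invented numbers.
runs = {(1, 1): [0.021, 0.019], (1, 2): [0.016, 0.017],
        (2, 1): [0.030, 0.028], (2, 2): [0.024, 0.025]}
sn = {k: sn_smaller_is_better(v) for k, v in runs.items()}

# Main effect of each factor = mean S/N at level 2 minus mean S/N at level 1.
for i, name in enumerate("AB"):
    lvl = lambda l: np.mean([s for k, s in sn.items() if k[i] == l])
    print(f"main effect of {name}: {lvl(2) - lvl(1):+.2f} dB")
```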
2861 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning
Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan
Abstract:
We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system, so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with the help of non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs of the elderly in a real-world environment involving real elderly subjects. Motivated by the need for data to support more effective research on elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs every morning. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created the HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms, and we propose the use of a transfer learning algorithm for HAR. We compared performance in terms of accuracy and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes, such as evidence-based diagnosis and treatment.
Keywords: Daily activity recognition, healthcare, IoT sensors, transfer learning.
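One common transfer-learning recipe for small vision datasets, assumed here since the abstract does not name the backbone or framework, is to freeze an ImageNet-pretrained CNN and retrain only a new classification head for the 12 morning activities. A PyTorch sketch:

```python
import torch
import torch.nn as nn
from torchvision import models

# Freeze a pretrained backbone; only the new head is trained.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # keep pretrained features

model.fc = nn.Linear(model.fc.in_features, 12)  # new head: 12 ADL classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for Kinect-derived frames.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 12, (8,))
optimizer.zero_grad()
loss = criterion(model(frames), labels)
loss.backward()
optimizer.step()
print(float(loss))
```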
2860 Resilient Manufacturing: Use of Augmented Reality to Advance Training and Operating Practices in Manual Assembly
Authors: L. C. Moreira, M. Kauffman
Abstract:
This paper outlines the results of experimental research on deploying an emerging augmented reality (AR) system for real-time task assistance (work instructions) in highly customised, high-risk manual operations. The focus is on the training effectiveness and performance of human operators, and the aim is to test whether such technologies can help enhance knowledge retention and accuracy of task execution so as to improve health and safety (H&S). An AR-enhanced assembly method is proposed and experimentally tested using a real industrial process as a case study: electric vehicle (EV) battery module assembly. The experimental results revealed that the proposed method improved training practices and performance, increasing knowledge retention from 40% to 84% and accuracy of task execution from 20% to 71%, compared with the traditional paper-based method. The results of this research validate and demonstrate how emerging technologies are shaping the choice between manual, hybrid and fully automated processes by promoting XR-assisted processes and the connected worker (a vision for Industry 4.0 and 5.0), and by supporting manufacturing in becoming more resilient in times of constant market change.
Keywords: Augmented reality, extended reality, connected worker, XR-assisted operator, manual assembly 4.0, industry 5.0, smart training, battery assembly.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3772859 The Effects and Interactions of Synthesis Parameters on Properties of Mg Substituted Hydroxyapatite
Authors: S. Sharma, U. Batra, S. Kapoor, A. Dua
Abstract:
In this study, the effects and interactions of reaction time and capping-agent assistance during sol-gel synthesis of magnesium-substituted hydroxyapatite nanopowder (MgHA) on the hydroxyapatite (HA) to β-tricalcium phosphate (β-TCP) ratio, the Ca/P ratio, and the mean crystallite size were examined experimentally as well as through statistical analysis. MgHA nanopowders were synthesized by the sol-gel technique at room temperature using aqueous solutions of calcium nitrate tetrahydrate, magnesium nitrate hexahydrate and potassium dihydrogen phosphate as starting materials. The reaction time of the sols was varied between 15 and 60 minutes. Two process routes were followed, with and without the addition of triethanolamine (TEA) to the solutions. The elemental compositions of the as-synthesized powders were determined using X-ray fluorescence (XRF) spectroscopy. The functional groups present in the as-synthesized MgHA nanopowders were established through Fourier transform infrared spectroscopy (FTIR). The phase fractions, Ca/P ratio and mean crystallite sizes of the MgHA nanopowders were determined using X-ray diffraction (XRD). The HA content in the biphasic mixture of HA and β-TCP, and the Ca/P ratio, increased with the reaction time of the sols (p<0.0001, two-way ANOVA) but were independent of TEA addition (p>0.15, two-way ANOVA). The MgHA nanopowders synthesized with TEA assistance exhibited a 14 nm smaller crystallite size (p<0.018, 2-sample t-test) than the powder synthesized without TEA assistance.
Keywords: Capping agent, hydroxyapatite, regression analysis, sol-gel, 2-sample t-test, two-way ANOVA.
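Crystallite sizes quoted from XRD line broadening are conventionally obtained with the Scherrer equation; the abstract does not name the method, so this is stated as the standard route rather than the authors' confirmed procedure:

```latex
D = \frac{K\,\lambda}{\beta\,\cos\theta}
```

where D is the mean crystallite size, K ≈ 0.9 a shape factor, λ the X-ray wavelength, β the peak full width at half maximum in radians, and θ the Bragg angle.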
2858 Nuclear Medical Image Treatment System Based On FPGA in Real Time
Authors: B. Mahmoud, M.H. Bedoui, R. Raychev, H. Essabbah
Abstract:
We present in this paper an acquisition and treatment system designed for a semi-analog gamma camera, taking the Sopha Medical Vision (SMVi) SOPHY DS7 as an example. The system consists of a nuclear medical Image Acquisition, Treatment and Display (IATD) chain ensuring the acquisition and treatment of the signals resulting from the gamma camera detection head and the construction of the scintigraphic image in real time. The chain is composed of an analog treatment board and a digital treatment board; an earlier version was designed around a DSP [2]. We describe the designed system and the digital treatment algorithms, whose performance and flexibility we have improved. In this new version of the IATD chain, the digital treatment algorithms are implemented in a specific reprogrammable circuit, an FPGA (Field Programmable Gate Array).
Keywords: Nuclear medical image, scintigraphic image, digital treatment, linearity, spectrometry, FPGA.
2857 MHD Boundary Layer Flow of a Nanofluid Past a Wedge Shaped Wick in Heat Pipe
Authors: Ziya Uddin
Abstract:
This paper presents a theoretical and numerical investigation of magnetohydrodynamic boundary layer flow of a nanofluid past a wedge-shaped wick in a heat pipe used for cooling electronic components and different types of machines. To incorporate the effects of nanoparticle diameter, the concentration of nanoparticles in the pure fluid, the nanothermal layer formed around the nanoparticles, and the Brownian motion of nanoparticles, appropriate models are used for the effective thermal and physical properties of nanofluids. Microfluidics theory is used to model the rotation of nanoparticles inside the base fluid. Ethylene glycol (EG) based nanofluids are considered in this investigation. The non-linear equations governing the flow and heat transfer are solved using a very effective particle swarm optimization technique along with the Runge-Kutta method. The values of the heat transfer coefficient are found for the different parameters involved in the formulation, viz. nanoparticle concentration, nanoparticle size, magnetic field and wedge angle. It is found that the wedge angle, the presence of a magnetic field, nanoparticle size and nanoparticle concentration have prominent effects on the fluid flow and heat transfer characteristics of the considered configuration.
Keywords: Heat transfer, Heat pipe, numerical modeling, nanofluid applications, particle swarm optimization, wedge shaped wick.
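The abstract names particle swarm optimization (PSO) coupled with Runge-Kutta integration as the solver. The sketch below shows a minimal standalone PSO of the kind that could drive such a shooting method; the inertia and acceleration coefficients (w = 0.7, c1 = c2 = 1.5) are common textbook values, assumed here, and the objective is a toy test function rather than the paper's boundary-value residual.

```python
import random

def pso(f, lo, hi, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over the box [lo, hi] with a basic particle swarm."""
    dim = len(lo)
    rnd = lambda: [random.uniform(l, h) for l, h in zip(lo, hi)]
    x = [rnd() for _ in range(n)]                 # positions
    v = [[0.0] * dim for _ in range(n)]           # velocities
    pbest = [xi[:] for xi in x]                   # personal bests
    gbest = min(pbest, key=f)                     # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo[d]), hi[d])
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
        gbest = min(pbest, key=f)
    return gbest

# Example: recover the minimum of a shifted sphere function at (1, -2).
print(pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, [-5, -5], [5, 5]))
```

In a shooting-method setting, f would be the residual between the Runge-Kutta-integrated boundary values and the target boundary conditions.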