Search results for: sharp distance sensor
2696 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions
Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren
Abstract:
Based on multivariate statistical analysis theory, this paper uses the principal component analysis method, the Mahalanobis distance analysis method and a fitting method to establish a photovoltaic health model to evaluate the health of photovoltaic panels. First of all, according to weather conditions, the photovoltaic panel variable data are classified into five categories: sunny, cloudy, rainy, foggy, and overcast. The health of photovoltaic panels in these five types of weather is studied. Secondly, a scatterplot of the relationship between the amount of electricity produced in each kind of weather and the other variables was plotted. It was found that the amount of electricity generated by photovoltaic panels has a significant nonlinear relationship with time. The fitting method was used to fit the relationship between the amount of electricity generated and time, and a nonlinear equation was obtained. Then, the principal component analysis method was used to analyze the independent variables under the five kinds of weather conditions; according to the Kaiser-Meyer-Olkin test, it was found that three types of weather, namely overcast, foggy, and sunny, meet the conditions for factor analysis, while cloudy and rainy weather do not. Therefore, through the principal component analysis method, the main components of overcast weather are temperature, AQI, and pm2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI, and pm2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, pm2.5, solar radiation intensity and time. Finally, taking the variable values in sunny weather as observed values and taking the main components of cloudy, foggy, overcast and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values are obtained. A comparative analysis was carried out to compare the degree of deviation of the Mahalanobis distance to determine the health of the photovoltaic panels under different weather conditions. It was found that the weather conditions in which the Mahalanobis distance fluctuations ranged from small to large were: foggy, cloudy, overcast and rainy.
Keywords: fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB
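Editor's note: a minimal Python sketch of the Mahalanobis-distance step this abstract describes (PCA on the weather variables, then distances of observed values against another weather type's sample). The random data, the three-component choice and the variable ordering are placeholders, not the authors' dataset or their SPSS/MATLAB workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import mahalanobis

# Placeholder data: rows are observations, columns are the five variables
# (temperature, AQI, pm2.5, solar radiation intensity, time).
rng = np.random.default_rng(0)
sunny = rng.normal(size=(200, 5))      # observed values (sunny weather)
overcast = rng.normal(size=(200, 5))   # sample data for one other weather type

# Principal component analysis on the independent variables
pca = PCA(n_components=3)              # e.g. temperature, AQI, pm2.5 dominate
overcast_pc = pca.fit_transform(overcast)
sunny_pc = pca.transform(sunny)

# Mahalanobis distance of each sunny observation from the overcast sample
cov_inv = np.linalg.inv(np.cov(overcast_pc, rowvar=False))
centre = overcast_pc.mean(axis=0)
d = np.array([mahalanobis(x, centre, cov_inv) for x in sunny_pc])
print("Mahalanobis distance range:", d.min(), d.max())
```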
Procedia PDF Downloads 142
2695 Synthesis of MIPs towards Precursors and Intermediates of Illicit Drugs and Their following Application in Sensing Unit
Authors: K. Graniczkowska, N. Beloglazova, S. De Saeger
Abstract:
The threat of synthetic drugs is one of the most significant current drug problems worldwide. The use of drugs of abuse has increased dramatically during the past three decades. Among others, Amphetamine-Type Stimulants (ATS) are globally the second most widely used drugs after cannabis, exceeding the use of cocaine and heroin. ATS are potent central nervous system (CNS) stimulants, capable of inducing euphoric states similar to cocaine. Recreational use of ATS is widespread, even though warnings of irreversible damage to the CNS have been reported. ATS pose a big problem, and their production contributes to the pollution of the environment by discharging large volumes of liquid waste into the sewage system. Therefore, there is a demand to develop robust and sensitive sensors that can detect ATS and their intermediates in environmental water samples. A rapid and simple test is required. Antibody-based tests cannot be applied to the analysis of environmental water samples (which can sometimes be a harsh environment). Therefore, molecular imprinted polymers (MIPs), which are known as synthetic antibodies, have been chosen for this approach. MIPs are characterized by high mechanical and thermal stability and show chemical resistance in a broad pH range and in various organic or aqueous solvents. These properties make them the preferred type of receptor for application in the harsh conditions imposed by environmental samples. To the best of our knowledge, there are no existing MIP-based sensors towards amphetamine and its intermediates. Also, not many commercial MIPs for this application are available. Therefore, the aim of this study was to compare different techniques to obtain MIPs with high specificity towards ATS and to characterize them for subsequent use in a sensing unit. MIPs against amphetamine and its intermediates were synthesized using a few different techniques, such as electro-, thermo- and UV-initiated polymerization. Different monomers, crosslinkers and initiators, in various ratios, were tested to obtain the best sensitivity and polymer properties. Subsequently, specificity and selectivity were compared with commercially available MIPs against amphetamine. Different linkers, such as lipoic acid, 3-mercaptopropionic acid and tyramine, were examined in combination with several immobilization techniques to select the best procedure for attaching particles on the sensor surface. The performed experiments allowed an optimal method to be chosen for the intended sensor application. The stability of the MIPs in extreme conditions, such as highly acidic or basic media, was determined. The obtained results led to conclusions about the applicability of a MIP-based sensor for sewage system testing.
Keywords: amphetamine type stimulants, environment, molecular imprinted polymers, MIPs, sensor
Procedia PDF Downloads 249
2694 Determination of Nanomolar Mercury (II) by Using Multi-Walled Carbon Nanotubes Modified Carbon Zinc/Aluminum Layered Double Hydroxide – 3 (4-Methoxyphenyl) Propionate Nanocomposite Paste Electrode
Authors: Illyas Md Isa, Sharifah Norain Mohd Sharif, Norhayati Hashima
Abstract:
A mercury(II) sensor was developed by using a multi-walled carbon nanotube (MWCNTs) paste electrode modified with Zn/Al layered double hydroxide-3(4-methoxyphenyl)propionate nanocomposite (Zn/Al-HMPP). The optimum conditions by cyclic voltammetry were observed at an electrode composition of 2.5% (w/w) Zn/Al-HMPP/MWCNTs, 0.4 M potassium chloride, pH 4.0, and a scan rate of 100 mV s⁻¹. The sensor exhibited wide linear ranges from 1×10⁻³ M to 1×10⁻⁷ M Hg²⁺ and 1×10⁻⁷ M to 1×10⁻⁹ M Hg²⁺, with a detection limit of 1×10⁻¹⁰ M Hg²⁺. The high sensitivity of the proposed electrode towards Hg(II) was confirmed by double potential-step chronocoulometry, which indicated the following values: diffusion coefficient 1.5445×10⁻⁹ cm² s⁻¹, surface charge 524.5 µC s⁻½ and surface coverage 4.41×10⁻² mol cm⁻². The presence of a 25-fold concentration of most metal ions had no influence on the anodic peak current. With characteristics such as high sensitivity, selectivity and repeatability, the electrode was then proposed as an appropriate alternative for the determination of mercury(II).
Keywords: cyclic voltammetry, mercury(II), modified carbon paste electrode, nanocomposite
Procedia PDF Downloads 307
2693 Investigation of Static Stability of Soil Slopes Using Numerical Modeling
Authors: Seyed Abolhasan Naeini, Elham Ghanbari Alamooti
Abstract:
The static stability of soil slopes has been investigated using numerical simulation with a finite element code, ABAQUS, and the safety factors of the slopes were obtained for the case of the static load of a 10-storey building. The embankments have the same soil condition but different loading distances from the slope heel. The numerical method for estimating safety factors is the 'Strength Reduction Method' (SRM). The Mohr-Coulomb criterion was used in the numerical simulations. Two steps were used for measuring the safety factors of the slopes: the first is under gravity loading, and the second is under the static loading of a building near the slope heel. The safety factors obtained from SRM are compared with the values from the Limit Equilibrium Method (LEM). Results show that there is good agreement between SRM and LEM. Also, it is seen that by increasing the distance from the slope heel, the safety factors increase.
Keywords: limit equilibrium method, static stability, soil slopes, strength reduction method
Procedia PDF Downloads 162
2692 Using Q Methodology to Capture Attitudes about Academic Resilience in an Online Postgraduate Psychology Course
Authors: Eleanor F. Willard
Abstract:
The attrition rate on distance learning courses can be high. This research examines how online students often react when faced with poor results. Using Q methodology, it was found that the level of emotional response and the type of social support sought by students were key influences on their attitude to failure. As educational and psychological researchers, we are adept at measuring learning and achievement, but attitudes towards barriers to learning are not so well researched. The distance learning student has differing needs from onsite learners and, as the attrition rate is notoriously high in the online student population, examining learners' attitudes towards adversity and barriers is important. Self-report measures such as questionnaires are useful in terms of ascertaining levels of constructs such as resilience and academic confidence. Interviewing, too, can capture in-depth detail of the opinions of such a population, but only for individuals. The aim of this research was to ascertain what the feelings and attitudes of online students were when faced with a setback. This was achieved using Q methodology due to its use of both quantitative and qualitative methodology and its suitability for exploratory research. The emphasis with this methodology is on the attitudes, not the individuals. The work was focused upon a population of distance learning students who attended a school on site for one week as part of their studies. They were engaged in a psychology master's conversion course and, as such, were graduate students. The Q sort had 30 items taken from the Academic Resilience Scale (ARS-30). The scale items represent three constructs: perseverance, reflecting (including adaptive help-seeking) and negative affect. These are widely acknowledged as being relevant concepts underpinning psychological resilience. The Q sort was conducted with 19 students in total; participants arranged statement cards according to how similar to themselves they believed each statement to be. This was done after reading a vignette describing an experience of academic failure. Commonalities and differences between the sorts from all participants were then analyzed in terms of correlations and response patterns. Following data collection, the participants' responses were initially analyzed and the key perspectives (factors) to emerge were labelled 'persevering individuals' and 'emotional networkers'. The differences between the two perspectives centre around the level of emotion felt when faced with barriers and the extent to which students enlist the help of others inside and outside of the university. The dominant factor to emerge from the sorts of 'persevering individuals' demonstrated that many distance learners are tenacious. However, for other students, the level of emotional and social support is pivotal in helping them complete their studies when facing adversity. This was demonstrated by the 'emotional networkers' perspective. This research forms a starting point for further work on engaging and retaining online students at university and can potentially provide insight into how universities can lower attrition rates on distance learning courses.
Keywords: academic resilience, distance learning, online learning, q methodology
Procedia PDF Downloads 126
2691 Machine Learning and Internet of Thing for Smart-Hydrology of the Mantaro River Basin
Authors: Julio Jesus Salazar, Julio Jesus De Lama
Abstract:
The fundamental objective of hydrological studies applied to the engineering field is to determine the statistically consistent volumes or water flows that, in each case, allow us to size or design a series of elements or structures to effectively manage and develop a river basin. To determine these values, there are several ways of working within the framework of traditional hydrology: (1) study each of the factors that influence the hydrological cycle, (2) study the historical behavior of the hydrology of the area, (3) study the historical behavior of hydrologically similar zones, and (4) other studies (rain simulators or experimental basins). Of course, this range of studies in a given basin is very varied and complex and presents the difficulty of collecting the data in real time. In this complex setting, the study of these variables can only be managed by collecting and transmitting data to decision centers through the Internet of Things and artificial intelligence. Thus, this research work implemented the learning project of the sub-basin of the Shullcas river in the Andean basin of the Mantaro river in Peru. The sensor firmware to collect and communicate hydrological parameter data was programmed and tested in similar basins of the European Union. The machine learning application was programmed to choose the algorithms that lead to the best solution for determining the rainfall-runoff relationship captured in the different polygons of the sub-basin. Tests were carried out in the mountains of Europe and in the sub-basins of the Shullcas river (Huancayo) and the Yauli river (Jauja), at altitudes close to 5000 m a.s.l., giving the following conclusions: to guarantee correct communication, the distance between devices should not exceed 15 km. To minimize the energy consumption of the devices and avoid collisions between packets, distances should remain between 5 and 10 km; in this way the transmission power can be reduced and a higher bitrate can be used. In case the communication elements of the devices of the network (Internet of Things) installed in the basin do not have good visibility between them, the distance should be reduced to the range of 1-3 km. The energy efficiency of the Atmel microcontrollers present in Arduino is not adequate to meet the requirements of system autonomy. To increase the autonomy of the system, it is recommended to use low-consumption systems, such as ultra-low-power ARM Cortex-class microcontrollers (e.g., the Cortex-M series), and high-efficiency direct current (DC) to direct current (DC) converters. The machine learning system has begun learning the Shullcas system to generate the best hydrology of the sub-basin. This will improve as the machine learning models and the data entering the big data store are updated every second, providing services to each application of the complex system and returning the best estimates of the determined flows.
Keywords: hydrology, internet of things, machine learning, river basin
Procedia PDF Downloads 158
2690 Calculation of the Added Mass of a Submerged Object with Variable Sizes at Different Distances from the Wall via Lattice Boltzmann Simulations
Authors: Nastaran Ahmadpour Samani, Shahram Talebi
Abstract:
Added mass is an important quantity in the analysis of the motion of a submerged object, which can be calculated by solving the equation of potential flow around the object. Here, we consider systems in which a square object is submerged in a channel of fluid and moves parallel to the wall. The corresponding added mass at a given distance from the wall d and for the object size s (which is the side of the square object) is calculated via lattice Boltzmann simulation. By changing d and s separately, their effect on the added mass is studied systematically. The simulation results reveal that for systems in which d > 4s, the distance no longer influences the added mass. The added mass increases when the object approaches the wall and reaches its maximum value as it moves on the wall (d → 0). In this case, the added mass is about 73% larger than that of the case d = 4s. In addition, it is observed that the added mass increases with increasing object size s and vice versa.
Keywords: lattice Boltzmann simulation, added mass, square, variable size
Procedia PDF Downloads 475
2689 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System
Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple
Abstract:
This article describes a software package called ROSgeoregistration intended for use with the Robot Operating System (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, test, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin which uses the real-time sensor pose and image formation model to generate simulated imagery using the specified reference image is provided, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google's Earth Engine to the Gazebo simulator, (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region, and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and windspeed sensor data. To highlight functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors and an example use case for developing and evaluating image-based UAS position feedback, i.e., pose for image-based Guidance Navigation and Control (GNC) applications.
Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation
Procedia PDF Downloads 100
2688 Electrical and Structural Properties of Polyaniline-Fullerene Nanocomposite
Authors: M. Nagaraja, H. M. Mahesh, K. Rajanna, M. Z. Kurian, J. Manjanna
Abstract:
In recent years, composites of conjugated polymers with fullerenes (C60) have attracted considerable scientific and technological attention in the field of organic electronics because they possess a novel combination of electrical, optical, ferromagnetic, mechanical and sensor properties. These properties represent major advances in the design of organic electronic devices. With the addition of C60 to the conjugated polymer matrix, the primary photo-excitation of the conjugated polymer undergoes an ultrafast electron transfer, and it has been demonstrated that fullerene molecules may serve as efficient electron acceptors in polymeric solar cells. The present paper includes systematic studies on the effect of the presence of C60 on the electrical, structural and sensor properties of the polyaniline (PANI) matrix. The polyaniline-fullerene (PANI/C60) composite is prepared by the introduction of fullerene during the polymerization of aniline, with ammonium persulfate and dodecyl benzene sulfonic acid as oxidant and dopant, respectively. FTIR spectroscopy indicated the interaction between PANI and C60. X-ray diffraction proved the formation of a PANI/C60 complex. The SEM image shows the highly branched chain structure of the PANI in the presence of C60. The conductivity of the PANI/C60 was found to be more than ten orders of magnitude higher than that of pure PANI.
Keywords: conductivity, fullerene, nanocomposite, polyaniline
Procedia PDF Downloads 216
2687 Intelligent Technology for Real-Time Monitor and Data Analysis of the Aquaculture Toxic Water Concentration
Authors: Chin-Yuan Hsieh, Wei-Chun Lu, Yu-Hong Zeng
Abstract:
Mass fish deaths are frequently found due to fish disease caused by the deterioration of aquaculture water quality. Toxic ammonia is produced by the animals as a byproduct of protein metabolism. The system is designed with smart sensor technology and developed with a mathematical model to monitor the water parameters 24 hours a day and predict the relationships among twelve water quality parameters for monitoring water quality in aquaculture. All measured data are stored in a cloud server. In productive ponds, the daytime pH may be high enough to be lethal to the fish. A sudden change in aquaculture conditions often results in an increase in the pH value of the water, a lack of dissolved oxygen, water quality deterioration and yield reduction. In real measurements, the system successfully sends a message to the user's smartphone when water quality conditions are bad. From data comparisons between measurement and model simulation at a fish aquaculture site, the difference in parameters is less than 2% and the correlation coefficient is at least 98.34%. The solubility of oxygen decreases exponentially with the elevation of water temperature. The correlation coefficient is 98.98%.
Keywords: aquaculture, sensor, ammonia, dissolved oxygen
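Editor's note: a hedged sketch of the kind of fit mentioned above (dissolved-oxygen solubility decaying exponentially with temperature, reported with a correlation coefficient). The data and coefficients are synthetic placeholders, not the authors' pond measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic placeholder data: dissolved-oxygen solubility (mg/L) vs water temperature (deg C)
temp = np.linspace(5, 35, 30)
do = 14.6 * np.exp(-0.02 * temp) + np.random.default_rng(1).normal(0, 0.1, temp.size)

def model(t, a, b):
    # solubility decreases exponentially with temperature
    return a * np.exp(-b * t)

params, _ = curve_fit(model, temp, do, p0=(14.0, 0.02))
pred = model(temp, *params)
r = np.corrcoef(do, pred)[0, 1]            # correlation coefficient between data and model
print(f"a={params[0]:.2f}, b={params[1]:.4f}, r={r:.4f}")
```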
Procedia PDF Downloads 281
2686 Determination of Nanomolar Mercury (II) by Using Multi-Walled Carbon Nanotubes Modified Carbon Zinc/Aluminum Layered Double Hydroxide-3(4-Methoxyphenyl) Propionate Nanocomposite Paste Electrode
Authors: Illyas Md Isa, Sharifah Norain Mohd Sharif, Norhayati Hashim
Abstract:
A mercury(II) sensor was developed by using a multi-walled carbon nanotube (MWCNTs) paste electrode modified with Zn/Al layered double hydroxide-3(4-methoxyphenyl) propionate nanocomposite (Zn/Al-HMPP). The optimum conditions by cyclic voltammetry were observed at an electrode composition of 2.5% (w/w) Zn/Al-HMPP/MWCNTs, 0.4 M potassium chloride, pH 4.0, and a scan rate of 100 mV s⁻¹. The sensor exhibited wide linear ranges from 1×10⁻³ M to 1×10⁻⁷ M Hg²⁺ and 1×10⁻⁷ M to 1×10⁻⁹ M Hg²⁺, with a detection limit of 1×10⁻¹⁰ M Hg²⁺. The high sensitivity of the proposed electrode towards Hg(II) was confirmed by double potential-step chronocoulometry, which indicated the following values: diffusion coefficient 1.5445×10⁻⁹ cm² s⁻¹, surface charge 524.5 µC s⁻½ and surface coverage 4.41×10⁻² mol cm⁻². The presence of a 25-fold concentration of most metal ions had no influence on the anodic peak current. With characteristics such as high sensitivity, selectivity and repeatability, the electrode was then proposed as an appropriate alternative for the determination of mercury.
Keywords: cyclic voltammetry, mercury(II), modified carbon paste electrode, nanocomposite
Procedia PDF Downloads 432
2685 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications
Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna
Abstract:
Spatial Augmented Reality is a variation of Augmented Reality where the Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display itself is a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality; the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements. In this case, the core elements are referred to as a projection system and an input system, and the process to achieve this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale is similar to the real-world scale, meaning that a virtual object will maintain its perceived dimensions when projected to the real world. Also, the input system is calibrated if the application knows the relative position of a point in the projection plane and the RGB-depth sensor origin point. Any kind of projection technology can be used (light-based projectors, close-range projectors, and screens), as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests are made using a Kinect V2 as the input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, some variables were obtained to measure the method's performance. It was demonstrated that the method can solve different arrangements, giving the user a wide range of setup possibilities.
Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality
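Editor's note: the paper's calibration is markerless and more involved; the sketch below only illustrates the basic spatial-awareness idea of relating RGB-depth sensor coordinates to projector coordinates on the projection plane with a homography. The point correspondences are placeholder values assumed to have been detected already.

```python
import numpy as np
import cv2

# Assumed correspondences between projected points (projector pixel coordinates)
# and the same points as seen by the RGB-depth sensor (sensor pixel coordinates).
projector_pts = np.array([[100, 100], [1820, 100], [1820, 980], [100, 980]], dtype=np.float32)
sensor_pts = np.array([[212, 148], [1693, 139], [1710, 934], [201, 944]], dtype=np.float32)

# Homography mapping sensor-plane coordinates to projector-plane coordinates
H, mask = cv2.findHomography(sensor_pts, projector_pts)

# An interaction point detected by the sensor can now be re-projected into projector space
touch = np.array([[[900.0, 500.0]]], dtype=np.float32)
print(cv2.perspectiveTransform(touch, H))
```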
Procedia PDF Downloads 122
2684 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm
Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang
Abstract:
The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. The set of vertices in the MST with the same degree is regarded as a whole, which is used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that takes both degree and Euclidean distance into consideration is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and the corresponding time complexity is analyzed as well. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate the effectiveness of the presented algorithm compared to three existing initialization methods.
Keywords: degree, initial cluster center, k-means, minimum spanning tree
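Editor's note: a simplified Python stand-in for the method described above. It builds the MST with SciPy, treats high-degree vertices as skeleton points, picks k far-apart skeleton points as initial centers, and hands them to scikit-learn's k-means; the degree/distance heuristic here is a placeholder for the paper's exact measure.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import squareform, pdist
from sklearn.cluster import KMeans

def mst_init_centers(X, k):
    """Pick k initial centers guided by the minimum spanning tree of X
    (simplified: high-degree MST vertices are treated as skeleton points)."""
    D = squareform(pdist(X))
    mst = minimum_spanning_tree(D).toarray()
    adjacency = (mst > 0) | (mst.T > 0)
    degree = adjacency.sum(axis=1)
    skeleton = np.argsort(degree)[::-1][: 5 * k]   # highest-degree candidate points
    centers = [skeleton[0]]
    for idx in skeleton[1:]:                       # keep candidates far from chosen centers
        if len(centers) == k:
            break
        if all(D[idx, c] > np.median(D) for c in centers):
            centers.append(idx)
    for idx in skeleton:                           # fill up if too few survived
        if len(centers) == k:
            break
        if idx not in centers:
            centers.append(idx)
    return X[np.array(centers)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(50, 2)) for c in ((0, 0), (5, 5), (0, 5))])
init = mst_init_centers(X, k=3)
labels = KMeans(n_clusters=3, init=init, n_init=1).fit_predict(X)
```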
Procedia PDF Downloads 409
2683 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms
Authors: Mawloud Mosbah, Bachir Boucheham
Abstract:
The color histogram is considered the oldest method used by CBIR systems for indexing images. However, global histograms do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving the segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as CCV (Color Coherent Vector) are based on strong segmentation. Indexation based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. The dissimilarity between two images is consequently reduced to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is taken into account to rank images, which means that the lowest value designates which sub-region is utilized to index the images of the collection being queried. In this paper, we examine the local histogram indexation method in order to compare its results against those given by the global histogram. We also address another noteworthy issue when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, which sub-region among the N*N sub-regions to base the image indexing on. Based on the results achieved here, it seems that relying on local histograms, which imposes an extra overhead on the system by involving another preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance
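Editor's note: a minimal sketch of the local-histogram indexing just described. For brevity it uses non-overlapping blocks (the paper uses overlapping ones) and returns both the lowest of the N*N Euclidean distances and the block pair that produced it.

```python
import numpy as np

def local_histograms(img, n=3, bins=8):
    """Split an RGB image into an n x n grid of blocks (non-overlapping here,
    for brevity) and return one normalized color histogram per block."""
    h, w, _ = img.shape
    hists = []
    for i in range(n):
        for j in range(n):
            block = img[i * h // n:(i + 1) * h // n, j * w // n:(j + 1) * w // n]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins, bins, bins), range=((0, 256),) * 3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)

def min_block_distance(img_a, img_b, n=3):
    """All N*N Euclidean distances between block-histogram pairs; the lowest
    one is the value used to rank images, as discussed above."""
    ha, hb = local_histograms(img_a, n), local_histograms(img_b, n)
    dists = np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=2)
    return dists.min(), np.unravel_index(dists.argmin(), dists.shape)

rng = np.random.default_rng(0)
query = rng.integers(0, 256, (120, 120, 3), dtype=np.uint8)
candidate = rng.integers(0, 256, (120, 120, 3), dtype=np.uint8)
print(min_block_distance(query, candidate))
```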
Procedia PDF Downloads 358
2682 QI Wireless Charging a Scope of Magnetic Inductive Coupling
Authors: Sreenesh Shashidharan, Umesh Gaikwad
Abstract:
Qi (pronounced 'chee') is an interface standard for inductive electrical power transfer over distances of up to 4 cm (1.6 inches). The Qi system comprises a power transmission pad and a compatible receiver in a portable device, which is placed on top of the power transmission pad and charges using the principle of electromagnetic induction. An alternating current is passed through the transmitter coil, generating a magnetic field. This, in turn, induces a voltage in the receiver coil, which can be used to power a mobile device or charge a battery. The efficiency of the power transfer depends on the coupling (k) between the inductors and their quality factor (Q). The coupling is determined by the distance between the inductors (z) and their relative size (D2/D). The coupling is further determined by the shape of the coils and the angle between them. If the receiver coil is at a certain distance from the transmitter coil, only a fraction of the magnetic flux generated by the transmitter coil penetrates the receiver coil and contributes to the power transmission. The more flux reaches the receiver, the better the coils are coupled.
Keywords: inductive electric power, electromagnetic induction, magnetic flux, coupling
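Editor's note: a short numeric illustration of the quantities named above, the coupling coefficient k = M / sqrt(L1·L2) and the voltage induced in the receiver coil. The coil values and frequency are illustrative, not figures from the Qi specification.

```python
import math

# Illustrative coil parameters (not taken from the Qi specification)
L1 = 24e-6        # transmitter coil self-inductance (H)
L2 = 12e-6        # receiver coil self-inductance (H)
M = 6e-6          # mutual inductance (H), set by distance z and relative size D2/D
f = 140e3         # operating frequency (Hz)
I1 = 1.0          # transmitter coil current amplitude (A)

k = M / math.sqrt(L1 * L2)        # coupling coefficient
V2 = 2 * math.pi * f * M * I1     # open-circuit induced receiver voltage, |V2| = w*M*I1
print(f"k = {k:.2f}, induced voltage ~ {V2:.2f} V")
```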
Procedia PDF Downloads 730
2681 Phylogenetic Relationships of the Malaysian Primates Cercopithecine Based on COI Gene Sequences
Authors: B. M. Md-Zain, N. A. Rahman, M. A. B. Abdul-Latiff, W. M. R. Idris
Abstract:
We conducted molecular research to portray the phylogenetic relationships of Malaysian primates, particularly in the genus Macaca. We have sequenced the cytochrome c oxidase subunit I (COI) gene of mitochondrial DNA from several individuals of M. fascicularis and M. arctoides. PCR amplifications were performed and COI DNA sequences were aligned using ClustalW. Phylogenetic trees were constructed using distance analyses employing the neighbor-joining (NJ) algorithm. We managed to sequence 700 bp of COI DNA. The tree topology showed that M. fascicularis did not cluster according to the phylogeographic division of Peninsular Malaysia. Individuals from Negeri Sembilan merged together with samples from Perak and Penang into one clade. In addition, phylogenetic analyses indicated that M. arctoides was classified into the sinica group instead of the fascicularis group, supported by genetic distance data. The COI gene is an effective locus for clarifying the phylogenetic position of M. arctoides but not for discriminating M. fascicularis populations in Peninsular Malaysia.
Keywords: cercopithecine, long-tailed macaque, Macaca fascicularis, Macaca arctoides
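Editor's note: a hedged sketch of the distance-based neighbour-joining step using Biopython. The toy alignment and sample IDs are placeholders standing in for the ~700 bp ClustalW-aligned COI sequences.

```python
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

# Toy stand-in for a ClustalW-aligned set of COI sequences (IDs are hypothetical)
alignment = MultipleSeqAlignment([
    SeqRecord(Seq("ATGGCTACCCTAGGACTT"), id="Mfas_NegeriSembilan"),
    SeqRecord(Seq("ATGGCTACCTTAGGACTT"), id="Mfas_Perak"),
    SeqRecord(Seq("ATGGCAACCTTAGGACTT"), id="Mfas_Penang"),
    SeqRecord(Seq("ATGACTGCCCTAGGTCTT"), id="Marc_sample"),
])

calculator = DistanceCalculator("identity")       # pairwise genetic distances
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(distance_matrix)         # neighbour-joining topology
Phylo.draw_ascii(nj_tree)
```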
Procedia PDF Downloads 355
2680 An Analysis of Pick Travel Distances for Non-Traditional Unit Load Warehouses with Multiple P/D Points
Authors: Subir S. Rao
Abstract:
Existing warehouse configurations use non-traditional aisle designs with a central P/D point in their models, which is mathematically simple but less practical. Many warehouses use multiple P/D points to avoid congestion for pickers, and different warehouses have different flow policies and infrastructure for using the P/D points. Many warehouses use multiple P/D points with non-traditional aisle designs in their analytical models. Standard warehouse models introduce one-sided multiple P/D points in a flying-V warehouse and minimize the pick distance for one-way travel between an active P/D point and a pick location, assuming uniform flow rates across the P/D points. A simulation of the mathematical model generally uses four fixed configurations of P/D points which lie on two different sides of the warehouse. It can be easily proved that if the source and destination P/D points are both chosen randomly, in a uniform way, then minimizing the one-way travel is the same as minimizing the two-way travel. Another warehouse configuration analytically models the warehouse for multiple one-sided P/D points while keeping the angles of the cross-aisles and picking aisles as decision variables. The minimization of the one-way pick travel distance from the P/D point to the pick location, by finding the optimal position/angle of the cross-aisle and picking aisle for warehouses having different numbers of multiple P/D points with variable flow rates, is also one of the objectives. Most models of warehouses with multiple P/D points are one-way travel models, and we extend these analytical models to minimize the two-way pick travel distance, wherein the destination P/D is chosen optimally for the return route, which is not the same as minimizing the one-way travel. In most warehouse models, the return P/D is chosen randomly, but in our research, the return-route P/D point is chosen optimally. Such warehouses are common in practice, where the flow rates at the P/D points are flexible and depend entirely on the positions of the picks. A good warehouse management system is efficient in consolidating orders over multiple P/D points in warehouses where the P/D function is flexible. In the latter arrangement, pickers and shrink-wrap processes are not assigned to particular P/D points, which ultimately makes the P/D points more flexible and easy to use interchangeably for picking and deposits. The number of P/D points considered in this research increases uniformly from a single central one up to the case where each aisle symmetrically has a P/D point below it.
Keywords: non-traditional warehouse, V cross-aisle, multiple P/D point, pick travel distance
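Editor's note: a toy Monte Carlo illustration of the point made above, that choosing the return P/D optimally is not the same as choosing it randomly. It uses rectilinear travel to front-wall P/D points as a crude stand-in for the paper's analytical aisle models; all dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, n_pd = 60.0, 30.0, 5
# P/D points spread along the front wall of the warehouse
pd_points = np.column_stack([np.linspace(5, width - 5, n_pd), np.zeros(n_pd)])

def travel(pick, source_pd, dest_pd):
    """Two-way rectilinear travel: source P/D -> pick location -> destination P/D."""
    return np.abs(pick - source_pd).sum() + np.abs(pick - dest_pd).sum()

picks = np.column_stack([rng.uniform(0, width, 10000), rng.uniform(0, depth, 10000)])
random_total, optimal_total = 0.0, 0.0
for pick in picks:
    src = pd_points[rng.integers(n_pd)]                                  # source chosen at random
    random_total += travel(pick, src, pd_points[rng.integers(n_pd)])     # random return P/D
    optimal_total += min(travel(pick, src, dst) for dst in pd_points)    # optimal return P/D
print("mean two-way travel, random return :", random_total / len(picks))
print("mean two-way travel, optimal return:", optimal_total / len(picks))
```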
Procedia PDF Downloads 39
2679 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into train, test, and validation sets in various proportions. Kernel functions and hyperparameter tuning were used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques to determine evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
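Editor's note: a hedged sketch of the SVM regression step with the feature list given above. The sensor data here are synthetic placeholders; in the study the ET target comes from the FAO56/soil-water-balance computation, and the authors' own tooling (the keywords mention KNIME and RStudio) is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Synthetic placeholders for the sensor-node features
# [Rs, T, P, RH, u2, R, DP, ST, dSM] and an ET target (mm/day)
rng = np.random.default_rng(0)
X = rng.normal(size=(365, 9))
y = 3.0 + 0.8 * X[:, 0] - 0.5 * X[:, 3] + 0.3 * X[:, 4] + rng.normal(0, 0.2, 365)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# SVR with kernel choice and hyperparameter tuning by cross-validated grid search
model = make_pipeline(StandardScaler(), SVR())
grid = GridSearchCV(model, {"svr__kernel": ["rbf", "linear"],
                            "svr__C": [1, 10, 100],
                            "svr__epsilon": [0.05, 0.1]}, cv=5)
grid.fit(X_train, y_train)

pred = grid.predict(X_test)
print("R2  :", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("MAE :", mean_absolute_error(y_test, pred))
```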
Procedia PDF Downloads 68
2678 Optimal Sensing Technique for Estimating Stress Distribution of 2-D Steel Frame Structure Using Genetic Algorithm
Authors: Jun Su Park, Byung Kwan Oh, Jin Woo Hwang, Yousok Kim, Hyo Seon Park
Abstract:
For structural safety, the maximum stress calculated from the stress distribution of a structure is widely used. The stress distribution can be estimated from the deformed shape of the structure obtained from measurement. Although the estimation of stress is strongly affected by the location and number of sensing points, most studies have conducted stress estimation without a reasonable basis for the sensing plan, such as the location and number of sensors. In this paper, an optimal sensing technique for estimating the stress distribution is proposed. This technique determines the optimal location and number of sensing points for a 2-D frame structure while minimizing the error in the stress distribution between the analytical model and the estimation by cubic smoothing splines, using a genetic algorithm. To verify the proposed method, the optimal sensor measurement technique is applied in simulation tests on a 2-D steel frame structure. The simulation tests are performed under various loading scenarios. Through those tests, the optimal sensing plan for the structure is suggested and verified.
Keywords: genetic algorithm, optimal sensing, optimizing sensor placements, steel frame structure
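Editor's note: a toy version of the idea described above. A simple genetic algorithm selects sensing locations along a one-dimensional member so that a cubic smoothing spline through the sensed deflections best reproduces a placeholder analytical deflection curve; the paper's actual method works on a 2-D steel frame and on stress, not on this toy shape.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
x_full = np.linspace(0, 1, 200)
w_full = np.sin(np.pi * x_full) * 0.01          # placeholder "analytical" deflection shape
n_sensors, pop_size, n_gen = 5, 40, 60
candidates = np.arange(10, 190)                  # admissible sensing indices

def fitness(indices):
    idx = np.sort(indices)
    spline = UnivariateSpline(x_full[idx], w_full[idx], k=3, s=0)
    return -np.mean((spline(x_full) - w_full) ** 2)     # negative error -> maximise

def random_individual():
    return rng.choice(candidates, size=n_sensors, replace=False)

population = [random_individual() for _ in range(pop_size)]
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in population])
    parents = [population[i] for i in np.argsort(scores)[::-1][: pop_size // 2]]  # selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = rng.choice(len(parents), 2, replace=False)
        child = np.unique(np.concatenate([parents[a][: n_sensors // 2], parents[b]]))[:n_sensors]
        if rng.random() < 0.3:                           # mutation
            child[rng.integers(n_sensors)] = rng.choice(candidates)
        if len(np.unique(child)) == n_sensors:
            children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("optimal sensing locations:", np.sort(x_full[best]))
```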
Procedia PDF Downloads 529
2677 Assessment of Planet Image for Land Cover Mapping Using Soft and Hard Classifiers
Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi
Abstract:
Planet imagery is a new data source from Planet Labs. This research is concerned with the assessment of Planet imagery for land cover mapping. Two pixel-based classifiers and one subpixel-based classifier were compared. Firstly, rectification of the Planet image was performed. Secondly, a comparison between minimum distance, maximum likelihood and neural network classifications of the Planet image was performed. Thirdly, the overall classification accuracy and kappa coefficient were calculated. Results indicate that, for land cover mapping, neural network classification performs best, followed by the maximum likelihood classifier and then minimum distance classification.
Keywords: planet image, land cover mapping, rectification, neural network classification, multilayer perceptron, soft classifiers, hard classifiers
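Editor's note: a hedged sketch comparing a minimum-distance (nearest-centroid), a maximum-likelihood (Gaussian) and a neural-network (multilayer perceptron) classifier, reporting overall accuracy and the kappa coefficient as in the study. The synthetic four-band "pixels" are placeholders for the actual Planet bands and training polygons.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder "pixels": 4 spectral bands, 5 land-cover classes
X, y = make_classification(n_samples=3000, n_features=4, n_informative=4,
                           n_redundant=0, n_classes=5, n_clusters_per_class=1,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "minimum distance": NearestCentroid(),
    "maximum likelihood": QuadraticDiscriminantAnalysis(),
    "neural network (MLP)": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    pred = clf.fit(X_train, y_train).predict(X_test)
    print(f"{name:22s} overall accuracy = {accuracy_score(y_test, pred):.3f}, "
          f"kappa = {cohen_kappa_score(y_test, pred):.3f}")
```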
Procedia PDF Downloads 185
2676 Corrosion of Steel in Relation with Hydrogen Activity of Concentrated HClO4 Media: Realisation Sensor and Reference Electrode
Authors: B. Hammouti, H. Oudda, A. Benabdellah, A. Benayada, A. Aouniti
Abstract:
The corrosion behaviour of carbon steel was studied in various concentrated HClO4 solutions. To explain the acid attack in relation to H+ activity, a new sensor was realised: two carbon paste electrodes (CPE) were constructed by incorporating ferrocene (Fc) and an orthoquinone into the carbon paste matrix and crossed by a weak current to stabilize the potential difference. The potentiometric method at imposed weak current between these two electrodes permits the in situ determination of both the concentration and the acidity level of various concentrated HClO4 solutions. The different factors affecting the potential at imposed current, such as current intensity, temperature and H+ ion concentration, are studied. The potentials measured between the ferrocene and chloranil electrodes are directly linked to the acid concentration. The acidity function Ri(H) defined here represents the determination of the H+ activity and constitutes the extension of pH to concentrated acid solutions. Ri(H) has been determined and compared to the Strehlow Ro(H), Janata HGF and Hammett Ho functions. The collected data permit a scale of strength of concentrated mineral acids at a given concentration to be established. Ri(H) is numerically equal to the thermodynamic Ro(H), but deviates from the Hammett functions based on indicator determination. The CPE with inserted ferrocene, in the presence of ferricinium (Fc+) ion in concentrated HClO4 at various concentrations, is realized without junction potential and may play the role of a practical reference electrode (FRE) in concentrated acids. Fc+ was easily prepared in a biphasic HClO4-acid medium by the quantitative oxidation of ferrocene by ortho-chloranil (oQ). The potential of the FRE is stable with time. The variation of the equilibrium potential of the Fc/Fc+ interface at various concentrations of Fc+ (10⁻⁴ to 2×10⁻² M) obeyed the Nernst equation with a slope of 0.059 V per decade. Corrosion rates obtained by weight loss and electrochemical techniques were then easily linked to the acidity level.
Keywords: ferrocene, Strehlow, concentrated acid, corrosion, generalised pH, sensor carbon paste electrode
Procedia PDF Downloads 354
2675 Functional Poly(Hedral Oligomeric Silsesquioxane) Nano-Spacer to Boost Quantum Resistive Vapour Sensors' Sensitivity and Selectivity
Authors: Jean-Francois Feller
Abstract:
The analysis of the volatolome emitted by the human body with a sensor array (e-nose) is a promising method for clinical applications, producing an olfactive fingerprint characteristic of a person's health state. However, the amount of volatile organic compounds (VOC) to detect, being in the range of parts per billion (ppb), and their diversity (several hundred) justify developing ever more sensitive and selective vapour sensors to improve the discrimination ability of the e-nose. Quantum resistive vapour sensors (vQRS) made with nanostructured conductive polymer nanocomposite transducers have shown great versatility in both their fabrication and operation for detecting volatiles of interest such as cancer biomarkers. However, it has been shown that their chemo-resistive response is highly dependent on the quality of the inter-particular junctions in the percolated architecture. The present work investigates the effectiveness of poly(hedral oligomeric silsesquioxane) acting as a nanospacer to amplify the disconnectability of the conducting network and thus maximize the vQRS's sensitivity to VOC.
Keywords: volatolome, quantum resistive vapour sensor, nanostructured conductive polymer nanocomposites, olfactive diagnosis
Procedia PDF Downloads 19
2674 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are being used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of Field Programmable Gate Arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the Finite Element Modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on an MPSoC XCZU9EG-2FFVB1156 platform from Xilinx® that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of 180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces allows the tactile properties of the touched object to be adequately reconstructed, similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
Procedia PDF Downloads 194
2673 Designing and Analyzing Sensor and Actuator of a Nano/Micro-System for Fatigue and Fracture Characterization of Nanomaterials
Authors: Mohammad Reza Zamani Kouhpanji
Abstract:
This paper presents a MEMS/NEMS device for fatigue and fracture characterization of nanomaterials. This device can apply static loads, cyclic loads, and their combinations in nanomechanical experiments. It is based on the electromagnetic force induced between paired parallel wires carrying electrical currents. Using this concept, the actuator and sensor parts of the device were designed and analyzed while considering the practical limitations. Since the PWCC device uses only two wires for the actuation part and the sensing part, its fabrication process is far simpler than that of the available MEMS/NEMS devices. The total gain and phase shift of the MEMS/NEMS device were calculated and investigated. Furthermore, the maximum gain and sensitivity of the MEMS/NEMS device were studied to demonstrate the capability and usability of the device for a wide range of nanomaterial samples. This device can be readily integrated into SEM/TEM instruments to provide real-time study of the mechanical behaviors of nanomaterials as well as their fatigue and fracture properties, softening or hardening behaviors, and the initiation and propagation of nanocracks.
Keywords: sensors and actuators, MEMS/NEMS devices, fatigue and fracture nanomechanical testing device, static and cyclic nanomechanical testing device
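Editor's note: the actuation principle above is the standard force between parallel current-carrying wires, F/L = μ0·I1·I2/(2π·d); a quick numeric check follows. The currents, spacing and interaction length are illustrative, not the device's actual dimensions.

```python
import math

MU_0 = 4 * math.pi * 1e-7        # vacuum permeability (T*m/A)
I1, I2 = 10e-3, 10e-3            # illustrative wire currents (A)
d = 2e-6                         # illustrative wire separation (m)
length = 100e-6                  # illustrative interaction length (m)

# Force per unit length between parallel wires; attractive for parallel currents
force_per_length = MU_0 * I1 * I2 / (2 * math.pi * d)
print(f"F/L = {force_per_length:.3e} N/m, F = {force_per_length * length:.3e} N")
```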
Procedia PDF Downloads 296
2672 Printed Electronics for Enhanced Monitoring of Organ-on-Chip Culture Media Parameters
Authors: Alejandra Ben-Aissa, Martina Moreno, Luciano Sappia, Paul Lacharmoise, Ana Moya
Abstract:
Organ-on-Chip (OoC) stands out as a highly promising approach for drug testing, presenting a cost-effective and ethically superior alternative to conventional in vivo experiments. These cutting-edge devices emerge from the integration of tissue engineering and microfluidic technology, faithfully replicating the physiological conditions of targeted organs. Consequently, they offer a more precise understanding of drug responses without the ethical concerns associated with animal testing. In addressing the limitations of OoC arising from conventional and time-consuming techniques, Lab-on-Chip (LoC) emerges as a disruptive technology capable of providing real-time monitoring without compromising sample integrity. This work develops LoC platforms that can be integrated within OoC platforms to monitor essential culture media parameters, including glucose, oxygen, and pH, facilitating the straightforward exchange of sensing units within a dynamic and controlled environment without disrupting cultures. This approach preserves the experimental setup, minimizes the impact on cells, and enables efficient, prolonged measurement. The LoC system is fabricated following the patented methodology protected by EU patent EP4317957A1. One of the key challenges, integrating sensors in a biocompatible, feasible, robust, and scalable manner, is addressed through fully printed sensors, ensuring a customized, cost-effective, and scalable solution. With this technique, sensor reliability is enhanced, providing high sensitivity and selectivity for accurate parameter monitoring. In the present study, the LoC is validated by measuring a complete culture medium. The oxygen sensor provided a measurement range from 0 mgO2/L to 6.3 mgO2/L. The pH sensor demonstrated a measurement range spanning 2 pH units to 9.5 pH units. Additionally, the glucose sensor achieved a measurement range from 0 mM to 11 mM. All measurements were performed with the sensors integrated in the LoC. In conclusion, this study showcases the impactful synergy of OoC technology with LoC systems using fully printed sensors, marking a significant step forward in ethical and effective biomedical research, particularly in drug development. This innovation not only meets current demands but also lays the groundwork for future advancements in precision and customization within scientific exploration.
Keywords: organ on chip, lab on chip, real time monitoring, biosensors
Procedia PDF Downloads 11
2671 Advocating for Those with Limited Mobility
Authors: Dorothy I. Riddle
Abstract:
Limited mobility (an inability to walk more than 15 meters without sitting down to rest) restricts full community participation for 13 percent of Canadian adults (or 4.2 million persons), yet Canadian accessibility standards are silent on distance to be walked as an accessibility barrier to be addressed. Instead, they focus on ensuring access for the wheeled mobility devices used regularly by […]. The Accessible Canada Act mandates that Canada be barrier-free by 2040, which will necessitate eliminating distance to be walked as a barrier in federal programs and services. This paper details the results of a multi-year research project funded by Accessibility Standards Canada to document the lived experience of those struggling with limited mobility and to make recommendations on how to ensure accessibility for those with limited mobility. Over 2,600 Canadians from across Canada participated in an online survey and follow-up focus groups. The results underscored the importance of providing not only mobility supports in public facilities but also the information necessary for planning access to federal programs and services. As numerous participants indicated, if they weren't sure how far they would have to walk, they simply stayed home and depended on friends and relatives for help with errands or appointments. This included failing to participate in civic activities, such as voting, for fear of having to walk too far and stand unsupported for too long. Types of information that were deemed critical included whether or not mobility aids were available, where seating to rest was located throughout the facility, what alternatives to standing while waiting for service and having to walk to the service provider (rather than the provider coming to the customer) were available, and diagrams of accessible parking and its relationship to elevators and services.
Keywords: accessibility standards, distance to be walked, limited mobility, mobility aids, service to customer
Procedia PDF Downloads 80
2670 The Cloud Systems Used in Education: Properties and Overview
Authors: Agah Tuğrul Korucu, Handan Atun
Abstract:
The diversity and usefulness of information used in education have increased due to the development of technology. Web technologies have made enormous contributions to distance learning systems in particular. Mobile systems, among the most widely used technologies in distance education, have made it much easier to access web technologies. Not bound by space and time, individuals have the opportunity to access information on the web. In addition, the storage of educational information and resources, and access to these information and resources, is crucial for both students and teachers. Because of this importance, web technologies that provide ease of access to information and resources have been developed and disseminated. Dynamic web technologies, introduced as new technologies that enable the sharing and reuse of information, resources or applications via the Internet and turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems are one of the dynamic web technologies, defined by NIST as a model that provides access to the demanded information independent of time and space under appropriate circumstances. One of the most important advantages of cloud systems is meeting the requirements of users directly on the web regardless of hardware and software, and without dealing with installation. Hence, this study aims at using cloud services in education and investigating the services provided by cloud computing. A survey was used as the research method. The findings of this research reveal that cloud systems are used for activities such as resource sharing, collaborative work, assignment submission and feedback, and project development in the field of education, and also that cloud systems have many significant advantages in terms of facilitating teaching activities and the interaction between teacher, student and environment.
Keywords: cloud systems, cloud systems in education, online learning environment, integration of information technologies, e-learning, distance learning
Procedia PDF Downloads 347
2669 Integration of GIS with Remote Sensing and GPS for Disaster Mitigation
Authors: Sikander Nawaz Khan
Abstract:
Natural disasters such as floods, earthquakes, cyclones, volcanic eruptions and others cause immense losses of property and lives every year. The current status and actual loss information of natural hazards can be determined, and predictions for the next probable disasters can be made, using different remote sensing and mapping technologies. The Global Positioning System (GPS) calculates the exact position of damage. It can also communicate with wireless sensor nodes embedded in potentially dangerous places. GPS provides precise and accurate locations and other related information, such as the speed, track, direction and distance of the target object, to emergency responders. Remote sensing makes it possible to map damage without having physical contact with the target area. Now, with the addition of more remote sensing satellites and other advancements, early warning systems are used very efficiently. Remote sensing is being used at both local and global scales. High Resolution Satellite Imagery (HRSI), airborne remote sensing and space-borne remote sensing are playing a vital role in disaster management. Early on, Geographic Information Systems (GIS) were used to collect, arrange, and map spatial information, but they now have the capability to analyze spatial data. This analytical ability of GIS is the main reason for its adoption by different emergency service providers such as police and ambulance services. The full potential of these so-called 3S technologies cannot be realized when they are used alone. The integration of GPS and other remote sensing techniques with GIS has opened new horizons in the modeling of earth science activities. Several remote sensing cases, including the Indian Ocean tsunami in 2004, the Mount Mangart landslides and the Pakistan-India earthquake in 2005, are described in this paper.
Keywords: disaster mitigation, GIS, GPS, remote sensing
Procedia PDF Downloads 479
2668 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications
Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero
Abstract:
Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches as a function of their concentration in aqueous solutions, systematically ranging from 0.1 to 10 mg/mL. Our results demonstrate a non-linear fluorescence quenching as the concentration increases for both of the NV zero-phonon lines; the 5 mg/mL concentration shows the maximum fluorescence emission. Furthermore, this behaviour is theoretically explained as an electronic recombination process that modulates the intensity of the NV centres. Finally, to gain more insight, the FRET methodology is used to determine the fluorescence efficiency in terms of the fluorophores' separation distance. Thus, the concentration level is simulated as follows: a small distance between nanodiamonds would be considered a highly concentrated system, whereas a large distance would mean a low-concentration one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is focused on the concentration of 0.5 mg/mL, for which our studies demonstrate optimal human cell viability (99%). In this respect, this concentration has the feature of being as biocompatible as water, giving the possibility of internalizing the nanodiamonds in cells without harming the living medium. To this end, not only can we track nanodiamonds on the surface of or inside the cell with excellent precision due to their fluorescence intensity, but we can also perform thermometry tests transforming a fluorescence contrast image into a temperature contrast image.
Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry
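Editor's note: the distance dependence invoked above is the standard FRET efficiency relation E = 1/(1 + (r/R0)^6); the short sketch below tabulates it. The Förster radius used is an illustrative value, not one measured for these nanodiamonds.

```python
import numpy as np

R0 = 5.0                                  # illustrative Foerster radius (nm)
r = np.linspace(1.0, 15.0, 15)            # donor-acceptor separation (nm)

# Small separation stands in for a highly concentrated system, large for a dilute one
efficiency = 1.0 / (1.0 + (r / R0) ** 6)
for ri, ei in zip(r, efficiency):
    print(f"r = {ri:5.1f} nm  ->  E = {ei:.3f}")
```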
Procedia PDF Downloads 403
2667 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called Vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study for a range of closed-loop gain values, and the corresponding impact angle error and miss distance values are generated. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the Vector guidance law is discussed in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
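Editor's note: a sketch of the parametric sweep described above. The simulate() function is a hypothetical placeholder response surface standing in for the 3-DOF flat-earth re-entry simulation; only the extraction of the permissible region (gain pairs meeting both the miss-distance and impact-angle constraints) is shown, with made-up constraint levels.

```python
import numpy as np

def simulate(k1, k2):
    """Hypothetical stand-in for the 3-DOF re-entry simulation with Vector
    guidance gains (k1, k2); returns (miss distance [m], impact-angle error [deg])."""
    miss = abs(k1 - 1.4) * 50 + abs(k2 - 3.0) * 20        # placeholder response surface
    angle_err = abs(k2 - 3.0) * 2 + abs(k1 - 1.4) * 0.5
    return miss, angle_err

k1_grid = np.linspace(0.5, 3.0, 26)
k2_grid = np.linspace(1.0, 6.0, 26)
permissible = []
for k1 in k1_grid:
    for k2 in k2_grid:
        miss, angle_err = simulate(k1, k2)
        if miss < 10.0 and angle_err < 1.0:               # example constraint levels
            permissible.append((k1, k2))
print(f"{len(permissible)} gain pairs in the permissible region")
```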
Procedia PDF Downloads 424