Search results for: mechanical and thermal properties
90 Lead and Cadmium Spatial Pattern and Risk Assessment around Coal Mine in Hyrcanian Forest, North Iran
Authors: Mahsa Tavakoli, Seyed Mohammad Hojjati, Yahya Kooch
Abstract:
In this study, the effect of coal mining activities on lead and cadmium concentrations and distribution in soil was investigated in the Hyrcanian forest, North Iran. Sixteen plots (20×20 m) were established systematic-randomly on a 60×60 m grid in an area of 4 ha (200×200 m, with the mine entrance at the center). An area adjacent to the mine that was not affected by the mining activity was considered the control area. To investigate soil lead and cadmium concentrations, one sample was taken from the 0-10 cm depth in each plot. To study the spatial pattern of soil properties and lead and cadmium concentrations in the mining area, an area of 80×80 m (with the mine at the center) was considered, and 80 soil samples were taken systematic-randomly at 10 m intervals. Geostatistical analysis was performed via the kriging method using GS+ software (version 5.1). To estimate the impact of coal mining activities on soil quality, a pollution index was calculated. Lead and cadmium concentrations were significantly higher in the mine area (Pb: 10.97±0.30, Cd: 184.47±6.26 mg kg⁻¹) than in the control area (Pb: 9.42±0.17, Cd: 131.71±15.77 mg kg⁻¹). The mean values of the PI index indicate that Pb (1.16) and Cd (1.77) presented slight pollution. Results of the NIPI index showed that Pb (1.44) and Cd (2.52) presented slight and moderate pollution, respectively. Results of variography and the kriging method showed that it is possible to prepare interpolation maps of lead and cadmium around mining areas in the Hyrcanian forest. According to the results of the pollution and risk assessments, the forest soil was contaminated by heavy metals (lead and cadmium); therefore, using reclamation and remediation techniques in these areas is necessary.
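The single-metal pollution index (PI) and the Nemerow integrated index (NIPI) used above follow standard definitions: PI is the ratio of a measured concentration to a background concentration, and NIPI combines the mean and maximum PI. A minimal sketch is below; the background values used here are the study's control-area means as illustrative stand-ins, since the abstract does not state the actual reference values behind its reported indices.

```python
# Single-metal pollution index (PI) and Nemerow integrated index (NIPI),
# sketched from their standard definitions. Background values below are
# the control-area means, used only as illustrative stand-ins.
from math import sqrt

def pollution_index(c_sample: float, c_background: float) -> float:
    """PI = measured concentration / background concentration."""
    return c_sample / c_background

def nemerow_index(pis: list) -> float:
    """NIPI combines the mean and the maximum single-metal PI."""
    mean_pi = sum(pis) / len(pis)
    max_pi = max(pis)
    return sqrt((mean_pi ** 2 + max_pi ** 2) / 2)

pi_pb = pollution_index(10.97, 9.42)      # Pb: mine area vs control mean
pi_cd = pollution_index(184.47, 131.71)   # Cd: mine area vs control mean
print(round(pi_pb, 2), round(pi_cd, 2), round(nemerow_index([pi_pb, pi_cd]), 2))
```

Note that the study's own PI values likely rest on dedicated background concentrations rather than the control means shown here.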
Keywords: Traditional coal mining, heavy metals, pollution indicators, geostatistics, Caspian forest.
89 The Effect of Cyclic Speed on the Wear Properties of Molybdenum Disulfide Greases under Extreme Pressure Loading Using 4 Balls Wear Tests
Authors: Gabi Nehme
Abstract:
The relationship between different types of molybdenum disulfide greases under extreme pressure loading and different speed situations has been studied using Design of Experiments (DOE) at a 1200 rpm steady-state rotational speed and cyclic frequencies between 2400 and 1200 rpm, with the Plint machine software used to set up the different rotational speed situations. The research described here is aimed at providing good friction and wear performance while optimizing cyclic frequencies and MoS2 concentration, given the recent concern about grease behavior in extreme pressure applications. An extreme load of 785 N was used in conjunction with different cyclic frequencies (2400 rpm for 3.75 min, 1200 rpm for 7.5 min, 2400 rpm for 3.75 min, 1200 rpm for 7.5 min) to examine lithium-based grease with and without MoS2 for an equal number of revolutions and a total run of 36000 revolutions; the results were then compared to a 1200 rpm steady speed for the same total number of revolutions. A four-ball wear tester was utilized to run a large number of experiments randomly selected by the DOE software. The grease was combined with fine-grade or technical-grade MoS2, then heated to 75 °C, and the wear scar width was recorded at the end of each test. DOE model validation results verify that the data were highly significant and can be applied to a wide range of extreme pressure applications. Based on the simulation results and scanning electron microscope (SEM) images, it was found that wear was largely dependent on the cyclic frequency condition. It is believed that technical-grade MoS2 greases under faster cyclic speeds perform better and provide an antiwear film that can resist extreme pressure loadings. The figures showed reduced wear scar widths and improved friction values.
Keywords: MoS2 grease, wear, friction, extreme load, cyclic frequencies, aircraft grade bearing.
88 Risk Assessment of Trace Element Pollution in Gymea Bay, NSW, Australia
Authors: Yasir M. Alyazichi, Brian G. Jones, Errol McLean, Hamd N. Altalyan, Ali K. M. Al-Nasrawi
Abstract:
The main purpose of this study is to assess the sediment quality and potential ecological risk in marine sediments in Gymea Bay located in south Sydney, Australia. A total of 32 surface sediment samples were collected from the bay. Current track trajectories and velocities have also been measured in the bay. The resultant trace elements were compared with the adverse biological effect values Effect Range Low (ERL) and Effect Range Median (ERM) classifications. The results indicate that the average values of chromium, arsenic, copper, zinc, and lead in surface sediments all reveal low pollution levels and are below ERL and ERM values. The highest concentrations of trace elements were found close to discharge points and in the inner bay, and were linked with high percentages of clay minerals, pyrite and organic matter, which can play a significant role in trapping and accumulating these elements. The lowest concentrations of trace elements were found to be on the shoreline of the bay, which contained high percentages of sand fractions. It is postulated that the fine particles and trace elements are disturbed by currents and tides, then transported and deposited in deeper areas. The current track velocities recorded in Gymea Bay had the capability to transport fine particles and trace element pollution within the bay. As a result, hydrodynamic measurements were able to provide useful information and to help explain the distribution of sedimentary particles and geochemical properties. This may lead to knowledge transfer to other bay systems, including those in remote areas. These activities can be conducted at a low cost, and are therefore also transferrable to developing countries. 
The advent of portable instruments to measure trace elements in the field has also contributed to the development of these lower cost and easily applied methodologies available for use in remote locations and low-cost economies.
Keywords: Current track velocities, Gymea Bay, surface sediments, trace elements.
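The screening step described above classifies each measured concentration against the Effect Range Low (ERL) and Effect Range Median (ERM) guideline values. A minimal sketch follows; the threshold numbers are the widely cited marine sediment-quality guidelines and are included only as illustrative defaults, not values taken from this study.

```python
# Sediment-quality screening against ERL/ERM guideline values (mg/kg dry
# weight). Thresholds are commonly cited marine guidelines, used here as
# illustrative defaults rather than the study's own reference data.
ERL = {"Cr": 81.0, "As": 8.2, "Cu": 34.0, "Zn": 150.0, "Pb": 46.7}
ERM = {"Cr": 370.0, "As": 70.0, "Cu": 270.0, "Zn": 410.0, "Pb": 218.0}

def screen(metal: str, conc: float) -> str:
    """Classify a concentration as below ERL, between ERL and ERM, or above ERM."""
    if conc < ERL[metal]:
        return "low (adverse effects rarely expected)"
    if conc < ERM[metal]:
        return "moderate (adverse effects occasionally expected)"
    return "high (adverse effects frequently expected)"

print(screen("Pb", 30.0))  # a hypothetical surface-sediment value
```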
87 ELISA Based hTSH Assessment Using Two Sensitive and Specific Anti-hTSH Polyclonal Antibodies
Authors: Maysam Mard-Soltani, Mohamad Javad Rasaee, Saeed Khalili, Abdol Karim Sheikhi, Mehdi Hedayati
Abstract:
Production of specific antibody responses against hTSH is a cumbersome process due to the high identity between hTSH and the other members of the glycoprotein hormone family (FSH, LH and HCG), and between human hTSH and that of the host animals used for antibody production. Therefore, two polyclonal antibodies were purified against two recombinant proteins, and four possible ELISA tests were designed based on these antibodies. These ELISA tests were checked against hTSH and the other glycoprotein hormones, and their sensitivity and specificity were assessed. Bioinformatics tools were used to analyze the immunological properties. After immunogenic region selection from the hTSH protein, the C-terminal of the hTSH β subunit was selected and applied. Two recombinant genes containing these fragments (the first: two repeats of the hTSH β C-terminal; the second: tetanus toxin fragment plus the hTSH β C-terminal) were designed and sub-cloned into the pET32a expression vector. Standard methods were used for protein expression, purification, and verification. Thereafter, white New Zealand rabbits were immunized, and their sera were used for antibody titration, purification and characterization. Then, four ELISA tests based on the two antibodies were employed to assess hTSH and the other glycoprotein hormones, and the results were compared with standard amounts. The obtained results indicated that the desired antigens were successfully designed, sub-cloned, expressed, confirmed and used for in vivo immunization. The raised antibodies were capable of specific and sensitive hTSH detection, while cross-reactivity with the other members of the glycoprotein hormone family was minimal. Among the four designed tests, the test in which the antibody against the first protein was used as the capture antibody and the antibody against the second protein as the detector antibody did not show any hook effect up to 50 mIU/L.
Both proteins were able to induce highly sensitive and specific antibody responses against hTSH, and one combination of these antibodies showed the highest sensitivity and specificity in hTSH detection.
Keywords: hTSH, bioinformatics, protein expression, cross reactivity.
86 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels larger than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop; once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
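Murray's law, used above at every bifurcation, states that the cube of the parent vessel radius equals the sum of the cubes of the daughter radii. A minimal sketch of applying this rule when assigning daughter radii is shown below; the symmetric split and the `asymmetry` parameter are illustrative assumptions, not the authors' morphometric data.

```python
# Murray's law at a bifurcation: r_parent**3 == r1**3 + r2**3.
# A sketch of radius assignment during tree reconstruction; `asymmetry`
# (the ratio r2/r1) is a hypothetical knob, not from the study.
def murray_daughters(r_parent: float, asymmetry: float = 1.0):
    """Split a parent vessel of radius r_parent into two daughters whose
    cubed radii sum to r_parent**3."""
    r1 = r_parent / (1.0 + asymmetry ** 3) ** (1.0 / 3.0)
    r2 = asymmetry * r1
    return r1, r2

r1, r2 = murray_daughters(1.0)               # symmetric bifurcation
assert abs(r1 ** 3 + r2 ** 3 - 1.0) < 1e-12  # Murray's law holds
print(round(r1, 4))  # each daughter ≈ 0.7937, i.e. 2**(-1/3)
```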
Keywords: Blood flow, Morphometric data, Vascular tree, Strahler ordering system.
85 Soil Quality State and Trends in New Zealand’s Largest City after 15 Years
Authors: Fiona Curran-Cournane
Abstract:
Soil quality monitoring is a science-based soil management tool that assesses soil ecosystem health. A soil monitoring program in Auckland, New Zealand’s largest city, extends from 1995 to the present. The first objective of this study was to determine changes in soil parameters (basic soil properties and heavy metals) that were assessed on rural land in 1995-2000 and re-assessed in 2008-2012. The second objective was to determine differences in soil parameters across various land uses, including native bush, rural (horticulture, pasture and plantation forestry) and urban land uses, using soil data collected in more recent years (2009-2013). Across rural land, mean concentrations of Olsen P had significantly increased in the second sampling period, and Olsen P was identified as the indicator of most concern, followed by soil macroporosity, particularly for horticultural and pastoral land. Mean concentrations of Cd were also greatest for pastoral and horticultural land, and a positive correlation existed between these two parameters, which highlights the importance of analysing basic soil parameters in conjunction with heavy metals. In contrast, mean concentrations of As, Cr, Pb, Ni and Zn were greatest for urban sites. Native bush sites had the lowest concentrations of heavy metals and were used to calculate a ‘pollution index’ (PI). The mean PI was classified as high (PI > 3) for Cd and Ni and moderate for Pb, Zn, Cr, Cu, As and Hg, indicating high levels of heavy metal pollution across both rural and urban soils. From a land use perspective, the mean ‘integrated pollution index’ was highest for urban sites at 2.9, followed by pasture, horticulture and plantation forests at 2.7, 2.6 and 0.9, respectively. It is recommended that soil sampling continue over time, because a longer record will allow further identification of where soil problems exist and where resources need to be targeted in the future.
Findings from this study will also inform policy and science direction in regional councils.
Keywords: Heavy metals, Pollution Index, Rural and Urban land use.
84 Impact of Fluid Flow Patterns on Metastable Zone Width of Borax in Dual Radial Impeller Crystallizer at Different Impeller Spacings
Authors: A. Čelan, M. Ćosić, D. Rušić, N. Kuzmanić
Abstract:
Conducting crystallization in an agitated vessel requires a proper selection of mixing parameters that would result in the production of crystals of specific properties. In dual impeller systems, which are characterized by more complex hydrodynamics due to possible fluid flow interactions, revealing a clear link between mixing parameters and crystallization kinetics is still an open issue. The aim of this work is to establish this connection by investigating how fluid flow patterns generated by two impellers mounted on the same shaft reflect on the metastable zone width of borax decahydrate, one of the most important parameters of the crystallization process. The investigation was carried out in a 15 dm³ bench-scale batch cooling crystallizer with an aspect ratio (H/T) of 1.3. Two radial straight-blade turbines (4-SBT) were used for agitation, and experiments were conducted at different impeller spacings at the state of complete suspension. During unseeded batch cooling crystallization, the solution temperature and supersaturation were continuously monitored, which enabled determination of the metastable zone width. The hydrodynamic conditions achieved in the vessel at the different impeller spacings investigated were analyzed in detail: first by measuring the mixing time required to attain the desired level of homogeneity, and second by both photographing the fluid flow patterns generated in the described dual impeller system and simulating them with VisiMix Turbulent software; a comparison of these two visualization methods was also performed. The experimentally obtained results showed that the metastable zone width is definitely affected by the hydrodynamics in the crystallizer.
This means that this crystallization parameter can be controlled not only by adjusting the saturation temperature or cooling rate, as is usually done, but also by choosing a suitable impeller spacing that results in the formation of crystals of the desired size distribution.
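The metastable zone width monitored above is commonly expressed as the maximum allowable undercooling: the difference between the saturation temperature and the temperature at which nucleation is first detected during cooling. A trivial sketch with made-up readings follows; the study derived these values from continuous temperature and supersaturation logs, which are not reproduced here.

```python
# Metastable zone width (MZW) as maximum undercooling, dT_max.
# Temperatures below are hypothetical readings in °C, not study data.
def metastable_zone_width(t_saturation: float, t_nucleation: float) -> float:
    """MZW = saturation temperature minus first-nucleation temperature."""
    return t_saturation - t_nucleation

print(metastable_zone_width(30.0, 22.5))  # → 7.5 (°C of undercooling)
```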
Keywords: Dual impeller crystallizer, fluid flow pattern, metastable zone width, mixing time, radial impeller.
83 The Effect of Discontinued Water Spray Cooling on the Heat Transfer Coefficient
Authors: J. Hrabovský, M. Chabičovský, J. Horský
Abstract:
Water spray cooling is a technique typically used in heat treatment and other metallurgical processes where controlled temperature regimes are required. Water spray cooling is used in static (without movement) or dynamic (with movement of the steel plate) regimes. The static regime is notable for the fixed positions of the hot steel plate and the spray nozzle; it is typical for quenching systems focused on heat treatment of steel plate. The dynamic regime, in contrast, combines a static section cooling system with a moving steel plate and is used in rolling and finishing mills. The fixed position of the cooling sections with nozzles and the movement of the steel plate produce a nonhomogeneous water distribution on the steel plate. The length of the cooling sections and the placement of the water nozzles, in combination with this nonhomogeneity, lead to discontinued or interrupted cooling conditions. The impact of the static and dynamic regimes on cooling intensity and the heat transfer coefficient during the cooling of steel plates is an important issue. Heat treatment of steel is accompanied by oxide scale growth, and the oxide scale layers can significantly modify the cooling properties and intensity. The combination of static and dynamic (section) regimes with a variable thickness of the oxide scale layer on the steel surface impacts the final cooling intensity. The influence of the oxide scale layers under different cooling regimes was studied using experimental measurements and numerical analysis. The experimental measurements compared both types of cooling regimes as well as the cooling of scale-free and oxidized surfaces. A numerical analysis was prepared to simulate the cooling process with different section conditions and samples with different oxide scale layers.
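The heat transfer coefficient studied above is usually recovered by inverse heat-conduction analysis. As a far simpler illustration of the quantity itself (not the study's method), a lumped-capacitance estimate can be taken from a cooling curve, assuming a spatially uniform plate temperature; all numbers below are illustrative.

```python
# Average heat transfer coefficient from a cooling curve via the
# lumped-capacitance model: h = -(m*c/(A*t)) * ln((T - T_inf)/(T0 - T_inf)).
# A textbook sketch only; parameters are hypothetical, not study data.
from math import log

def htc_lumped(m: float, c: float, area: float, t: float,
               T0: float, T: float, T_inf: float) -> float:
    """h in W/(m^2·K) for mass m (kg), specific heat c (J/(kg·K)),
    surface area (m^2), cooling from T0 to T in time t (s), coolant T_inf."""
    return -m * c / (area * t) * log((T - T_inf) / (T0 - T_inf))

# hypothetical plate: 2 kg, 470 J/(kg·K), 0.05 m^2, 900 → 400 °C in 10 s
print(round(htc_lumped(2.0, 470.0, 0.05, 10.0, 900.0, 400.0, 20.0)))
```

This simple model breaks down for thick plates or intense sprays (high Biot number), which is precisely why inverse numerical analysis is used in practice.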
Keywords: Heat transfer coefficient, numerical analysis, oxide layer, spray cooling.
82 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity, although there is still much to learn about it. Investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Forty-five rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients versus healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
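The group comparison described above rests on a two-sample t-test between patient and control values. A stdlib-only sketch of Welch's t statistic (which does not assume equal variances) is given below with toy connectivity scores; the study applied such tests voxel-wise to ICA component maps, which is not reproduced here.

```python
# Welch's two-sample t statistic, sketched with toy per-subject
# connectivity scores (hypothetical numbers, not study data).
from statistics import mean, variance

def welch_t(a: list, b: list) -> float:
    """Welch's t statistic for two independent samples of unequal variance."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

patients = [0.42, 0.51, 0.47, 0.55, 0.49]
controls = [0.31, 0.28, 0.35, 0.30, 0.33]
print(round(welch_t(patients, controls), 2))  # large t ⇒ group difference
```

In practice the statistic would be converted to a p-value with the Welch-Satterthwaite degrees of freedom and corrected for multiple comparisons across voxels.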
Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.
81 Design, Fabrication and Evaluation of MR Damper
Authors: A. Ashfak, A. Saheed, K. K. Abdul Rasheed, J. Abdul Jaleel
Abstract:
This paper presents the design, fabrication and evaluation of a magneto-rheological damper. Semi-active control devices have received significant attention in recent years because they offer the adaptability of active control devices without requiring the associated large power sources. Magneto-rheological (MR) dampers are semi-active control devices that use MR fluids to produce controllable damping. They potentially offer highly reliable operation and can be viewed as fail-safe in that they become passive dampers if the control hardware malfunctions. The advantages of MR dampers over conventional dampers are that they are simple in construction, offer a compromise between high-frequency isolation and natural-frequency isolation, provide semi-active control, use very little power, have a very quick response, have few moving parts and relaxed tolerances, and interface directly with electronics. Magneto-rheological fluids are controllable fluids belonging to the class of active materials that have the unique ability to change their dynamic yield stress when acted upon by a magnetic field, while maintaining a relatively constant viscosity. This property can be utilized in an MR damper, where the damping force is changed by magnetically changing the rheological properties of the fluid. MR fluids have a higher dynamic yield stress than electro-rheological (ER) fluids and a broader operational temperature range. The objective of this paper was to study the application of an MR damper to vibration control, design a vibration damper using MR fluids, and test and evaluate its performance. The rheology and the theory behind MR fluids and their use in vibration control were studied. Then an MR vibration damper suitable for a vehicle suspension was designed and fabricated using the MR fluid. The MR damper was tested using a dynamic test rig, and the results were obtained in the form of force vs. velocity and force vs. displacement plots.
The results were encouraging and greatly inspire further research on the topic.
Keywords: Magneto-rheological fluid, MR damper, semi-active controller, electro-rheological fluid.
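The force-velocity behavior measured above is often idealized with the Bingham-plastic model: a field-dependent yield force plus viscous damping. The sketch below uses this common textbook model with illustrative parameters; it is not the fabricated damper's measured characteristic.

```python
# Bingham-plastic idealization of an MR damper: F = c*v + F_y*sign(v).
# The yield force F_y grows with coil current; c is the viscous
# coefficient. Parameter values here are hypothetical.
from math import copysign

def mr_damper_force(v: float, c: float = 800.0, f_yield: float = 350.0) -> float:
    """Damping force (N) at piston velocity v (m/s)."""
    if v == 0.0:
        return 0.0
    return c * v + copysign(f_yield, v)

print(mr_damper_force(0.2))   # → 510.0 N at 0.2 m/s
print(mr_damper_force(-0.2))  # → -510.0 N, symmetric on the reverse stroke
```

The step at zero velocity is what produces the characteristic hysteretic force-displacement loops seen in MR damper test plots.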
80 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction
Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai
Abstract:
Distant-talking voice-based HCI systems suffer from performance degradation due to the mismatch between the acoustic speech at runtime and the acoustic model used in training. The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, which affects the speech dynamics inside the room before the signal reaches the microphones. Moreover, as the speech signal is reflected, its acoustical characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing instantaneous voice power variation and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, the model statistics are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at farther distances. We experimented using real data recorded inside two rooms. The experimental evaluation shows that voice recognition performance using our method is more robust to changes in distance than the conventional approach. In our experiment, under the most acoustically challenging environment (i.e., Room 2 at 2.5 meters), our method achieved a 24.2% improvement in recognition performance over the best-performing conventional method.
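The distance-prediction step above can be caricatured as maximum-likelihood selection among pre-trained, distance-specific power models. The sketch below collapses the paper's distance-sensitive GMMs to one 1-D Gaussian per distance, with entirely hypothetical per-distance power statistics, purely to show the selection mechanism.

```python
# Pick the pre-trained distance whose power model best explains the
# observed speech power. A drastic simplification of distance-sensitive
# GMMs: one 1-D Gaussian per distance; all statistics are hypothetical.
from math import exp, pi, sqrt

def gaussian_pdf(x: float, mu: float, var: float) -> float:
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

# hypothetical per-distance power statistics (dB): {distance_m: (mean, variance)}
priors = {0.5: (-20.0, 4.0), 1.5: (-28.0, 5.0), 2.5: (-34.0, 6.0)}

def optimal_distance(observed_power: float) -> float:
    """Return the distance whose model gives the observation max likelihood."""
    return max(priors, key=lambda d: gaussian_pdf(observed_power, *priors[d]))

print(optimal_distance(-33.0))  # → 2.5
```

In the paper, the statistics associated with the selected distance would then be used to post-condition the frozen generic acoustic model.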
Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.
79 Ideal Disinfectant Characteristics According to Data in Published Literature
Authors: Saimir Heta, Ilma Robo, Rialda Xhizdari, Kers Kapaj
Abstract:
The stability of an ideal disinfectant should remain constant regardless of changes in the atmospheric conditions of the environment where it is kept. If conditions such as temperature or humidity change, corresponding changes may also be needed in the holding materials, such as plastic or glass bottles, with the aim of protecting the disinfectant from, for example, excessive ambient lighting, which can translate into an increase in the temperature of the disinfectant as a fluid. In this study, an attempt was made to find the most recently published data about the best possible combination of disinfectants indicated for use after dental procedures. This purpose was realized by comparing the basic literature studied by dentistry students with the most recently published data on this topic. Each disinfectant is represented by a number, the disinfectant constant; different factors can increase or reduce the variables involved, but the resulting value remains specific to each disinfectant. Changes in the atmospheric conditions of the environment where the disinfectant is deposited and stored are known to affect its stability as a fluid; this fact is known and even cited in the leaflets accompanying the manufactured boxes of disinfectants. This guidance, given in the form of advice, is based not only on the preservation of the disinfectant but also on its application, in order to achieve the desired clinical result. Aldehydes have the highest constant among the types of disinfectants, followed by acids. The lowest value of the constant belongs to the class of glycols, preceded by the halogens, a class with several representatives used in disinfection. The phenol and acid classes have almost the same ranges of constants.
If the goal were to find the ideal disinfectant among the large variety of disinfectants produced, a good starting point would be a fixed, unchanging element on the basis of which the properties of different disinfectants could be compared. The results of this study highlight precisely this role of the specific constant of each disinfectant. Finding an ideal disinfectant, like finding an ideal medication or antibiotic, is an ongoing but unattainable goal.
Keywords: Different disinfectants, phenols, aldehydes, specific constant, dental procedures.
78 Comparative Parametric Analysis on the Dynamic Response of Fibre Composite Beams with Debonding
Authors: Indunil Jayatilake, Warna Karunasena
Abstract:
Fiber reinforced polymer (FRP) composites enjoy an array of applications ranging from aerospace, marine and military to automobile, recreational and civil industries due to their outstanding properties. A structural glass fiber reinforced polymer (GFRP) composite sandwich panel made from E-glass fiber skins and a modified phenolic core has been manufactured in Australia for civil engineering applications. One of the major damage mechanisms in FRP composites is skin-core debonding. The presence of debonding is of great concern not only because it severely affects the strength but also because it modifies the dynamic characteristics of the structure, including its natural frequencies and vibration modes. This paper deals with the investigation of the dynamic characteristics of a GFRP beam with single and multiple debonding by finite element based numerical simulations and analyses using the STRAND7 finite element (FE) software package. Three-dimensional computer models were developed and numerical simulations performed to assess the dynamic behavior. The FE model developed has been validated against published experimental, analytical and numerical results for fully bonded as well as debonded beams. A comparative analysis is carried out based on a comprehensive parametric investigation. It is observed that the reduction in natural frequency is more affected by a single debonding than by equally sized multiple debonding regions located symmetrically about the single debonding position. Thus, a large single debonding area leads to more damage, in terms of natural frequency reduction, than isolated small debonding zones of equivalent total area in the GFRP beam. Furthermore, the extent of the natural frequency shifts appears mode-dependent and does not show a monotonic trend of increasing with mode number.
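The frequency shifts discussed above are measured against the intact-beam baseline. As a hedged illustration of that baseline (classical beam theory, not the authors' 3D STRAND7 sandwich model), the natural frequencies of an intact, simply supported Euler-Bernoulli beam can be sketched; the material and section values below are made-up GFRP-like numbers.

```python
# Baseline natural frequencies of an intact, simply supported
# Euler-Bernoulli beam: omega_n = (n*pi/L)**2 * sqrt(E*I/(rho*A)).
# E, I, rho, A, L below are hypothetical GFRP-like values.
from math import pi, sqrt

def natural_frequency(n: int, E: float, I: float, rho: float,
                      A: float, L: float) -> float:
    """Mode-n frequency in Hz for a simply supported beam."""
    omega = (n * pi / L) ** 2 * sqrt(E * I / (rho * A))
    return omega / (2 * pi)

# hypothetical beam: E = 20 GPa, I = 1e-6 m^4, rho = 1500 kg/m^3,
# A = 0.003 m^2, L = 1 m; frequencies scale as n**2
for mode in (1, 2, 3):
    print(mode, round(natural_frequency(mode, 20e9, 1e-6, 1500.0, 0.003, 1.0), 1))
```

Debonding reduces the local stiffness term (EI), which is why its presence lowers these frequencies in a mode-dependent way.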
Keywords: Debonding, dynamic response, finite element modelling, FRP beams.
77 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse range of tasks, such as urban planning, military applications, glacier mapping and disaster management. Expressing the Earth's surface as a mathematical model would require an infinite number of point measurements; since this is impossible, points at regular intervals are measured to characterize the surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have seen widespread use in the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications; a 3D point cloud is created with LiDAR technology by obtaining numerous point measurements. Recently, however, developments in image mapping methods and the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition have increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors, such as the data collection method, the distribution of elevation points, the point density, the properties of the surface and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the kriging interpolation method.
The results show that the random data reduction method can reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
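Random reduction to fixed density levels, as used above, amounts to drawing a uniform random subset of the cloud at each target fraction. A stdlib sketch follows; the (x, y, z) tuples are synthetic stand-ins for the image-based point cloud, and the fixed seed is an assumption added for reproducibility.

```python
# Random data reduction of a point cloud to the density levels used in
# the study (75, 50, 25 and 5%). Points are synthetic stand-ins.
import random

def reduce_cloud(points: list, fraction: float, seed: int = 42) -> list:
    """Return a uniform random subset keeping `fraction` of the points."""
    rng = random.Random(seed)        # seeded so runs are reproducible
    k = round(len(points) * fraction)
    return rng.sample(points, k)

cloud = [(random.random(), random.random(), random.random()) for _ in range(1000)]
for frac in (0.75, 0.50, 0.25, 0.05):
    print(frac, len(reduce_cloud(cloud, frac)))
```

Each subset would then be interpolated (e.g., by kriging) and differenced against the full-density DTM to quantify the quality loss.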
Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
76 Contextual SenSe Model: Word Sense Disambiguation Using Sense and Sense Value of Context Surrounding the Target
Authors: Vishal Raj, Noorhan Abbas
Abstract:
Ambiguity in NLP (Natural Language Processing) refers to the ability of a word, phrase, sentence, or text to have multiple meanings. This results in various kinds of ambiguity, such as lexical, syntactic, semantic, anaphoric and referential. This study focuses mainly on resolving lexical ambiguity. Word Sense Disambiguation (WSD) is an NLP technique that aims to resolve lexical ambiguity by determining the correct meaning of a word within a given context. Most WSD solutions rely on words for training and testing, but we have used lemma and Part of Speech (POS) tokens of words: the lemma adds generality, and the POS adds word properties to the token. We have designed a method to create an affinity matrix that calculates the affinity between any pair of lemma_POS tokens (a token in which the lemma and POS of a word are joined by an underscore) in a given training set. Additionally, we have devised an algorithm to create sense clusters of tokens from the affinity matrix under the hierarchy of the lemma's POS. Furthermore, three different mechanisms to predict the sense of a target word from the affinity/similarity values are devised. Each contextual token contributes some value to each candidate sense, and whichever sense accumulates the highest value becomes the sense of the target word. Contextual tokens thus play a key role in creating sense clusters and predicting the sense of the target word; hence, the model is named the Contextual SenSe Model (CSM). CSM is notably simple and easy to explain, in contrast to contemporary deep learning models, which are intricate, time-intensive, and hard to interpret. CSM is trained on the SemCor training data and evaluated on the SemEval test dataset. The results indicate that, despite the naivety of the method, it achieves promising results when compared to the Most Frequent Sense (MFS) model.
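To illustrate the idea of affinity-based sense prediction, here is a deliberately simplified sketch. It is illustrative only: the real CSM builds its affinity matrix and sense clusters from SemCor, while the tokens, clusters and plain co-occurrence counting below are invented for the example:

```python
from collections import defaultdict
from itertools import combinations

def build_affinity(sentences):
    """Affinity between lemma_POS token pairs: co-occurrence counts within a sentence."""
    affinity = defaultdict(float)
    for tokens in sentences:
        for a, b in combinations(set(tokens), 2):
            affinity[tuple(sorted((a, b)))] += 1.0
    return affinity

def predict_sense(context, sense_clusters, affinity):
    """Score each candidate sense by the summed affinity between its cluster
    tokens and the context tokens; the highest-scoring sense wins."""
    scores = {
        sense: sum(affinity.get(tuple(sorted((c, t))), 0.0)
                   for c in context for t in cluster)
        for sense, cluster in sense_clusters.items()
    }
    return max(scores, key=scores.get), scores

# Invented toy corpus and clusters for the ambiguous lemma "bank_NOUN".
sentences = [
    ["deposit_NOUN", "bank_NOUN", "money_NOUN"],
    ["bank_NOUN", "loan_NOUN", "money_NOUN"],
    ["river_NOUN", "bank_NOUN", "water_NOUN"],
]
affinity = build_affinity(sentences)
clusters = {"bank%finance": ["money_NOUN", "loan_NOUN"],
            "bank%river": ["water_NOUN", "river_NOUN"]}
best, scores = predict_sense(["deposit_NOUN", "money_NOUN"], clusters, affinity)
```

With a finance-flavoured context, the finance cluster accumulates the larger affinity total.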
Keywords: Word Sense Disambiguation, WSD, Contextual SenSe Model, Most Frequent Sense, part of speech, POS, Natural Language Processing, NLP, OOV, out of vocabulary, ELMo, Embeddings from Language Model, BERT, Bidirectional Encoder Representations from Transformers, Word2Vec, lemma_POS, Algorithm.
75 Using GIS and Map Data for the Analysis of the Relationship between Soil and Groundwater Quality at Saline Soil Area of Kham Sakaesaeng District, Nakhon Ratchasima, Thailand
Authors: W. Thongwat, B. Terakulsatit
Abstract:
The study area is Kham Sakaesaeng District in Nakhon Ratchasima Province, in the southern section of Northeastern Thailand, located in the Lower Khorat-Ubol Basin. This region is one of the saline soil areas, located on a dry plateau that regularly experiences alternating periods of flood and drought. Drought in the summer season, in particular, causes the region's major saline soil and saline water problems. Dryland salting is generally caused by salting on irrigated land and by an excess of water that raises the water table in the aquifer. The purpose of this study is to determine the relationship between the physical and chemical properties of the soil and the groundwater. Soil and groundwater samples were collected in both the rainy and summer seasons, and their pH, electrical conductivity (EC), total dissolved solids (TDS), chloride content and salinity were investigated. The samples show pH slightly below 7, EC of 186 to 8,156 µS/cm and 960 to 10,712 µS/cm, TDS of 93 to 3,940 ppm and 480 to 5,356 ppm, chloride content of 45.58 to 4,177,015 mg/l and 227.90 to 9,216,736 mg/l, and salinity of 0.07 to 4.82 ppt and 0.24 to 14.46 ppt in the rainy and summer seasons, respectively. The distributions of chloride content and salinity were interpolated and displayed as maps for each season using the ArcMap 10.3 program. The saline soil and brined groundwater in the study area were related to the low-lying topography, drought, and salt-source exposure; in particular, the Rock Salt Member of the Maha Sarakham Formation is exposed or lies near the ground surface in this study area. During the rainy season, salt is eroded or weathered from the salt-source rock formation and transported by surface flow or leached into the groundwater. In the dry season, the ground surface dries, and salt precipitates from the brined surface water or rises from the brined groundwater, increasing the chloride and salinity content of the ground surface and groundwater.
Keywords: Environmental geology, soil salinity, geochemistry, groundwater hydrology.
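The study's maps were produced in ArcMap with GIS interpolation; the underlying idea of estimating values between sample points can be sketched with simple inverse-distance weighting, a cruder cousin of the kriging family. The coordinates and chloride values below are illustrative, not the study's data:

```python
import numpy as np

def idw(xy_samples, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)          # eps guards against division by zero
    return (w @ values) / w.sum(axis=1)

# Illustrative chloride samples (x, y in metres; values in mg/l).
xy = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
cl = np.array([50.0, 200.0, 120.0, 400.0])
grid = np.array([[50.0, 50.0], [10.0, 10.0]])   # two query points
est = idw(xy, cl, grid)
```

At the equidistant centre point the estimate collapses to the sample mean; near a sample it is pulled toward that sample's value.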
74 Simulation of Concrete Wall Subjected to Airblast by Developing an Elastoplastic Spring Model in Modelica Modelling Language
Authors: Leo Laine, Morgan Johansson
Abstract:
To meet civilization's future needs for safe living and a low environmental footprint, the engineers designing the complex systems of tomorrow will need efficient ways to model and optimize these systems for their intended purpose. For example, a civil defence shelter and its subsystem components need to withstand, e.g., airblast and ground shock from a specified design-level explosion detonating at a certain distance from the structure. In addition, the complex civil defence shelter needs functioning air filter systems to protect from toxic gases and provide clean air, and clean water, heat, and electricity also need to be available through shock- and vibration-safe fixtures and connections. Similar complex building systems can be found in any concentrated living or office area. In this paper, the authors use the multidomain modelling language Modelica to model a concrete wall as a single degree of freedom (SDOF) system with elastoplastic properties and an implemented option of plastic hardening. The elastoplastic model was developed and implemented in the open source tool OpenModelica. The simulation model was tested on a case with a transient equivalent reflected pressure time history representing an airblast from 100 kg TNT detonating 15 meters from the wall; the concrete wall is approximated as a concrete strip of 1.0 m width. This load represents a realistic threat to any building in a city-like area. The OpenModelica results were compared with an Excel implementation of a SDOF model with an elastic-plastic spring using a simple fixed-timestep central difference solver. The structural displacement results agreed very well with each other in terms of plastic displacement magnitude, elastic oscillation displacement, and response times.
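The fixed-timestep central-difference SDOF scheme mentioned above can be sketched in a few lines. This is a minimal sketch of an elastic-perfectly-plastic spring (hardening omitted); the mass, stiffness, yield force and triangular pulse are placeholder values, not the paper's 100 kg TNT / 15 m load case:

```python
import numpy as np

def sdof_elastoplastic(m, k, r_yield, load, dt, n_steps):
    """Explicit central-difference solution of m*u'' + R(u) = F(t), where the
    spring force R follows an elastic-perfectly-plastic law (no hardening).
    Starts from rest: u[0] = u[1] = 0."""
    u = np.zeros(n_steps)
    r = 0.0  # current spring force, updated incrementally
    for i in range(1, n_steps - 1):
        u[i + 1] = 2.0 * u[i] - u[i - 1] + dt * dt * (load(i * dt) - r) / m
        r += k * (u[i + 1] - u[i])          # elastic trial increment
        r = max(-r_yield, min(r, r_yield))  # clip to the yield surface
    return u

# Placeholder triangular reflected-pressure pulse on a wall strip
# (illustrative numbers, NOT the paper's load case).
m, k, r_y = 500.0, 2.0e6, 4.0e4        # kg, N/m, N
peak, t_dur = 3.0e5, 0.01              # N, s
load = lambda t: peak * max(0.0, 1.0 - t / t_dur)
u = sdof_elastoplastic(m, k, r_y, load, dt=1.0e-5, n_steps=10_000)
```

For this pulse the response exceeds the elastic limit r_y/k, oscillates elastically about the residual plastic displacement, and remains bounded.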
Keywords: Airblast from explosives, elastoplastic spring model, Modelica modelling language, SDOF, structural response of concrete structure.
73 Development of a Robot Assisted Centrifugal Casting Machine for Manufacturing Multi-Layer Journal Bearing and High-Tech Machine Components
Authors: Mohammad Syed Ali Molla, Mohammed Azim, Mohammad Esharuzzaman
Abstract:
A centrifugal-casting machine is used in manufacturing special machine components such as the multi-layer journal bearings used in all internal combustion engines, steam and gas turbines, and aircraft turbo-engines, where isotropic properties and high precision are desired. Moreover, this machine can be used in manufacturing thin-walled high-tech machine components such as cylinder liners and piston rings of IC engines, and other machine parts such as sleeves and bushes. Heavy-duty machine components such as railway wheels can also be prepared by centrifugal casting. Considerable technological development of the casting process is required for the production of good cast machine bodies and machine parts. Defects such as blowholes, surface roughness and chilled surfaces are usually found in sand-cast machine parts, but these can be avoided by a centrifugal casting machine using a rotating metallic die. Moreover, die rotation, die temperature control, and good pouring practice contribute to the quality of the casting, because the soundness of a casting depends in large part upon how the metal enters the mold or die and solidifies. Poor pouring practice leads to a variety of casting defects, such as temperature loss, low-quality casting, excessive turbulence and over-pouring. Besides this, handling molten metal is unsafe and dangerous for the workers. In order to overcome all these problems, an automatic pouring device is needed. In this research work, a robot-assisted pouring device and a centrifugal casting machine are designed, developed, constructed and tested experimentally, and found to work satisfactorily. The robot-assisted pouring device is further modified and developed for use in the actual metal casting process. Many settings and tests are required to control the system, and ultimately it can be used in the automation of the centrifugal casting machine to produce high-tech machine parts with the desired precision.
Keywords: Casting, cylinder liners, journal bearing, robot.
72 Vibroacoustic Modulation of Wideband Vibrations and Its Possible Application for Windmill Blade Diagnostics
Authors: Abdullah Alnutayfat, Alexander Sutin, Dong Liu
Abstract:
Wind turbines have become one of the most popular energy production methods. However, blade failures and maintenance costs have become significant issues in the wind power industry, so it is essential to detect initial blade defects to avoid the collapse of the blades and the structure. This paper applies the modulation of high-frequency blade vibrations by low-frequency blade rotation, which is close to the known Vibro-Acoustic Modulation (VAM) method. The high-frequency wideband blade vibration is produced by the interaction of the blade surfaces with environmental air turbulence, and the low-frequency modulation is produced by alternating bending stress due to gravity. The low-frequency load of rotating wind turbine blades ranges between 0.2-0.4 Hz and can reach up to 2 Hz in strong wind. The main difference between this study and previous work on VAM methods is the use of a wideband vibration signal from the blade's natural vibrations. Different features of the VAM are considered using a simple breathing-crack model, which treats the blade as a simple mechanical oscillator whose parameters vary with the low-frequency blade rotation: during operation, the internal stress caused by the weight of the blade modifies the crack's elasticity and damping. A laboratory experiment using steel samples demonstrates the possibility of VAM using a wideband probe noise signal. A small-amplitude cyclic load was applied to the damaged test sample as a pump wave, and a small transducer generated a wideband probe wave. The received signal was demodulated using the Detection of Envelope Modulation on Noise (DEMON) approach, and the experimental results were compared with the modulation index (MI) technique for a harmonic pump wave. The wideband and traditional VAM methods demonstrated similar sensitivity for early detection of invisible cracks. Importantly, employing a wideband probe signal with the DEMON approach speeds up and simplifies testing, since it eliminates the need to repeat tests for various harmonic probe frequencies and to adjust the probe frequency.
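The DEMON idea, recovering a low-frequency modulation from the envelope of a wideband signal, can be sketched as follows. The parameters are illustrative: a 2 Hz pump modulating white noise stands in for the blade signals, and the envelope is obtained via an FFT-based analytic signal:

```python
import numpy as np

def demon_spectrum(x, fs):
    """DEMON: spectrum of the signal envelope, revealing low-frequency
    amplitude modulation of a wideband carrier."""
    n = len(x)
    # Analytic signal via FFT (a NumPy stand-in for scipy.signal.hilbert).
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    envelope = np.abs(np.fft.ifft(np.fft.fft(x) * h))
    envelope -= envelope.mean()           # drop the DC component
    return np.fft.rfftfreq(n, 1.0 / fs), np.abs(np.fft.rfft(envelope))

# Wideband probe noise amplitude-modulated at a 2 Hz pump rate
# (standing in for blade bending at strong wind; values illustrative).
fs, f_pump = 2000, 2.0
t = np.arange(20_000) / fs                       # 10 s record
carrier = np.random.default_rng(1).normal(size=t.size)
x = (1.0 + 0.5 * np.sin(2 * np.pi * f_pump * t)) * carrier
freqs, mag = demon_spectrum(x, fs)
f_detected = freqs[1:200][np.argmax(mag[1:200])]  # strongest line below 20 Hz
```

The envelope spectrum shows a clear line at the pump frequency even though the carrier itself is broadband noise.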
Keywords: Damage detection, turbine blades, Vibro-Acoustic Structural Health Monitoring, SHM, Detection of Envelope Modulation on Noise.
71 The Computational Psycholinguistic Situational-Fuzzy Self-Controlled Brain and Mind System under Uncertainty
Authors: Ben Khayut, Lina Fabri, Maya Avikhana
Abstract:
Modern Artificial Narrow Intelligence (ANI) models cannot: a) function independently, situationally, and continuously without human intelligence for retraining and reprogramming the ANI models, and b) think, understand, be conscious, and cognize under uncertainty and changing environmental objects. To eliminate these shortcomings and build a new generation of Artificial Intelligence systems, the paper proposes a conception, model, and method of a Computational Psycholinguistic Cognitive Situational-Fuzzy Self-Controlled Brain and Mind System (CPCSFSCBMSUU). This system uses a neural network as its computational memory and activates functions of perception, identification of real objects, fuzzy situational control, and the forming of images of these objects. These images and objects are used for modeling their psychological, linguistic, cognitive, and neural values of properties and features, the meanings of which are identified, interpreted, generated, and formed taking into account the identified subject area, using the data, information, and knowledge accumulated in the Memory. The functioning of the CPCSFSCBMSUU is carried out by its subsystems of fuzzy situational control of all processes, computational perception, identification of reactions and actions, psycholinguistic cognitive fuzzy logical inference, decision making, reasoning, systems thinking, planning, awareness, consciousness, cognition, intuition, and wisdom. In doing so, the system analyses and processes psycholinguistic, subject, visual, signal, sound and other objects, accumulates and uses the data, information and knowledge of the Memory, and communicates and interacts with other computing systems, robots and humans in order to solve joint tasks. To investigate the functional processes of the proposed system, the principles of situational control, fuzzy logic, psycholinguistics, informatics, and modern possibilities of data science were applied.
The proposed self-controlled brain and mind system is oriented toward use as a plug-in in multilingual subject applications.
Keywords: Computational psycholinguistic cognitive brain and mind system, situational fuzzy control, uncertainty, AI.
70 Urban Accessibility of Historical Cities: The Venetian Case Study
Authors: Valeria Tatano, Francesca Guidolin, Francesca Peltrera
Abstract:
The preservation of historical Italian heritage, at the urban and architectural scale, has to reconcile the restrictions and requirements connected with conservation with usability needs, which are often at odds with historical heritage preservation. Recent decades have been marked by the search for increased accessibility not only of public and private buildings, but of the whole historical city, including for people with disabilities. Moreover, in recent years the concepts of the Smart City and the Healthy City have sought to improve accessibility both in terms of mobility (independent or assisted) and in the fruition of goods and services, also for historical cities. The principles of Inclusive Design have introduced new criteria for the improvement of public urban space, between current regulations and best practices, and have contributed to transforming "special needs" into an opportunity for social innovation. These considerations find a field of research and analysis in the historical city of Venice, which is at once a UNESCO world heritage site, a mass tourism destination drawing visitors from all over the world, and a city inhabited by an aging population. Due to its conformation, the Venetian urban fabric is only partially accessible: about four hundred bridges divide more than a hundred islands, making it almost impossible to move independently. These urban characteristics and difficulties have been the basis, over the last 20 years, for several research efforts, experiments and solutions aimed at eliminating architectural barriers, in particular for the usability of bridges. The Venetian Municipality, with the EBA Office and some external consultants, realized several devices (e.g. the "stepped ramp" and the new accessible ramps for the Venice Marathon) that could represent an innovation for the city, passing from the use of replicable mechanical devices to specific architectural projects in order to guarantee autonomy of use.
This paper presents the state-of-the-art in bridge accessibility through an analysis based on Inclusive Design principles and on current national and regional regulation. The purpose is to evaluate possible strategies that could improve performance, between the limits and possibilities of intervention. The aim of the research is to lay the foundations for a strategic program for the City of Venice that could successfully bring together both conservation and improvement requirements.
Keywords: Accessibility and inclusive design, historical heritage preservation, technological and social innovation.
69 Comparison of Methods for the Detection of Biofilm Formation in Yeast and Lactic Acid Bacteria Species Isolated from Dairy Products
Authors: Goksen Arik, Mihriban Korukluoglu
Abstract:
Lactic acid bacteria (LAB) and some yeast species are common microorganisms found in dairy products, and most of them are responsible for the fermentation of foods. Such cultures are isolated and used as starter cultures in the food industry because they standardise the final product during food processing, and the choice of starter culture is the most important step in the production of fermented food. Isolated LAB and yeast cultures which have the ability to create a biofilm layer can be preferred as starters in the food industry: biofilm formation can beneficially extend the usable life of microorganisms as starters. On the other hand, it is an undesirable property in pathogens, since the biofilm structure allows a microorganism to become more resistant to stress conditions such as the presence of antibiotics. This resistance mechanism could be turned into an advantage by promoting the effective microorganisms which are used in the food industry as starter cultures and which have the potential to stimulate the gastrointestinal system. Development of a biofilm layer is observed in some LAB and yeast strains; the resistance it confers could make these strains dominant in the human gastrointestinal microflora, allowing them to compete more easily against pathogenic microorganisms. Accordingly, in this study, 10 LAB and 10 yeast strains were isolated from various dairy products, such as cheese, yoghurt, kefir, and cream, obtained from farmers' markets and bazaars in Bursa, Turkey. All isolated strains were identified, and their ability to form biofilms was detected with two different methods and compared. The first goal of this research was to determine whether the isolates have the potential for biofilm production, and the second was to compare the validity of the two methods, known as the "tube method" and the "96-well plate-based method".
This study may offer insight into biofilm formation and its beneficial properties in LAB and yeast cultures used as starters in the food industry.
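For the 96-well plate-based method, a common way to turn optical density (OD) readings into biofilm categories uses a cut-off derived from negative-control wells. The sketch below assumes the widely used criterion ODc = mean + 3·SD of the controls, with weak/moderate/strong bands at 2·ODc and 4·ODc; these thresholds are an assumption for illustration, and the criteria actually applied should be taken from the study itself:

```python
import statistics

def classify_biofilm(od_wells, od_controls):
    """Classify biofilm formation from crystal-violet OD readings using the
    assumed cut-off ODc = mean(controls) + 3*SD(controls), with weak/moderate/
    strong bands at 2*ODc and 4*ODc (verify against a given study's criteria)."""
    odc = statistics.mean(od_controls) + 3 * statistics.stdev(od_controls)
    od = statistics.mean(od_wells)
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"

controls = [0.09, 0.10, 0.11, 0.10]      # illustrative negative-control wells
grade = classify_biofilm([0.30, 0.32, 0.31], controls)   # triplicate isolate wells
```

Replicate wells per isolate are averaged before classification, which dampens well-to-well variation.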
Keywords: Biofilm, dairy products, lactic acid bacteria, yeast.
68 Ultrasonic System for Diagnosis of Functional Gastrointestinal Disorders: Development, Verification and Clinical Trials
Authors: Eun-Geun Kim, Won-Pil Park, Dae-Gon Woo, Chang-Yong Ko, Yong-Heum Lee, Dohyung Lim, Tae-Min Shin, Han-Sung Kim, Gyoun-Jung Lee
Abstract:
Functional gastrointestinal disorders affect millions of people of all ages, regardless of race and sex. There are, however, few diagnostic methods for functional gastrointestinal disorders, because functional disorders show no evidence of organic or physical causes. Our research group recently identified that the gastrointestinal tract wall in patients with functional gastrointestinal disorders becomes more rigid than in healthy people when the abdominal regions overlying the gastrointestinal tract are palpated. The aim of this study is, therefore, to develop a diagnostic system for functional gastrointestinal disorders based on ultrasound, which can quantify this rigidity of the gastrointestinal tract wall. An ultrasound system was designed, consisting of a transmitter, ultrasonic transducer, receiver, TGC, and CPLD, and verified via a phantom test. For the phantom test, ten soft-tissue specimens were harvested from pigs; five of them were treated chemically to mimic the rigid condition of the gastrointestinal tract wall induced by functional gastrointestinal disorders. Additionally, the specimens were tested mechanically to verify that the mimic was reasonable. The customized ultrasound system was finally verified through application to human subjects with and without functional gastrointestinal disorders (Patient and Normal Groups). The mechanical test identified that the chemically treated specimens were more rigid than the normal specimens, a finding favorably compared with the result obtained from the phantom test. The phantom test also showed that the ultrasound system described the specimens' geometric characteristics well and detected alterations in the specimens: the maximum amplitude of the ultrasonic reflective signal at the interface between the fat and muscle layers was explicitly higher in the rigid specimens (0.2±0.1 Vp-p) than in the normal specimens (0.1±0.0 Vp-p). Clinical tests using our customized ultrasound system on human subjects showed that the maximum amplitudes of the ultrasonic reflective signals near the gastrointestinal tract wall were generally higher in the patient group (2.6±0.3 Vp-p) than in the normal group (0.1±0.2 Vp-p). The maximum reflective signal appeared at approximately 20 mm depth from the abdominal skin for all human subjects, corresponding to the location of the boundary layer close to the gastrointestinal tract wall. These results suggest that the newly designed ultrasound-based diagnostic system may be sufficient to diagnose functional gastrointestinal disorders.
Keywords: Functional Gastrointestinal Disorders, Diagnostic System, Phantom Test, Ultrasound System.
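The amplitude measure used above, the maximum peak-to-peak reflective signal near a given depth, can be sketched from a digitized A-scan. The sampling rate, assumed soft-tissue sound speed, and synthetic echo below are illustrative, not the device's actual parameters:

```python
import numpy as np

def max_vpp_in_window(ascan, fs, depth_mm, window_mm, c=1540.0):
    """Maximum peak-to-peak amplitude of a digitized A-scan within a depth
    window, mapping depth to two-way travel time via an assumed soft-tissue
    sound speed c (m/s)."""
    i0 = int(2.0 * (depth_mm - window_mm / 2) * 1e-3 / c * fs)
    i1 = int(2.0 * (depth_mm + window_mm / 2) * 1e-3 / c * fs)
    seg = ascan[i0:i1]
    return float(seg.max() - seg.min())

# Synthetic A-scan: low-level noise plus a 0.1 V echo burst from ~20 mm depth.
fs = 40e6
ascan = 0.001 * np.random.default_rng(0).normal(size=4000)
center = int(2.0 * 0.020 / 1540.0 * fs)
ascan[center:center + 80] += 0.1 * np.sin(2 * np.pi * 5e6 * np.arange(80) / fs)
vpp_echo = max_vpp_in_window(ascan, fs, depth_mm=20.0, window_mm=4.0)
vpp_background = max_vpp_in_window(ascan, fs, depth_mm=10.0, window_mm=4.0)
```

The echo window yields a Vp-p near the burst's 0.2 V swing, while a window away from the interface sees only the noise floor.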
67 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey of the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs, which aim to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to capture this connection, and agent-based simulation is the technique commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in the construction of agents' attributes. The influence of social networks on the development of agent interactions is also addressed: network topologies such as small-world, distance-based, and scale-free networks may be used to model economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, including Genetic Algorithms, Genetic Programming, Artificial Neural Networks and Reinforcement Learning. The most common statistical properties of stocks (the stylized facts), used for calibration and validation of ASMs, are also discussed. Furthermore, we review the major related previous studies and categorize the approaches they employ. Finally, research directions and potential research questions are discussed: future ASM research may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
Keywords: Artificial stock markets, agent based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.
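As a flavor of the agent-based simulations surveyed, here is a deliberately tiny market sketch with two classic agent types, fundamentalists and chartists, and a price that reacts to aggregate excess demand. All names and parameters are illustrative and do not reproduce any specific surveyed model:

```python
import numpy as np

def simulate_market(n_steps=500, n_fund=50, n_chart=50, p0=100.0,
                    value=100.0, lam=0.01, seed=0):
    """Toy agent-based market: fundamentalists push the price toward a
    fundamental value, chartists extrapolate the last move, and the price
    reacts to aggregate excess demand (all parameters illustrative)."""
    rng = np.random.default_rng(seed)
    prices = [p0, p0]
    for _ in range(n_steps):
        p, p_prev = prices[-1], prices[-2]
        d_fund = n_fund * (value - p) / p            # mean-reverting demand
        d_chart = n_chart * (p - p_prev) / p_prev    # trend-following demand
        excess = d_fund + d_chart + rng.normal()     # plus noise traders
        prices.append(p * (1.0 + lam * excess))
    return np.array(prices)

prices = simulate_market()
returns = np.diff(np.log(prices))   # log-returns for stylized-fact checks
```

The interplay of mean reversion and trend following is what richer ASMs exploit to reproduce stylized facts such as volatility clustering.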
66 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla
Abstract:
Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role: it is able to detect, isolate and adapt a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered faultless. In addition, the main FTC techniques are identified and classified by their characteristics into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages: one focused on the detection, isolation and identification of the fault source, and the other in charge of re-designing the control algorithm, by either fault accommodation or control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been selected as the methodology the system follows in the fault recovery process: first, the fault is detected, isolated and identified by means of a neural network; second, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
Keywords: Fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time.
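The two-stage AFTC logic described above (detect/isolate/identify, then re-design) can be sketched schematically. Note that the paper uses a neural network for detection; this toy substitutes a simple residual threshold, and the halved control action is only a placeholder for the re-design stage:

```python
def aftc_step(y_meas, y_model, u_nominal, state, threshold=0.5):
    """One supervisory step of an active FTC loop (schematic sketch):
    a residual compares measurement with a model prediction; once it exceeds
    the threshold, the fault is latched (stand-in for detection/isolation)
    and a re-designed, de-rated control action is used."""
    residual = abs(y_meas - y_model)
    if residual > threshold:
        state["faulty"] = True                      # detection + isolation
    u = 0.5 * u_nominal if state["faulty"] else u_nominal   # re-design
    return u, residual

state = {"faulty": False}
actions = []
for y_meas in (1.0, 1.2, 2.0, 1.1):    # the third sample reveals the fault
    u, _ = aftc_step(y_meas, y_model=1.0, u_nominal=10.0, state=state)
    actions.append(u)
```

Latching the fault flag matters: once the fault is identified, the re-configured law stays active even if the residual later drops.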
65 Introduction of an Approach of Complex Virtual Devices to Achieve Device Interoperability in Smart Building Systems
Authors: Thomas Meier
Abstract:
One of the major challenges for sustainable smart building systems is to support device interoperability, i.e. to connect sensor and actuator devices from different vendors and present their functionality to external applications. Furthermore, smart building systems are supposed to connect with devices that are not yet available, i.e. devices that reach the market only later. It is therefore of vital importance that a sustainable smart building platform provides an appropriate external interface that can be leveraged by external applications and smart services; such an interface must be stable, independent of specific devices, and supportive of flexible and scalable usage scenarios. A typical approach applied in smart home systems is based on a generic device interface used within the smart building platform, with device functions, even of rather complex devices, mapped to that generic base interface by means of specific device drivers. Our new approach, presented in this work, extends that approach by using the smart building system's rule engine to create complex virtual devices that can represent the most diverse properties of real devices. We examined and evaluated both approaches in a practical case study using a smart building system that we have developed, and show that our solution allows the highest degree of flexibility without affecting external application interface stability or scalability. In contrast to other systems, our approach supports complex virtual device configuration at the application layer (e.g. by administration users) instead of device configuration at the platform layer (e.g. by platform operators). Based on our work, we show that the approach supports almost arbitrarily flexible use case scenarios without affecting the stability of the external application interface. The cost of this approach is additional configuration overhead and additional resource consumption at the IoT platform level, which must be considered by platform operators. We conclude that the concept of complex virtual devices presented in this work can significantly improve the usability and device interoperability of sustainable smart building systems.
Keywords: Complex virtual devices, device integration, device interoperability, Internet of Things, smart building platform.
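The notion of a complex virtual device composed at the application layer can be sketched as follows. The class, the averaging rule and the vendor sensors are invented for illustration and do not reflect any specific platform's API:

```python
class VirtualDevice:
    """A complex virtual device composed at the application layer: it exposes
    one stable property by applying a rule to several real devices' readings,
    so external applications never see the vendor-specific devices."""
    def __init__(self, name, sources, rule):
        self.name = name
        self.sources = sources   # callables that read the real devices
        self.rule = rule         # combines the readings into one value

    def read(self):
        return self.rule([source() for source in self.sources])

# Two vendor-specific "real" sensors behind generic read callables.
temp_vendor_a = lambda: 21.5      # °C, vendor A's API wrapped in a callable
temp_vendor_b = lambda: 22.1      # °C, vendor B's API wrapped in a callable
room_temp = VirtualDevice("room_temperature",
                          [temp_vendor_a, temp_vendor_b],
                          rule=lambda vals: sum(vals) / len(vals))
value = room_temp.read()
```

Swapping a vendor sensor, or adding a third, changes only the source list; the external interface, the `room_temperature` property, stays stable.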
64 Library Aware Power Conscious Realization of Complementary Boolean Functions
Authors: Padmanabhan Balasubramanian, C. Ardil
Abstract:
In this paper, we consider the problem of logic simplification for a special class of logic functions, namely complementary Boolean functions (CBF), targeting low-power implementation in static CMOS logic style. These functions are uniquely characterized by the presence of terms where, for a canonical binary 2-tuple, D(mj) ∪ D(mk) = { } and therefore |D(mj) ∪ D(mk)| = 0 [19]; similarly, D(Mj) ∪ D(Mk) = { } and hence |D(Mj) ∪ D(Mk)| = 0. Here, 'mk' and 'Mk' represent a minterm and a maxterm respectively. We compare the circuits minimized with our proposed method against those corresponding to the factored Reed-Muller (f-RM) form, the factored Pseudo Kronecker Reed-Muller (f-PKRM) form, and the factored Generalized Reed-Muller (f-GRM) form. We opted for algebraic factorization of the Reed-Muller (RM) form and its variants, using the factorization rules of [1], as it is simple and requires far less CPU execution time than Boolean factorization. This technique has enabled us to greatly reduce the literal count as well as the gate count of such RM realizations, which generally require more cells and consequently consume more power. However, this comes at a cost to the design-for-test attributes associated with the various RM forms. Although we still preserve the defining property of those forms, viz., realizing such functionality with only select types of logic gates (AND and XOR gates), the structural integrity of the logic levels is not preserved. This consequently alters the testability properties of such circuits, i.e., it may increase, decrease, or leave unchanged the number of test input vectors needed for their exhaustive testing, subsequently affecting their generalized test vector computation.
We do not consider the issue of design-for-testability here; instead, we focus on the power consumption of the final logic implementation after realization in a conventional CMOS process technology (0.35 micron TSMC process). The quality of the resulting circuits, evaluated on the basis of an established cost metric, viz., power consumption, demonstrates average savings of 26.79% for the samples considered in this work, besides reductions in the number of gates and input literals of 39.66% and 12.98% respectively, in comparison with the other factored RM forms.
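The Reed-Muller realizations compared above all start from a function's AND/XOR normal form. As a minimal sketch (not the authors' minimization tool), the positive-polarity Reed-Muller coefficients and the resulting input-literal count can be computed from a truth table with the standard GF(2) butterfly transform; the truth-table indexing convention here is an assumption:

```python
def rm_coefficients(truth):
    """Positive-polarity Reed-Muller (ANF) coefficients over GF(2),
    computed in place with the butterfly (Moebius) transform.
    truth[i] is f evaluated at the assignment whose bits are i."""
    a = list(truth)
    n = len(a).bit_length() - 1
    assert len(a) == 1 << n, "truth table length must be a power of two"
    for b in range(n):
        for i in range(len(a)):
            if i & (1 << b):
                a[i] ^= a[i ^ (1 << b)]  # fold in the cofactor with bit b clear
    return a

def literal_count(coeffs):
    """Total input literals over all product terms with a set coefficient."""
    return sum(bin(i).count("1") for i, c in enumerate(coeffs) if c)

# x1 XOR x0 has ANF x0 + x1: two single-literal terms
xor_coeffs = rm_coefficients([0, 1, 1, 0])
print(xor_coeffs)                 # [0, 1, 1, 0]
print(literal_count(xor_coeffs))  # 2
```

The factored variants (f-RM, f-PKRM, f-GRM) then apply algebraic factorization on top of such expansions to shrink the literal count further.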
Keywords: Reed-Muller forms, Logic function, Hamming distance, Algebraic factorization, Low power design.
PDF Downloads 181163
Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep
Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk
Abstract:
The article presents results of a project aimed at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a methodology of processing and interpretation that follows world trends yet is unique, adjusted to the data, local variations, and petroleum characteristics of the area. A number of tasks were accomplished in order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, and to identify the most preferable well sites where the largest gas accumulations are anticipated. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury intrusion porosimetry (MICP), micro-CT, and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g., the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), detailed seismic survey data have been reprocessed and reinterpreted with the use of integrated geophysical investigations. Quantitative, structural, and parametric models for selected areas of the Carpathian Foredeep are constructed on the basis of integrated, detailed 3D computer models. Modeling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modeling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.
Keywords: Autochthonous Miocene, Carpathian Foredeep, Poland, shale gas.
PDF Downloads 74862
A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling
Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal
Abstract:
Datasets are becoming important assets in themselves and can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, gathered from self-study reports prepared by 32 engineering programs accredited by ABET. Manual mapping (classification) of such data is a notoriously tedious, time-consuming process that also requires domain experts, who are mostly unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, features have been selected, and preliminary exploratory data analysis has been performed so as to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-Label Sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors, and Back-Propagation Multi-Label Learning). The techniques have been compared using five well-known measures (Accuracy, Hamming Loss, Micro-F, Macro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared with the other multi-label classification methods, while Classifier Chains showed the worst performance. To recap, the benchmark has achieved promising results by utilizing preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
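The benchmark pipeline above (multi-label classifiers scored with Hamming Loss and F-measures) can be sketched with scikit-learn; the paper's actual tooling is not stated, and the synthetic data here merely stands in for the ABET collection. Binary Relevance is emulated with a one-vs-rest wrapper, i.e., one independent binary classifier per label:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for objective-to-outcome mappings (multi-label targets).
X, Y = make_multilabel_classification(n_samples=400, n_features=30,
                                      n_classes=7, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

# Binary Relevance: fit one binary classifier per outcome label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X_tr, Y_tr)
pred = clf.predict(X_te)

print("Hamming Loss:", hamming_loss(Y_te, pred))
print("Micro-F:", f1_score(Y_te, pred, average="micro"))
print("Macro-F:", f1_score(Y_te, pred, average="macro", zero_division=0))
```

The other benchmarked methods (Label Powerset, Classifier Chains, ensembles, ML-kNN, BP-MLL) would slot into the same fit/predict/score loop.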
Keywords: Benchmark collection, program educational objectives, student outcomes, ABET, Accreditation, machine learning, supervised multiclass classification, text mining.
PDF Downloads 83761
Soil/Phytofisionomy Relationship in Southeast of Chapada Diamantina, Bahia, Brazil
Authors: Marcelo Araujo da Nóbrega, Ariel Moura Vilas Boas
Abstract:
This study aims to characterize the physicochemical aspects of the soils of southeastern Chapada Diamantina, Bahia, in relation to the phytophysiognomies of the area: rupestrian field, small savanna (savanna fields), small dense savanna (savanna fields), savanna (Cerrado), dry thorny forest (Caatinga), dry thorny forest/savanna, scrub (Carrasco, an ecotone), forest island (seasonal semi-deciduous forest, Capão), and seasonal semi-deciduous forest. To achieve this objective, soil samples were collected in each plant formation and analyzed in the soil laboratory of ESALQ-USP in order to assess soil fertility through determination of pH, organic matter, phosphorus, potassium, calcium, magnesium, potential acidity, sum of bases, cation exchange capacity, and base saturation. The composition of soil particles, i.e., the texture, was also determined, a step performed in the terrestrial ecosystems laboratory of the Department of Ecology of USP and in the soil laboratory of ESALQ. Another important aspect studied was the variation in vegetation cover across the region as a function of soil moisture in the different existing physiographic environments. A further comparison was made between average soil moisture data and precipitation data from three locations with very different phytophysiognomies. The soils found in this part of Bahia can be classified into five classes, with a predominance of oxisols. All of these classes show a great diversity of physical and chemical properties, as can be seen in photographs and in particle size and fertility analyses. The deepest soils are located in the Central Pediplano of Chapada Diamantina, where the dirty field, the clean field, the Carrasco scrub, and the seasonal semi-deciduous forest (Capão) occur; the shallowest soils were found in the rupestrian field, the dry thorny forest, and the savanna fields, the latter located on a hillside.
As for variations in soil water across the region, the data indicate large spatial variations in moisture in both the rainy and dry periods.
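The fertility indices listed above are linked by standard soil-analysis formulas: sum of bases SB = Ca + Mg + K, potential CEC = SB + potential acidity (H+Al), and base saturation V% = 100·SB/CEC, all bases in cmolc/dm3. A minimal sketch with hypothetical values (not the study's measurements):

```python
def fertility_indices(ca, mg, k, h_al):
    """Sum of bases (SB), potential CEC and base saturation (V%).
    Inputs are exchangeable Ca, Mg, K and potential acidity (H+Al),
    all in cmolc/dm3; the example values below are hypothetical."""
    sb = ca + mg + k
    cec = sb + h_al        # potential CEC includes potential acidity
    v = 100.0 * sb / cec   # base saturation, percent
    return sb, cec, v

sb, cec, v = fertility_indices(ca=3.0, mg=1.0, k=0.2, h_al=3.8)
print(sb, cec, round(v, 1))  # 4.2 8.0 52.5
```

A V% above roughly 50 is conventionally read as a eutrophic (base-rich) soil, below it as dystrophic, which is one way such laboratory results are compared across phytophysiognomies.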
Keywords: Bahia, Chapada Diamantina, phytophysiognomies, soils.
PDF Downloads 581