Search results for: time complexity measurements
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 20974

20704 Measurement of Innovation Performance

Authors: M. Chobotová, Ž. Rylková

Abstract:

A time full of changes, associated with globalization, tougher competition, shifts in market structures and economic downturn, forces companies to think about their competitive advantages. These changes can bring a company a competitive advantage and help improve its competitive position in the market. European Union policy is focused on fast-growing innovative companies which respond quickly to market demands and consequently increase their competitiveness. To meet those objectives, companies need the right conditions and the support of their state.

Keywords: innovation, performance, measurement metrics, indices

Procedia PDF Downloads 362
20703 Establishing Control Chart Limits for Rounded Measurements

Authors: Ran Etgar

Abstract:

The process of rounding off measurements in continuous variables is commonly encountered. Although it usually has minor effects, sometimes it can lead to poor outcomes in statistical process control using the X̄ chart. The traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter ȳ is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables.
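
The abstract's corrective tables are not reproduced here; as a rough illustration of why rounding matters for the X̄ chart, the Python sketch below (hypothetical process parameters, not from the paper) computes classical Shewhart limits from raw versus rounded subgroup data. When the rounding interval is comparable to the process standard deviation, the limits shift and the subgroup means fall on a coarse grid, which is the distortion the proposed method is meant to handle.

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, m = 10.0, 0.05, 5, 200     # process mean, std, subgroup size, subgroups
    raw = rng.normal(mu, sigma, size=(m, n))
    rounded = np.round(raw, 1)               # round-off interval (0.1) is coarse vs. sigma

    def shewhart_limits(x):
        """Classical X-bar limits: centre line +/- 3*sigma_hat/sqrt(n)."""
        s = x.std(axis=1, ddof=1)
        c4 = 0.9400                          # bias-correction constant for n = 5
        sigma_hat = s.mean() / c4
        centre = x.mean(axis=1).mean()
        half_width = 3 * sigma_hat / np.sqrt(x.shape[1])
        return centre - half_width, centre, centre + half_width

    print("raw     LCL/CL/UCL:", shewhart_limits(raw))
    print("rounded LCL/CL/UCL:", shewhart_limits(rounded))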

Keywords: SPC, round-off data, control limit, rounding error

Procedia PDF Downloads 62
20702 Prototype of Over Dimension Over Loading (ODOL) Freight Transportation Monitoring System Based on Arduino Mega 'Sabrang': A Case Study in Klaten, Indonesia

Authors: Chairul Fajar, Muhammad Nur Hidayat, Muksalmina

Abstract:

The issue of Over Dimension Over Loading (ODOL) in Indonesia remains a significant challenge, causing traffic accidents, disrupting traffic flow, accelerating road damage, and potentially leading to bridge collapses. Klaten Regency, located on the slopes of Mount Merapi along the Woro River in Kemalang District, has potential Class C excavation materials such as sand and stone. Data from the Klaten Regency Transportation Department indicates that ODOL violations account for 72% of vehicles, while non-violating vehicles make up only 28%. ODOL involves modifying factory-standard vehicles beyond the limits specified in the Type Test Registration Certificate (SRUT) to save costs and travel time. This study aims to develop a prototype ‘Sabrang’ monitoring system based on Arduino Mega to control and monitor ODOL freight transportation in the mining of Class C excavation materials in Klaten Regency. The prototype is designed to automatically measure the dimensions and weight of objects using a microcontroller. The data analysis techniques used in this study include the normality test and the paired t-test, comparing the sensor measurement results against scaled reference objects. The study results indicate differences in measurement validation under room-temperature and ambient-temperature conditions. For measurements at room temperature, H0 was accepted in the majority of cases, meaning there was no significant difference in measurements when the prototype tool was used. Conversely, for measurements at ambient temperature, H0 was rejected in the majority of cases, indicating a significant difference in measurements when the prototype tool was used. In conclusion, the ‘Sabrang’ monitoring system prototype is effective for controlling ODOL, although measurement results are influenced by temperature conditions. This study is expected to assist in the monitoring and control of ODOL, thereby enhancing traffic safety and road infrastructure.
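
The statistical comparison described above (normality check followed by a paired t-test between prototype readings and reference values) can be sketched as follows; the numbers are invented and scipy is assumed to be available, so this only mirrors the shape of the analysis, not the study's data.

    import numpy as np
    from scipy import stats

    # Hypothetical paired weight readings: reference scale vs. 'Sabrang' prototype (kg)
    reference = np.array([1200, 1350, 1500, 1420, 1610, 1330, 1480, 1550], dtype=float)
    prototype = np.array([1195, 1362, 1488, 1431, 1598, 1341, 1470, 1562], dtype=float)

    diff = prototype - reference
    print("Shapiro-Wilk on differences:", stats.shapiro(diff))   # normality test
    t_stat, p_value = stats.ttest_rel(prototype, reference)      # paired t-test
    print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
    # p >= 0.05 -> H0 retained: no significant difference between prototype and reference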

Keywords: over dimension over loading, prototype, microcontroller, Arduino, normality test, paired t-test

Procedia PDF Downloads 13
20701 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment and to improve product quality. Information about the variability of different soil attributes within a field is highly essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, since conventional sampling is expensive and time-consuming. Adaptive sampling has been proven to be an accurate and affordable technique for planning site-specific management of agricultural inputs within a field. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was split into validation and calibration groups, and the calibration group was further sub-grouped into three sets with different measurement-pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using geostatistical techniques. The high-uncertainty areas of each set were grouped using image segmentation in MATLAB, and areas of high and low uncertainty were then separated. Finally, an adaptive re-survey was carried out over the areas of high uncertainty. Adaptive re-surveying significantly reduced the time required to resample the whole field and resulted in ECa maps with minimal error. For the most spacious transect, the root mean square error (RMSE) obtained from the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that of the ECa obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while preserving the accuracy of the observations.
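
As a simplified illustration of the selection step described above, the sketch below takes a stack of conditional simulations (synthetic values, hypothetical 80th-percentile threshold), computes the per-cell spread as the spatial uncertainty, and flags the most uncertain cells for adaptive re-survey; the geostatistical simulation and the RMSE validation against the hold-out set are assumed to be done elsewhere.

    import numpy as np

    rng = np.random.default_rng(1)
    # Hypothetical stack: 50 conditional simulations of ECa on a 40 x 60 grid (mS/m)
    simulations = 20 + 5 * rng.random((50, 40, 60)) * rng.random((1, 40, 60))

    cell_std = simulations.std(axis=0)           # spatial uncertainty per grid cell
    threshold = np.percentile(cell_std, 80)      # keep the most uncertain 20% of cells
    resurvey_mask = cell_std > threshold
    print("cells flagged for adaptive re-survey:",
          int(resurvey_mask.sum()), "of", resurvey_mask.size)

    def rmse(estimate, validation):
        """Root mean square error against the validation measurements."""
        return float(np.sqrt(np.mean((estimate - validation) ** 2)))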

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 118
20700 Automatic Seizure Detection Using Weighted Permutation Entropy and Support Vector Machine

Authors: Noha Seddik, Sherine Youssef, Mohamed Kholeif

Abstract:

The field of automated epileptic seizure detection has emerged in recent years; it involves analyzing the Electroencephalogram (EEG) signals instead of the traditional visual inspection performed by expert neurologists. In this study, a Support Vector Machine (SVM) that uses Weighted Permutation Entropy (WPE) as the input feature is proposed for classifying normal and seizure EEG records. WPE is a modified statistical parameter of the permutation entropy (PE) that measures the complexity and irregularity of a time series. It incorporates both the mapped ordinal pattern of the time series and the information contained in the amplitude of its sample points. The proposed system utilizes the fact that entropy-based measures of EEG segments during an epileptic seizure are lower than those of normal EEG.
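
A compact reading of the WPE feature described above: ordinal patterns of short embedding windows are counted, but each window is weighted by its variance so that amplitude information is retained. The sketch below is one common formulation (the embedding order and test signals are arbitrary choices, not the paper's settings); the resulting feature values would then be fed to an SVM classifier.

    import math
    import numpy as np

    def weighted_permutation_entropy(x, order=3, delay=1, normalize=True):
        """WPE: ordinal-pattern entropy with each window weighted by its variance."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (order - 1) * delay
        weights, total = {}, 0.0
        for i in range(n):
            window = x[i:i + order * delay:delay]
            w = window.var()                       # amplitude (weight) information
            key = tuple(np.argsort(window))        # ordinal pattern
            weights[key] = weights.get(key, 0.0) + w
            total += w
        probs = np.array(list(weights.values())) / total
        probs = probs[probs > 0]
        wpe = -np.sum(probs * np.log(probs))
        return wpe / math.log(math.factorial(order)) if normalize else wpe

    rng = np.random.default_rng(0)
    irregular = rng.standard_normal(4096)                    # noise-like signal
    rhythmic = np.sin(np.linspace(0, 60 * np.pi, 4096))      # more regular rhythm
    print("WPE (irregular):", round(weighted_permutation_entropy(irregular), 3))
    print("WPE (rhythmic): ", round(weighted_permutation_entropy(rhythmic), 3))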

Keywords: electroencephalogram (EEG), epileptic seizure detection, weighted permutation entropy (WPE), support vector machine (SVM)

Procedia PDF Downloads 356
20699 Optimization of Titanium Leaching Process Using Experimental Design

Authors: Arash Rafiei, Carroll Moore

Abstract:

The leaching process, as the first stage of hydrometallurgy, is a multidisciplinary system involving material properties, chemistry, reactor design, mechanics and fluid dynamics. Therefore, optimizing a leaching system by pure scientific methods requires a great deal of time and expense. In this work, a mixture of two titanium ores and one titanium slag is used for extracting titanium in the leaching stage of the TiO2 pigment production procedure. Optimum titanium extraction can be obtained from the following strategies: i) maximizing titanium extraction without selective digestion; and ii) optimizing selective titanium extraction by balancing maximum titanium extraction against minimum impurity digestion. The main difference between the two strategies lies in the process optimization framework. In the first strategy, the most important stage of the production process is treated as the main stage and the remaining stages are adapted with respect to it. The second strategy optimizes the performance of more than one stage at once. The second strategy has more technical complexity than the first, but it brings more economic and technical advantages for the leaching system. Each strategy has its own optimum operational zone, which is not the same as the other's, and the best operational zone is chosen according to the complexity and the economic and practical aspects of the leaching system. The experimental design has been carried out using the Taguchi method. The most important advantages of this methodology are that it involves different technical aspects of the leaching process, minimizes the number of required experiments as well as time and expense, and accounts for parameter interactions through the principles of multifactor-at-a-time optimization. Leaching tests have been done at laboratory batch scale with appropriate temperature control. The leaching tank geometry has been treated as an important factor to provide comparable agitation conditions. Data analysis has been done using reactor design and mass balancing principles. Finally, the optimum zones for the operational parameters are determined for each leaching strategy and discussed with respect to their economic and practical aspects.
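
The abstract does not list the factors, levels or responses; purely as an illustration of the Taguchi-style analysis it refers to, the sketch below evaluates a hypothetical L9 orthogonal array with a larger-is-better signal-to-noise ratio for titanium extraction and reports the mean S/N per factor level.

    import numpy as np

    # Hypothetical L9(3^4) array: 4 leaching factors at 3 levels each, coded 0/1/2
    L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
                   [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
                   [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])
    # Hypothetical titanium extraction (%) for the nine runs
    extraction = np.array([62.0, 68.5, 71.2, 70.1, 75.4, 66.3, 73.8, 69.9, 78.2])

    sn = -10 * np.log10(1.0 / extraction ** 2)   # larger-is-better S/N ratio
    for factor in range(L9.shape[1]):
        means = [sn[L9[:, factor] == level].mean() for level in range(3)]
        print(f"factor {factor}: mean S/N per level = {np.round(means, 2)}, "
              f"best level = {int(np.argmax(means))}")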

Keywords: titanium leaching, optimization, experimental design, performance analysis

Procedia PDF Downloads 359
20698 The Material-Process Perspective: Design and Engineering

Authors: Lars Andersen

Abstract:

The development of design and engineering in large construction projects is characterized by an increased degree of flattening out of formal structures, extended use of parallel and integrated processes (‘Integrated Concurrent Engineering’) and an increased number of expert disciplines. The integration process is based on ongoing collaborations, dialogues, intercommunication and comments on each other's work (iterations). This process, based on reciprocal communication between actors and disciplines, triggers value creation. However, communication between equals is not in itself sufficient to create effective decision making. The complexity of the process and time pressure contribute to an increased risk of a deficit of decisions and loss of process control. The paper refers to a study that aims at developing a resilient decision-making system that does not conflict with communication processes based on equality between the disciplines in the process. The study covers the construction of a hospital, following the phases of design, engineering and physical building. The research method is a combination of formative process research, process tracking and phenomenological analyses. The study traced challenges and problems in the building process back to the projection substrates (drawings and models) and further to the organization of the engineering and design phase. A comparative analysis of traditional and new ways of organizing the projecting made it possible to uncover an implicit material order, or structure, in the process. This uncovering implied the development of a material-process perspective. According to this perspective, the complexity of the process is rooted in material-functional differentiation. This differentiation presupposes a structuring material (the skeleton of the building) that coordinates the other types of material. Each expert discipline's competence is related to one or a set of materials. The architect, the consulting engineer for construction, etc., have their competencies related to the structuring material and, inherent in this, coordination competence. When dialogues between the disciplines concerning the coordination between them do not result in agreement, the disciplines with responsibility for the structuring material decide the interface issues. Based on these premises, this paper develops a self-organized, expert-driven, interdisciplinary decision-making system.

Keywords: collaboration, complexity, design, engineering, materiality

Procedia PDF Downloads 212
20697 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data

Authors: K. Sathishkumar, V. Thiagarasu

Abstract:

Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians to understand pathophysiological mechanisms, to make diagnoses and prognoses, and to choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process to reveal natural structures and identify interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
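
As a small concrete example of the clustering step discussed above, the sketch below runs K-means on a synthetic expression matrix and scores each cluster count with the silhouette coefficient (scikit-learn assumed); a real analysis would of course use measured microarray profiles rather than simulated ones.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Synthetic expression matrix: 300 genes x 20 samples built from three gene groups
    base = np.repeat(rng.normal(0.0, 1.0, (3, 20)), 100, axis=0)
    expression = base + rng.normal(0.0, 0.3, base.shape)

    X = StandardScaler().fit_transform(expression)
    for k in (2, 3, 4, 5):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(f"k={k}: silhouette = {silhouette_score(X, labels):.3f}")
    # The silhouette should peak near k=3, matching the simulated group structure.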

Keywords: microarray technology, gene expression data, clustering, gene selection

Procedia PDF Downloads 312
20696 Empirical Acceleration Functions and Fuzzy Information

Authors: Muhammad Shafiq

Abstract:

In accelerated life testing approaches, lifetime data are obtained under various conditions that are considered more severe than the usual operating condition. Classical techniques are based on precise measurements and are used to model variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations lead to pseudo results. This study was aimed at examining the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed an increased fuzziness in the transformed lifetimes as compared to the input data.

Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data

Procedia PDF Downloads 284
20695 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in South Indian population

Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty

Abstract:

The association between human physique and morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also been done on the correlation between the grade of atherosclerosis, coronary artery diseases and anthropometrical measurements. However, the number of autopsy-based studies is drastically smaller. It has been suggested that while, in living subjects, it would be expensive, difficult, and even harmful to subject them to imaging modalities like CT scans and procedures involving contrast media to study mild atherosclerosis, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate the anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI) and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA) and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders, aged between 18 and 75 years. The grading of atherosclerosis was done according to criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and the derived indices were found to be significantly correlated with each other in both genders, except for age, which was found to have a significant correlation only with the WHR. In both genders, a severe degree of atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA is significantly related to the WHR in males. The grade of atherosclerosis in the LCA and LADA is significantly related to the WHR in females. A significant relation was observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in males. A significant relation was observed between the grade of atherosclerosis in the RCA and WC and WHR, and between the grade of atherosclerosis in the LADA and HC, in females. Anthropometric measurements/indices of obesity can be an effective means to identify high-risk cases of atherosclerosis at an early stage, which can be effective in reducing the associated cardiac morbidity and mortality. A person with anthropometric measurements suggestive of mild atherosclerosis can be advised to modify their lifestyle, along with decreasing their exposure to the other risk factors. Those with measurements suggestive of a higher degree of atherosclerosis can be subjected to confirmatory procedures to start effective treatment.
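
The indices named above are simple ratios, and their association with an ordinal atherosclerosis grade can be checked with a rank correlation; the sketch below uses invented records and Spearman's rho purely to show the computation, not the study's actual statistics.

    import numpy as np
    from scipy import stats

    # Hypothetical autopsy records: weight (kg), height (m), waist and hip circumference (cm)
    weight = np.array([72, 85, 64, 90, 78, 69, 95, 81], dtype=float)
    height = np.array([1.68, 1.74, 1.60, 1.77, 1.71, 1.65, 1.80, 1.73])
    waist = np.array([86, 98, 80, 104, 92, 84, 108, 95], dtype=float)
    hip = np.array([96, 102, 94, 108, 100, 95, 110, 101], dtype=float)
    lada_grade = np.array([1, 3, 1, 5, 2, 1, 6, 3])   # hypothetical AHA-style grades

    bmi = weight / height ** 2
    whr = waist / hip
    for name, index in (("WC", waist), ("HC", hip), ("BMI", bmi), ("WHR", whr)):
        rho, p = stats.spearmanr(index, lada_grade)
        print(f"{name}: Spearman rho = {rho:.2f}, p = {p:.3f}")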

Keywords: atherosclerosis, coronary artery disease, indices, obesity

Procedia PDF Downloads 56
20694 Rounded-off Measurements and Their Implication on Control Charts

Authors: Ran Etgar

Abstract:

The process of rounding off measurements in continuous variables is commonly encountered. Although it usually has minor effects, sometimes it can lead to poor outcomes in statistical process control using the X̄-chart. The traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter (Ȳ) is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables.

Keywords: inaccurate measurement, SPC, statistical process control, rounded-off, control chart

Procedia PDF Downloads 10
20693 Length Dimension Correlates of Longitudinal Physical Conditioning on Indian Male Youth

Authors: Seema Sharma Kaushik, Dhananjoy Shaw

Abstract:

Various length dimensions of the body have been variables of interest in kinanthropometric research. However, the inclusion of length measurements in various studies remains restricted to reflecting the characteristics of a particular game/sport at a particular time. Hence, the present investigation was conducted to study various length-dimension correlates of a longitudinal physical conditioning program on Indian male youth. The study was conducted on 90 Indian male youth. The sample was equally divided into three groups, namely progressive load training (PLT), constant load training (CLT) and no load training (NL). The variables included sitting height, leg length, arm length and foot length. The study was conducted by adopting a multi-group repeated-measures design. The three groups were measured four times, following the completion of each of the three meso-cycles of six weeks' duration each. The measurements were taken using standard landmarks and procedures. Mean, standard deviation and analysis of covariance were computed to analyze the data statistically. Post-hoc analysis was conducted for the significant F-ratios at the 0.05 level. The study concluded that the longitudinal physical conditioning program had a significant effect on various length dimensions of Indian male youth.

Keywords: Indian male youth, longitudinal, length dimensions, physical conditioning

Procedia PDF Downloads 140
20692 Analysis of Direct Current Motor in LabVIEW

Authors: E. Ramprasath, P. Manojkumar, P. Veena

Abstract:

DC motors were widely used in past centuries and were proudly known as the workhorse of industrial systems until the invention of the AC induction motor, which brought a huge revolution in industry. Since then, the use of DC machines has decreased due to numerous factors such as reliability, robustness and complexity, and they lost their fame due to their losses. A new methodology is proposed to construct a DC motor model through simulation in LabVIEW to get an idea of its real-time performance and to examine whether a change in parameters might bring larger improvements in losses and reliability.
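
LabVIEW models are graphical, so they cannot be quoted here; as a text-based stand-in for the kind of simulation the abstract describes, the sketch below integrates the standard armature-circuit and mechanical equations of a permanent-magnet DC motor with made-up parameters and reports the copper (I²R) loss, the main loss term a parameter change would affect.

    import numpy as np

    # Hypothetical permanent-magnet DC motor parameters (SI units)
    R, L = 1.0, 0.05          # armature resistance (ohm) and inductance (H)
    Kt = Ke = 0.05            # torque constant and back-EMF constant
    J, b = 1e-4, 1e-5         # rotor inertia (kg m^2) and viscous friction
    V, T_load = 12.0, 0.02    # supply voltage (V) and load torque (N m)

    dt, T = 1e-4, 0.5
    i = w = 0.0               # armature current (A), angular speed (rad/s)
    for _ in range(int(T / dt)):              # forward-Euler integration
        di = (V - R * i - Ke * w) / L
        dw = (Kt * i - b * w - T_load) / J
        i += di * dt
        w += dw * dt

    print(f"final current = {i:.3f} A, speed = {w:.1f} rad/s, "
          f"copper loss = {R * i**2:.2f} W")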

Keywords: analysis, characteristics, direct current motor, LabVIEW software, simulation

Procedia PDF Downloads 536
20691 2D Numerical Modeling of Ultrasonic Measurements in Concrete: Wave Propagation in a Multiple-Scattering Medium

Authors: T. Yu, L. Audibert, J. F. Chaix, D. Komatitsch, V. Garnier, J. M. Henault

Abstract:

Linear ultrasonic techniques play a major role in Non-Destructive Evaluation (NDE) of civil engineering structures in concrete since they can meet operational requirements. The interpretation of ultrasonic measurements could be improved by a better understanding of ultrasonic wave propagation in a multiple-scattering medium. This work aims to develop a 2D numerical model of ultrasonic wave propagation in a heterogeneous medium, like concrete, integrating the multiple-scattering phenomena in the SPECFEM software. The coherent field of multiple scattering is obtained by averaging numerical wave fields, and it is used to determine the effective phase velocity and attenuation corresponding to an equivalent homogeneous medium. First, this model is applied to one scattering element (a cylinder) in a homogeneous medium in a linear-elastic system, and it is validated through comparison with the analytical solution. Then, several cases of multiple scattering by a set of randomly located cylinders or polygons are simulated to perform parametric studies on the influence of frequency and of scatterer size, concentration and shape. The effective properties are also compared with the predictions of the Waterman-Truell model to verify its validity. Finally, the viscoelastic behavior of the mortar is introduced in the simulation in order to consider the dispersion and attenuation due to the porosity included in the cement paste. In the future, further steps will be developed: comparison with experimental results, interpretation of NDE measurements, and optimization of NDE parameters before an auscultation.

Keywords: attenuation, multiple-scattering medium, numerical modeling, phase velocity, ultrasonic measurements

Procedia PDF Downloads 261
20690 Further Analysis of Global Robust Stability of Neural Networks with Multiple Time Delays

Authors: Sabri Arik

Abstract:

In this paper, we study the global asymptotic robust stability of delayed neural networks with norm-bounded uncertainties. By employing Lyapunov stability theory and the homeomorphic mapping theorem, we derive some new types of sufficient conditions ensuring the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. An important aspect of our results is their low computational complexity, as the reported results can be verified by checking some properties of symmetric matrices associated with the uncertainty sets of the network parameters. The obtained results are shown to be generalizations of some previously published corresponding results. Some comparative numerical examples are also constructed to compare our results with closely related existing results in the literature.

Keywords: neural networks, delayed systems, Lyapunov functionals, stability analysis

Procedia PDF Downloads 513
20689 Observer-Based Control Design for Double Integrators Systems with Long Sampling Periods and Actuator Uncertainty

Authors: Tomas Menard

Abstract:

The design of control laws for engineering systems has been investigated for many decades. While many results are concerned with continuous systems with continuous output, nowadays many controlled systems have to transmit their output measurements through a network, hence making the output discrete-time. It is well known that sampling the output of a system whose control law is based on the continuous output may render the system unstable, especially when the sampling period is long compared to the system dynamics. The control design then has to be adapted in order to cope with this issue. In this paper, we consider systems which can be modeled as a double integrator with uncertainty on the input, since many mechanical systems can be put in such a form. We present a control scheme based on an observer that uses only discrete-time measurements and provides a continuous-time estimate of the state, combined with a continuous control law, which stabilizes a system with second-order dynamics even in the presence of uncertainty. It is further shown that arbitrarily long sampling periods can be dealt with by properly setting the control scheme parameters.
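
A minimal numerical sketch of the kind of scheme described above (gains, sampling period and disturbance are made up, and the paper's design equations are not reproduced): a double integrator with an unknown constant input is controlled from an observer that predicts continuously between samples and is corrected only when a position sample arrives.

    import numpy as np

    k1, k2 = 1.0, 1.4        # state-feedback gains (illustrative)
    l1, l2 = 0.7, 1.0        # per-sample observer correction gains (illustrative)
    Ts, h, T = 0.2, 1e-3, 20.0
    d = 0.3                  # actuator uncertainty, unknown to the observer

    x = np.array([1.0, 0.0])      # true state: position, velocity
    xh = np.array([0.0, 0.0])     # observer estimate
    t, next_sample = 0.0, 0.0
    while t < T:
        if t >= next_sample:                      # discrete-time measurement arrives
            e = x[0] - xh[0]
            xh = xh + np.array([l1 * e, l2 * e])  # output injection at the sample
            next_sample += Ts
        u = -k1 * xh[0] - k2 * xh[1]              # continuous law on the estimate
        x = x + h * np.array([x[1], u + d])       # true plant with uncertain input
        xh = xh + h * np.array([xh[1], u])        # observer prediction (model only)
        t += h

    print("final position:", round(x[0], 3),
          " estimation error:", round(x[0] - xh[0], 3))
    # The loop stays stable despite the long sampling period; without integral action
    # the unknown constant input leaves a small position offset.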

Keywords: dynamical system, control law design, sampled output, observer design

Procedia PDF Downloads 176
20688 Measurement and Analysis of Human Hand Kinematics

Authors: Tamara Grujic, Mirjana Bonkovic

Abstract:

Measurements and quantitative analysis of the kinematic parameters of human hand movements have an important role in different areas such as hand function rehabilitation, the modeling of multi-digit robotic hands, and the development of man-machine interfaces. In this paper, the assessment and evaluation of the reach-to-grasp movement using a computerized and robot-assisted method is described. The experiment involved measurements of the hand positions of seven healthy subjects while grasping three objects of different shapes and sizes. The results showed that three dominant phases of the reach-to-grasp movement could be clearly identified.

Keywords: human hand, kinematics, measurement and analysis, reach-to-grasp movement

Procedia PDF Downloads 453
20687 Efficient Semi-Systolic Finite Field Multiplier Using Redundant Basis

Authors: Hyun-Ho Lee, Kee-Won Kim

Abstract:

Arithmetic operations over GF(2^m) have been extensively used in error-correcting codes and public-key cryptography schemes. Finite field arithmetic includes addition, multiplication, division and inversion operations. Addition is very simple and can be implemented with an extremely simple circuit. The other operations are much more complex. Multiplication is the most important for cryptosystems, such as the elliptic curve cryptosystem, since exponentiation, division, and the multiplicative inverse can be computed by performing multiplication iteratively. In this paper, we present a parallel computation algorithm that performs Montgomery multiplication over a finite field using a redundant basis. Based on this multiplication algorithm, we also present an efficient semi-systolic multiplier over the finite field. The multiplier has lower space and time complexity than related multipliers. Compared to the corresponding existing structures, the multiplier saves at least 5% area, 50% time, and 53% area-time (AT) complexity. Accordingly, it is well suited for VLSI implementation and can easily be applied as a basic component for computing complex operations over a finite field, such as inversion and division.
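
The semi-systolic redundant-basis architecture itself is a hardware design and is not reproduced here; as a plain software reference for what any GF(2^m) multiplier has to compute, the sketch below multiplies polynomial-basis elements of GF(2^163) using an assumed standard reduction polynomial. Such a routine is the kind of golden model a hardware multiplier would be cross-checked against, not the paper's Montgomery algorithm.

    M = 163
    # Assumed reduction polynomial for GF(2^163): x^163 + x^7 + x^6 + x^3 + 1
    F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

    def gf2m_mul(a, b, m=M, f=F):
        """Bit-serial multiplication in GF(2^m); elements are integers (polynomial basis)."""
        r = 0
        for i in range(m):                      # carry-less multiply: XOR shifted copies of a
            if (b >> i) & 1:
                r ^= a << i
        for i in range(2 * m - 2, m - 1, -1):   # reduce modulo the field polynomial
            if (r >> i) & 1:
                r ^= f << (i - m)
        return r

    a, b = 0x5A5A5A5A5, 0x123456789ABCDEF
    c = gf2m_mul(a, b)
    assert gf2m_mul(a, 1) == a             # multiplying by 1 is the identity
    assert gf2m_mul(a, b ^ 1) == c ^ a     # multiplication is linear over GF(2)
    print(hex(c))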

Keywords: finite field, Montgomery multiplication, systolic array, cryptography

Procedia PDF Downloads 279
20686 Trade-Offs between Verb Frequency and Syntactic Complexity in Children with Developmental Language Disorder

Authors: Pui I. Chao, Shanju Lin

Abstract:

Purpose: Children with developmental language disorder (DLD) have persistent language difficulties and often face great challenges when demands are high. The aim of this study was to investigate whether verb frequency would trade-off with syntactic complexity when they talk. Method: Forty-five children with DLD, 45 chronological age matches with TD (AGE), and 45 MLU-matches with TD (MLU) who were Mandarin speakers were selected from the previous study. Language samples were collected under three contexts: conversation about children’s family and school, story retelling, and free play. MLU, verb density, utterance length difference, verb density difference, and average verb frequency were calculated and further analyzed by ANOVAs. Results: Children with DLD and their MLU matches produced shorter utterances and used fewer verbs in expressions than the AGE matches. Compared to their AGE matches, the DLD group used more verbs and verbs with lower frequency in shorter utterances but used fewer verbs and verbs with higher frequency in longer utterances. Conclusion: Mandarin-speaking children with DLD showed difficulties in verb usage and were more vulnerable to trade-offs than their age-matched peers in utterances with high demand. As a result, task demand should be taken into account as speech-language pathologists assess whether children with DLD have adequate abilities in verb usage.

Keywords: developmental language disorder, syntactic complexity, trade-offs, verb frequency

Procedia PDF Downloads 144
20685 Wear Progress and -Mechanisms in Torpedo Ladles in Steel Industry

Authors: Mattahias Maj, Fabio Tatzgern, Karl Adam, Damir Kahrimanovic, Markus Varga

Abstract:

Torpedo ladles are essential transport carriages in steel production, moving molten crude iron from the blast furnace to the steel refining plant. This requires the ladles to be resistant to high temperatures and to insulate well, in order to preserve the temperature and keep the risk of solidification at bay. The refractories lining the inside of the torpedo ladles are therefore chosen mostly according to their thermal properties, although wear of the materials by the liquid iron is also of major importance. In this work, we combined investigations of the thermal behaviour with wear studies of the lining over the whole lifetime of a torpedo ladle. Additional numerical simulations enabled a detailed model of the mechanical loads and temperature propagation at the various stations (heating, filling, emptying, cooling). The core of the investigation was a set of detailed 3D measurements of the ladle’s cavity, giving quantitative information on the wear progress at different time intervals during the lifetime of the ladles. The measurements allowed a separation of different wear zones according to severity, namely the “splash zone” where the melt directly hits the ladle, the “melt zone” where liquid melt is always present during transport, and the “slag zone”, where the slag floats on the melt and causes the most severe wear loss. Numerical simulations of the filling process were used to calculate stress levels and temperature gradients, which explain the different onset of wear in these zones. Thermal imaging and spot temperature measurements allowed a study of the thermal consequences of the wear onset. Additional “classical” damage analysis of the worn refractories completes the investigation. Thereby, the wear mechanisms leading to the substantial wear loss were disclosed.

Keywords: high temperature, tribology, liquid-solid interaction, refractories, thermography

Procedia PDF Downloads 210
20684 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO RADAR sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (down to tens of microns) on a time scale that spans from milliseconds to hours. The MIMO feature of the system makes it capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows the observed scene to be sensed with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
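
The 'repeat-pass' interferometric step ultimately converts a phase difference into line-of-sight motion; a tiny sketch of that conversion with an assumed carrier frequency and invented, already-unwrapped phases is given below (tomography, photogrammetric mapping and atmospheric correction are taken as done upstream).

    import numpy as np

    c = 299_792_458.0
    f0 = 17.2e9                       # assumed radar carrier frequency (Hz)
    wavelength = c / f0

    # Hypothetical unwrapped interferometric phases (rad) of one pixel over time
    phase = np.array([0.000, 0.012, 0.025, 0.033, 0.048])

    # Two-way propagation: a 2*pi phase change corresponds to lambda/2 of LOS motion
    los_displacement = phase * wavelength / (4 * np.pi)
    print(np.round(los_displacement * 1e6, 1), "micrometres along the line of sight")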

Keywords: interferometry, MIMO RADAR, SAR, tomography

Procedia PDF Downloads 180
20683 3D Stereoscopic Measurements from AR Drone Squadron

Authors: R. Schurig, T. Désesquelles, A. Dumont, E. Lefranc, A. Lux

Abstract:

A cost-efficient alternative to the use of a single drone carrying multiple cameras is proposed for taking stereoscopic images and videos during flight. Such a drone has to be large enough to take off with its equipment and stable enough to make valid measurements, and corresponding performance for a single aircraft usually comes at a large cost. The proposed solution consists of using multiple smaller and cheaper aircraft, each carrying one camera, instead of a single expensive one. As a proof of concept, AR drones, quad-rotor UAVs from Parrot Inc., are used experimentally.
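
Once two drone images are rectified and a common feature is matched, depth follows from the standard stereo relation; the short sketch below uses invented camera parameters and pixel coordinates, since the paper's calibration data are not given.

    focal_px = 700.0        # focal length in pixels (assumed after calibration)
    baseline_m = 1.5        # distance between the two drone cameras (m)

    x_left, x_right = 412.0, 377.0      # pixel column of the same feature in each image
    disparity = x_left - x_right

    depth_m = focal_px * baseline_m / disparity
    print(f"disparity = {disparity:.1f} px -> depth = {depth_m:.2f} m")
    # With a squadron the baseline is set by flight control rather than a rigid rig,
    # so timing and attitude errors feed directly into the disparity and the depth.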

Keywords: drone squadron, flight control, rotorcraft, Unmanned Aerial Vehicle (UAV), AR drone, stereoscopic vision

Procedia PDF Downloads 460
20682 Governance Networks of China’s Neighborhood Micro-Redevelopment: The Case of Haikou

Authors: Lin Zhang

Abstract:

Neighborhood redevelopment is vital to improve residents’ living environment, and there has been a national neighborhood micro-redevelopment initiative in China since 2020, which is largely different from the previous large-scale demolition and reconstruction projects. Yet, few studies systematically examine the new interactions of multiple actors in this initiative. China’s neighborhood (micro-) redevelopment is a kind of governance network, and the complexity perspective could reflect the dynamic nature of multiple actors and their relationships in governance networks. In order to better understand the fundamental shifts of governance networks in China’s neighborhood micro-redevelopment, this paper adopted a theoretical framework of complexity in governance networks and analyzed the new governance networks of neighborhood micro-redevelopment projects in Haikou accordingly.

Keywords: neighborhood redevelopment, governance, networks, Haikou

Procedia PDF Downloads 75
20681 A Decision Support System for the Detection of Illicit Substance Production Sites

Authors: Krystian Chachula, Robert Nowak

Abstract:

Manufacturing home-made explosives and synthetic drugs is an increasing problem in Europe. To combat that, a data fusion system is proposed for the detection and localization of production sites in urban environments. The data consists of measurements of properties of wastewater performed by various sensors installed in a sewage network. A four-stage fusion strategy allows detecting sources of waste products from known chemical reactions. First, suspicious measurements are used to compute the amount and position of discharged compounds. Then, this information is propagated through the sewage network to account for missing sensors. The next step is clustering and the formation of tracks. Eventually, tracks are used to reconstruct discharge events. Sensor measurements are simulated by a subsystem based on real-world data. In this paper, different discharge scenarios are considered to show how the parameters of used algorithms affect the effectiveness of the proposed system. This research is a part of the SYSTEM project (SYnergy of integrated Sensors and Technologies for urban sEcured environMent).

Keywords: continuous monitoring, information fusion and sensors, internet of things, multisensor fusion

Procedia PDF Downloads 106
20680 Mapping Thermal Properties Using Resistivity, Lithology and Thermal Conductivity Measurements

Authors: Riccardo Pasquali, Keith Harlin, Mark Muller

Abstract:

The ShallowTherm project is focussed on developing and applying a methodology for extrapolating relatively sparsely sampled thermal conductivity measurements across Ireland using mapped Litho-Electrical (LE) units. The primary data used consist of electrical resistivities derived from the Geological Survey Ireland Tellus airborne electromagnetic dataset, GIS-based maps of Irish geology, and rock thermal conductivities derived from both the current Irish Ground Thermal Properties (IGTP) database and a new programme of sampling and laboratory measurement. The workflow has been developed across three case-study areas that sample a range of different calcareous, arenaceous, argillaceous, and volcanic lithologies. Statistical analysis of resistivity data from individual geological formations has been assessed and integrated with detailed lithological descriptions to define distinct LE units. Thermal conductivity measurements from core and hand samples have been acquired for every geological formation within each study area. The variability and consistency of thermal conductivity measurements within each LE unit is examined with the aim of defining a characteristic thermal conductivity (or range of thermal conductivities) for each LE unit. Mapping of LE units, coupled with characteristic thermal conductivities, provides a method of defining thermal conductivity properties at a regional scale and facilitating the design of ground source heat pump closed-loop collectors.
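
The core of the workflow is attaching a characteristic thermal conductivity to each mapped Litho-Electrical unit; a minimal pandas sketch of that summary step is shown below with invented unit names and values.

    import pandas as pd

    # Hypothetical thermal-conductivity samples (W/(m.K)) tagged by Litho-Electrical unit
    samples = pd.DataFrame({
        "LE_unit": ["LE1", "LE1", "LE1", "LE2", "LE2", "LE3", "LE3", "LE3", "LE3"],
        "lambda_W_mK": [2.9, 3.1, 3.0, 2.2, 2.4, 1.7, 1.9, 1.8, 2.0],
    })

    summary = samples.groupby("LE_unit")["lambda_W_mK"].agg(["count", "median", "std"])
    print(summary)
    # The median (with its spread) becomes the characteristic conductivity applied to
    # every mapped polygon of that LE unit when extrapolating across the survey area.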

Keywords: thermal conductivity, ground source heat pumps, resistivity, heat exchange, shallow geothermal, Ireland

Procedia PDF Downloads 165
20679 The Predicted Values of the California Bearing Ratio (CBR) by Using the Measurements of the Soil Resistivity Method (DC)

Authors: Fathi Ali Swaid

Abstract:

The CBR test is widely used in the assessment of granular materials in the base, subbase and subgrade layers of road and airfield pavements. Despite the success of this method, it depends on a limited number of soil samples. This limitation does not adequately account for the spatial variability of soil properties. Thus, assessments derived from such cursory soil data are likely to contain errors, which makes interpretation and soil characterization difficult. On the other hand, quantitative methods of soil inventory at the field scale involve the design and adoption of sampling regimes and laboratory analyses that are time-consuming and costly. In the latter case, new technologies are required to efficiently sample and observe the soil in the field. This is particularly the case where soil bearing capacity is a prevalent concern and detailed quantitative information for determining its cause is required. In this paper, an electrical resistivity (DC) method is described along with its application to the Elg'deem dirt road, located in Gasser Ahmad - Misurata, Libya. Results from the DC instrument were found to be correlated with the CBR values (r2 = 0.89). Finally, it is noted that the correlation can be used, with experience, to determine CBR values from basic soil electrical resistivity measurements, checked by a few CBR tests representing a similar range of CBR.
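
The reported r2 = 0.89 comes from regressing CBR on resistivity; a sketch of how such a calibration could be fitted and then reused is given below, with made-up field values standing in for the study's data.

    import numpy as np

    # Hypothetical paired field data: soil resistivity (ohm.m) and laboratory CBR (%)
    resistivity = np.array([35, 48, 60, 72, 85, 95, 110, 130], dtype=float)
    cbr = np.array([4.5, 5.2, 7.6, 7.9, 10.3, 10.2, 12.9, 13.8])

    slope, intercept = np.polyfit(resistivity, cbr, 1)      # linear calibration
    predicted = slope * resistivity + intercept
    r2 = 1 - np.sum((cbr - predicted) ** 2) / np.sum((cbr - cbr.mean()) ** 2)

    print(f"CBR = {slope:.3f} * resistivity + {intercept:.2f},  r^2 = {r2:.2f}")
    print("predicted CBR at 100 ohm.m:", round(slope * 100 + intercept, 1))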

Keywords: California bearing ratio, basic soil electrical resistivity, CBR, soil, subgrade, new technologies

Procedia PDF Downloads 438
20678 Effect of Pre-Aging and Aging Parameters on Mechanical Behavior of Be-Treated 7075 Aluminum Alloys: Experimental Correlation using Minitab Software

Authors: M. Tash, S. Alkahtani

Abstract:

The present study was undertaken to investigate the effect of pre-aging and aging parameters (time and temperature) on the mechanical properties of Al-Mg-Zn (7075) alloys. Ultimate tensile strength, 0.5% offset yield strength and % elongation measurements were carried out on specimens prepared from cast and heat-treated 7075 alloys. Duplex aging treatments were carried out on the as-solution-treated (SHT) specimens (pre-aged at different times and temperatures, followed by high-temperature aging). A statistical design of experiments (DOE) approach using a fractional factorial design was applied to determine the influence of the controlling pre-aging and aging treatment parameters, and of any interactions between them, on the mechanical properties of 7075 alloys. Mathematical models are developed to relate the alloy ultimate tensile strength, yield strength and % elongation to the different pre-aging and aging parameters, i.e. pre-aging temperature (PA T, °C), pre-aging time (PA t, h), aging temperature (AT, °C) and aging time (At, h), in order to acquire an understanding of the effects of these variables and their interactions on the mechanical properties of Be-treated 7075 alloys.
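
The Minitab analysis referred to in the title boils down to fitting a linear model in coded factors over a fractional factorial design; the sketch below builds a hypothetical two-level 2^(4-1) design (generator D = ABC) for the four parameters named above and fits main effects for the tensile strength by least squares, with invented responses.

    import numpy as np
    from itertools import product

    # 2^(4-1) fractional factorial in coded units (-1/+1), generator D = A*B*C
    runs = [(a, b, c, a * b * c) for a, b, c in product((-1, 1), repeat=3)]
    X = np.array(runs, dtype=float)          # columns: PA T, PA t, AT, At (coded)

    # Hypothetical ultimate tensile strength responses (MPa) for the eight runs
    uts = np.array([505, 512, 498, 530, 517, 541, 509, 552], dtype=float)

    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, uts, rcond=None)   # intercept + main effects
    for name, c in zip(("intercept", "PA T", "PA t", "AT", "At"), np.round(coef, 2)):
        print(f"{name:>9}: {c}")
    # Doubling a coefficient gives the classical low-to-high 'effect' of that factor;
    # in this half-fraction the main effects are aliased with higher-order interactions.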

Keywords: aging heat treatment, tensile properties, Be-treated cast Al-Mg-Zn (7075) alloys, experimental correlation

Procedia PDF Downloads 259
20677 Polydimethylsiloxane Applications in Interferometric Optical Fiber Sensors

Authors: Zeenat Parveen, Ashiq Hussain

Abstract:

This review paper covers applications of PDMS (polydimethylsiloxane) materials for enhanced-performance optical fiber sensors in acousto-ultrasonic and mechanical measurements, current sensing applications, and interferometric optical fiber sensors. We discuss the basic working principle of fiber optic sensing technology, the various types of optical fiber, and PDMS as a coating material to increase performance. Optical fiber sensing methods are reviewed for detecting dynamic strain signals, including general sound and acoustic signals, high-frequency (ultrasonic/ultrasound) signals, and other signals such as acoustic emission and impact-induced dynamic strain. Optical fiber sensors have industrial and civil engineering applications in mechanical measurements, sometimes requiring different sensor configurations and parameters. Optical fiber current sensors are based on the Faraday effect, owing to which they achieve better performance than the conventional current transformer. Recent advancements and cost reductions have stimulated interest in optical fiber sensing. Optical techniques are also implemented in material measurement. Fiber optic interferometers are used to sense various physical parameters including temperature, pressure and refractive index. There are four types of interferometers, i.e. Fabry–Perot, Mach-Zehnder, Michelson, and Sagnac. This paper also describes future work on fiber optic sensors.

Keywords: fiber optic sensing, PDMS materials, acoustic, ultrasound, current sensor, mechanical measurements

Procedia PDF Downloads 378
20676 Non-Invasive Viscosity Determination of Liquid Organic Hydrogen Carriers by Alteration of Temperature and Flow Velocity Using Cavity Based Permittivity Measurement

Authors: I. Wiemann, N. Weiß, E. Schlücker, M. Wensing, A. Kölpin

Abstract:

Chemical storage of hydrogen in liquid organic hydrogen carriers (LOHC) is a very promising alternative to compression or cryogenics. These carriers have a high energy density and at the same time allow efficient and safe storage of hydrogen under ambient conditions and without leakage losses. Another benefit of LOHC is the possibility of transporting it using the infrastructure already available for the transport of fossil fuels. Efficient use of LOHC relies on precise process control, which requires a number of sensors in order to measure all relevant process parameters, for example, the level of hydrogen loading of the carrier. The degree of loading determines the energy content of the storage carrier and simultaneously represents the modification in the chemical structure of the carrier molecules. This variation can be detected in different physical properties such as viscosity, permittivity or density. Each degree of loading thereby corresponds to a different viscosity value. Conventional approaches currently use invasive viscosity measurements or near-line measurements to obtain quantitative information. Avoiding invasive measurements has several substantial advantages, and efforts are currently being made to provide a precise, non-invasive measurement method with equal or higher precision of the obtained results. This study investigates a method for determining the viscosity of LOHC. Since the viscosity can be derived retroactively from the degree of loading, permittivity is a target parameter, as it is suitable for determining the degree of hydrogenation. This research analyses the influence of common physical properties on permittivity. The permittivity measurement system is based on a cavity resonator, an electromagnetic resonant structure whose resonance frequency depends on its dimensions as well as on the permittivity of the medium inside. For known resonator dimensions, the resonance frequency directly characterizes the permittivity. In order to determine the dependence of the permittivity on temperature and flow velocity, an experimental setup with a heating device and a flow test bench was designed. By varying the temperature in the range of 293.15 K to 393.15 K and the flow velocity up to 140 mm/s, the corresponding changes in the resonance frequency were measured, in the hundredths of a GHz range.
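
For a cavity completely filled with a low-loss dielectric, the resonance frequency scales with 1/sqrt(eps_r), so a frequency shift maps directly to permittivity and, via a calibration, to the degree of hydrogenation and hence viscosity; a minimal sketch with invented frequencies follows.

    # Resonance of the empty reference cavity vs. the same cavity filled with the carrier
    f_empty = 5.800e9      # Hz, hypothetical empty-cavity resonance frequency
    f_filled = 3.962e9     # Hz, hypothetical resonance with the LOHC inside

    # Fully filled low-loss cavity: f ~ 1/sqrt(eps_r)  =>  eps_r = (f_empty / f_filled)^2
    eps_r = (f_empty / f_filled) ** 2
    print(f"relative permittivity ~ {eps_r:.2f}")
    # A laboratory calibration curve would then link this permittivity to the degree of
    # hydrogenation, from which the viscosity at the measured temperature is looked up.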

Keywords: liquid organic hydrogen carriers, measurement, permittivity, viscosity, temperature, flow process

Procedia PDF Downloads 77
20675 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders occur randomly at any time, and pre-emptive methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres, and each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms or simulated annealing, have been proposed to solve the static Flexible Job Shop problem. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and the flexible job shop are selected to describe the status of the dynamic Flexible Job Shop problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select proper composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The results of the simulations show that the proposed framework has reasonable performance and time efficiency.
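
A double-layer deep Q-network is too large for a short sketch; the toy example below keeps only the core idea of learning which dispatching rule to apply at a work centre from a coarse state feature, using tabular epsilon-greedy Q-learning over two rules (SPT and EDD) and a negative-tardiness reward. It is a deliberately simplified stand-in for the proposed framework, not a reproduction of it.

    import random

    RULES = ("SPT", "EDD")

    def make_jobs(tight, n=6):
        # Random jobs as [processing time, due date] with tight or loose due-date slack
        jobs, t = [], 0
        for _ in range(n):
            p = random.randint(1, 9)
            slack = random.randint(0, 3) if tight else random.randint(15, 40)
            jobs.append([p, t + p + slack])
            t += p // 2
        return jobs

    def pick(jobs, rule):
        # SPT: shortest processing time first; EDD: earliest due date first
        jobs.sort(key=(lambda j: j[0]) if rule == "SPT" else (lambda j: j[1]))
        return jobs.pop(0)

    def run_episode(q, eps=0.1, alpha=0.1, gamma=0.9):
        tight = random.random() < 0.5
        state = int(tight)                       # coarse state: are due dates tight?
        jobs, clock = make_jobs(tight), 0
        while jobs:
            if random.random() < eps:
                action = random.randrange(len(RULES))
            else:
                action = max(range(len(RULES)), key=lambda i: q[state][i])
            p, due = pick(jobs, RULES[action])
            clock += p
            reward = -max(0, clock - due)        # negative tardiness of the finished job
            target = reward + (gamma * max(q[state]) if jobs else 0.0)
            q[state][action] += alpha * (target - q[state][action])

    random.seed(0)
    q = [[0.0, 0.0], [0.0, 0.0]]                 # Q[state][rule]
    for _ in range(5000):
        run_episode(q)
    for s, label in enumerate(("loose due dates", "tight due dates")):
        best = RULES[max(range(len(RULES)), key=lambda i: q[s][i])]
        print(f"{label}: Q = {[round(v, 1) for v in q[s]]}, greedy rule = {best}")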

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 91