Search results for: discrete sampling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3532


3262 Investigation of Single Particle Breakage inside an Impact Mill

Authors: E. Ghasemi Ardi, K. J. Dong, A. B. Yu, R. Y. Yang

Abstract:

In the current work, a numerical model based on the discrete element method (DEM) was developed to provide information about particle dynamics and impact event conditions inside a laboratory-scale impact mill (Fritsch). It showed that each particle mostly experiences three impacts inside the mill. While the first impact frequently happens at the front surface of the rotor's rib, the second impact most often occurs at the side surfaces of the rotor's rib. It was also shown that while the first impact happens at a small impact angle, mostly varying around 35º, the second impact happens at around 70º, which is close to a normal impact condition. Analysis of the impact energy revealed that, as mill speed varied from 6000 to 14000 rpm, the ratio of the first impact's average impact energy to the minimum energy required to break a particle (Wₘᵢₙ) increased from 0.30 to 0.85. Moreover, the second impact imposes intense impact energy on the particle, which can be considered the main cause of particle splitting. Finally, information obtained from the DEM simulation, along with data from the conducted experiments, was implemented in semi-empirical equations in order to find the selection and breakage functions. Then, using a back-calculation approach, those parameters were used to predict the PSDs of ground particles under different impact energies. The results were compared with experimental results and showed reasonable accuracy and predictive ability.
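The selection/breakage formalism can be sketched as a single discrete population-balance step. The size classes, selection values `S` and breakage matrix `B` below are illustrative placeholders, not the fitted values from the study:

```python
import numpy as np

# Illustrative one-impact population balance step (hypothetical numbers).
feed = np.array([1.0, 0.0, 0.0, 0.0])   # mass fractions in 4 size classes, coarse to fine
S = np.array([0.6, 0.4, 0.2, 0.0])      # selection function: probability each class breaks
# Breakage function: B[i, j] = fraction of broken class-j mass reporting to class i.
# Columns for breakable classes sum to 1, so mass is conserved.
B = np.array([
    [0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.0],
    [0.3, 0.6, 0.0, 0.0],
    [0.2, 0.4, 1.0, 0.0],
])

# product PSD = surviving (unbroken) mass + redistributed fragments
product = (1 - S) * feed + B @ (S * feed)
```

Back-calculation then amounts to adjusting `S` and `B` until `product` matches the measured PSD at each impact energy.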

Keywords: single particle breakage, particle dynamic, population balance model, particle size distribution, discrete element method

Procedia PDF Downloads 271
3261 Multi-Layer Perceptron and Radial Basis Function Neural Network Models for Classification of Diabetic Retinopathy Disease Using Video-Oculography Signals

Authors: Ceren Kaya, Okan Erkaymaz, Orhan Ayar, Mahmut Özer

Abstract:

Diabetes Mellitus (Diabetes) is a disease based on insulin hormone disorders and causes high blood glucose. Clinical findings indicate that diabetes can be diagnosed from electrophysiological signals obtained from the vital organs. 'Diabetic Retinopathy' is one of the most common eye diseases resulting from diabetes, and it is the leading cause of vision loss due to structural alteration of the retinal layer vessels. In this study, features of horizontal and vertical Video-Oculography (VOG) signals have been used to classify non-proliferative and proliferative diabetic retinopathy disease. Twenty-five features are extracted by applying the discrete wavelet transform to VOG signals taken from 21 subjects. Two models, based on the multi-layer perceptron and the radial basis function, are proposed for the diagnosis of Diabetic Retinopathy. The proposed models can also detect the level of the disease. We show the comparative classification performance of the proposed models. Our results show that the proposed RBF model (100%) achieves better classification performance than the MLP model (94%).
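As a minimal sketch of the feature-extraction step (the abstract does not name the mother wavelet, so an orthonormal Haar filter is assumed here; subband energies are one common choice of wavelet feature):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    cA = (even + odd) / np.sqrt(2.0)
    cD = (even - odd) / np.sqrt(2.0)
    return cA, cD

def wavelet_features(signal, levels=3):
    """Energy of the detail coefficients at each level plus the final approximation energy."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))
    feats.append(np.sum(a ** 2))
    return np.array(feats)
```

Because the Haar filter bank is orthonormal, the subband energies sum to the signal energy, which makes a handy sanity check on the decomposition.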

Keywords: diabetic retinopathy, discrete wavelet transform, multi-layer perceptron, radial basis function, video-oculography (VOG)

Procedia PDF Downloads 238
3260 Usage of Channel Coding Techniques for Peak-to-Average Power Ratio Reduction in Visible Light Communications Systems

Authors: P. L. D. N. M. de Silva, S. G. Edirisinghe, R. Weerasuriya

Abstract:

High peak-to-average power ratio (PAPR) is a concern in orthogonal frequency division multiplexing (OFDM) based visible light communication (VLC) systems. Discrete Fourier Transform spread (DFT-s) OFDM is an alternative single-carrier modulation scheme that addresses this concern. Employing channel coding techniques is another mechanism to reduce the PAPR. Previous research has studied the impact of these techniques separately. However, to the best of the authors' knowledge, no study has so far identified the improvement that can be harnessed by hybridizing these two techniques in VLC systems; this is therefore a novel study area of this research. In addition, while channel coding techniques such as Polar codes and Turbo codes have been tested in the VLC domain, other efficient techniques such as Hamming coding and convolutional coding have not yet been studied. Therefore, the authors present the impact of the hybrid of DFT-s OFDM and channel coding (Hamming coding and convolutional coding) on PAPR in VLC systems using Matlab simulations.
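Why DFT-s OFDM lowers PAPR can be sketched with a toy numpy comparison; the QPSK mapping, 64 subcarriers and full-band allocation are assumptions for illustration, not the paper's simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of subcarriers (illustrative)
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

def papr_db(x):
    """Peak-to-average power ratio of a discrete-time signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

ofdm = np.fft.ifft(qpsk) * np.sqrt(N)   # conventional OFDM time-domain signal
dft_s = np.fft.ifft(np.fft.fft(qpsk))   # DFT precoding cancels the IFFT here
# With full-band allocation, DFT-s OFDM degenerates to single-carrier QPSK,
# which is constant-modulus, hence a PAPR near 0 dB.
```

Channel coding reshapes the symbol statistics before this stage, which is where the hybrid effect studied in the paper enters.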

Keywords: convolutional coding, discrete Fourier transform spread orthogonal frequency division multiplexing, Hamming coding, peak-to-average power ratio, visible light communications

Procedia PDF Downloads 138
3259 Optical Signal-To-Noise Ratio Monitoring Based on Delay Tap Sampling Using Artificial Neural Network

Authors: Feng Wang, Shencheng Ni, Shuying Han, Shanhong You

Abstract:

With the development of optical communication, optical performance monitoring (OPM) has received more and more attention. Since the optical signal-to-noise ratio (OSNR) is directly related to the bit error rate (BER), it is one of the important parameters in optical networks. Recently, artificial neural networks (ANNs), which have strong learning and generalization ability, have been greatly developed. In this paper, a method of OSNR monitoring based on delay-tap sampling (DTS) and an ANN is proposed. The DTS technique is used to extract the features of the signal, which are then input into the ANN to realize OSNR monitoring. Experiments on 10 Gb/s non-return-to-zero (NRZ) on-off keying (OOK), 20 Gb/s pulse amplitude modulation (PAM4) and 20 Gb/s return-to-zero (RZ) differential phase-shift keying (DPSK) systems demonstrate OSNR monitoring based on the proposed method. The experimental results show that the range of OSNR monitoring is from 15 to 30 dB and that the root-mean-square errors (RMSEs) for the 10 Gb/s NRZ-OOK, 20 Gb/s PAM4 and 20 Gb/s RZ-DPSK systems are 0.36 dB, 0.45 dB and 0.48 dB respectively. The impact of chromatic dispersion (CD) on the accuracy of OSNR monitoring is also investigated in the three experimental systems mentioned above.
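A minimal sketch of the delay-tap sampling step: each sample is paired with a delayed copy of itself, and the resulting two-tap portrait is summarized into a feature vector. Representing the portrait as a binned histogram is an assumption for illustration, not necessarily the authors' exact feature set:

```python
import numpy as np

def delay_tap_pairs(signal, delay):
    """Pair each sample with the sample `delay` steps later (the DTS portrait)."""
    signal = np.asarray(signal, dtype=float)
    return np.column_stack((signal[:-delay], signal[delay:]))

def dts_features(signal, delay, bins=4):
    """Flattened 2-D histogram of the tap pairs, usable as an ANN input vector."""
    pairs = delay_tap_pairs(signal, delay)
    hist, _, _ = np.histogram2d(pairs[:, 0], pairs[:, 1], bins=bins, density=True)
    return hist.ravel()
```

The shape of the portrait changes with noise loading, which is what lets the trained network map these features to an OSNR estimate.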

Keywords: artificial neural network (ANN), chromatic dispersion (CD), delay-tap sampling (DTS), optical signal-to-noise ratio (OSNR)

Procedia PDF Downloads 91
3258 Signal Estimation and Closed Loop System Performance in Atrial Fibrillation Monitoring with Communication Channels

Authors: Mohammad Obeidat, Ayman Mansour

Abstract:

In this paper, a unique issue arising from feedback control of an atrial fibrillation monitoring system with embedded communication channels is investigated. One of the important factors in measuring the performance of a closed-loop feedback control system is its disturbance and noise attenuation. It is important that the feedback system can attenuate such disturbances on the atrial fibrillation heart rate signals. Communication channels depend on network traffic conditions and deliver different throughput, implying that the sampling intervals may change. Since the signal estimate is updated on the arrival of new data, its dynamics actually change with the sampling interval. Consequently, the interaction among sampling, signal estimation, and the controller introduces new issues in a remotely controlled atrial fibrillation system. This paper treats a remotely controlled atrial fibrillation system with one communication channel, which connects the heart rate and rhythm measurements to the remote controller. A typical and optimal signal estimation scheme is represented by a signal averaging filter with its time constant derived from the step size of the signal estimation algorithm.
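The signal-averaging filter described above can be sketched as a first-order recursive estimator whose step size sets the effective time constant (roughly the sampling interval divided by the step size). This is a generic sketch, not the authors' exact scheme:

```python
def ema_estimate(samples, alpha):
    """First-order signal-averaging filter: y <- y + alpha * (x - y).

    The effective time constant is approximately Ts / alpha, so when the
    communication channel changes the sampling interval Ts, the estimator's
    dynamics change with it, which is the interaction the paper studies.
    """
    y = samples[0]
    out = []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out
```

A step-input check shows the smoothing behaviour: the estimate converges to the new level at a rate set by `alpha`.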

Keywords: atrial fibrillation, communication channels, closed loop, estimation

Procedia PDF Downloads 364
3257 An Exploratory Study of Effects of Parenting Styles on Maternal Expectation and Perception of Compliance among Adolescents

Authors: Anton James

Abstract:

This study explored the contribution of parenting styles to the Maternal Perception of Compliance Model (MPCM). This model explores maternal expectations to illustrate the formation of the maternal perception of the severity of noncompliance in adolescent children. The methodology consisted of three stages: in the first stage, a focus group was held, and the data were analysed to fine-tune the interview schedule. In the second stage, a single interview was held, and the interview schedule was further modified. The third and final stage consisted of interviewing six mothers who had adolescent children. They were chosen with a ‘maximum variation’ approach to represent three tiered socioeconomic statuses and Asian, white and black ethnicities. The data were thematically analysed in a hybrid fashion: inductive coding and deductive assignment of codes to discrete parenting styles. The study found that: a) parenting styles are not always discrete and can sometimes be mixed; b) parenting styles are influenced by culture, socioeconomic status, transgenerational knowledge, academic knowledge, observational knowledge, self-reflective knowledge, and parental anxiety; and c) the parenting style functioned as a mediating mechanism that attempted to converge discrepancies between parental expectations of compliance and the maternal perception of the severity of noncompliance. The findings on parenting styles were discussed in relation to the MPCM.

Keywords: compliance, expectation, parenting styles, perception

Procedia PDF Downloads 763
3256 The Effect of Kangaroo Mother Care and Swaddling Method on Venipuncture Pain in Premature Infant: Randomized Clinical Trials

Authors: Faezeh Jahanpour, Shahin Dezhdar, Saeedeh Firouz Bakht, Afshin Ostovar

Abstract:

Objective: Hospitalized premature babies often undergo various painful procedures such as venous sampling. The Kangaroo mother care (KMC) method is one of the pain reduction methods, but as the mother's presence is not always possible, this research was done to compare the effect of the swaddling and KMC methods on venous sampling pain in premature neonates. Methods: In this randomized clinical trial, 90 premature infants were selected and randomly allocated into three groups: Group A (swaddling), Group B (kangaroo care), and Group C (control). From 10 minutes before blood sampling to 2 minutes after it, in Group A the infant was wrapped in a thin sheet, and in Group B the infant was under kangaroo care. In all three groups, the heart rate and arterial oxygen saturation at time intervals of 30 seconds before, during, and 30, 60, 90, and 120 seconds after sampling were measured and recorded. The infant's face was video recorded from the start of sampling until 2 minutes afterwards, and the videos were checked by a researcher who was unaware of the kind of intervention; the Premature Infant Pain Profile (PIPP) scores for 30-second intervals were completed. Data were analyzed by t-test, chi-square, repeated-measures ANOVA, Kruskal-Wallis, post-hoc and Bonferroni tests. Results: The findings revealed that pain was reduced to a great extent in the swaddling and kangaroo methods compared to the control group, but there was not a significant difference between the kangaroo and swaddling care methods (P ≥ 0.05). In addition, the findings showed that the heart rate and arterial oxygen saturation were lower and more stable in the swaddling and kangaroo care methods and returned to baseline status faster, whereas the changes were severe in the control group and did not return to baseline status even after 120 seconds. Discussion: The results of this study showed that there was not a meaningful difference between the swaddling and kangaroo care methods in terms of physiological indexes and pain in infants. Therefore, the swaddling method can be a good substitute for the kangaroo care method in this regard.

Keywords: Kangaroo mother care, neonate, pain, premature, swaddling, venipuncture

Procedia PDF Downloads 196
3255 Bayesian Analysis of Change Point Problems Using Conditionally Specified Priors

Authors: Golnaz Shahtahmassebi, Jose Maria Sarabia

Abstract:

In this talk, we introduce a new class of conjugate prior distributions obtained from the conditional specification methodology. We illustrate the application of such distributions to Bayesian change point detection in Poisson processes. We obtain the posterior distribution of the model parameters using a general bivariate distribution with gamma conditionals. Simulation from the posterior is readily implemented using a Gibbs sampling algorithm, which can be applied even when the conditional densities are incompatible or compatible only with an improper joint density. The application of such methods is demonstrated using examples of simulated and real data.
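A sketch of such a Gibbs sampler for a single Poisson change point with gamma conditionals; the simulated counts, unit hyperparameters and chain length are assumptions for illustration, not the talk's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated counts with a true change point at t = 40 (illustrative data).
y = np.concatenate([rng.poisson(2.0, 40), rng.poisson(6.0, 30)])
n, cum = len(y), np.cumsum(y)
a, b = 1.0, 1.0          # gamma prior hyperparameters (assumed)
ks = np.arange(1, n)     # candidate change points

m, draws = n // 2, []
for it in range(2000):
    # Gamma full conditionals for the rates before and after the change point.
    lam1 = rng.gamma(a + cum[m - 1], 1.0 / (b + m))
    lam2 = rng.gamma(a + cum[-1] - cum[m - 1], 1.0 / (b + n - m))
    # Discrete full conditional of the change point (log scale for stability).
    logp = (cum[ks - 1] * np.log(lam1) - ks * lam1
            + (cum[-1] - cum[ks - 1]) * np.log(lam2) - (n - ks) * lam2)
    p = np.exp(logp - logp.max())
    m = int(rng.choice(ks, p=p / p.sum()))
    if it >= 500:        # discard burn-in
        draws.append(m)

posterior_mode = int(np.bincount(draws).argmax())
```

With a clear rate change in the data, the posterior draws of the change point concentrate near the true location.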

Keywords: change point, Bayesian inference, Gibbs sampler, conditional specification, gamma conditional distributions

Procedia PDF Downloads 171
3254 Simulation IDM for Schedule Generation of Slip-Form Operations

Authors: Hesham A. Khalek, Shafik S. Khoury, Remon F. Aziz, Mohamed A. Hakam

Abstract:

The linearity of slip-forming operations is a source of planning complications, and the operation is usually subject to bottlenecks at any point, so careful planning is required in order to achieve success. On the other hand, discrete-event simulation (DES) concepts can be applied to simulate and analyze construction operations and to efficiently support construction scheduling. Nevertheless, the preparation of input data for construction simulation is very challenging, time-consuming and prone to human error. Therefore, to enhance the benefits of using DES in construction scheduling, this study proposes an integrated module that establishes a framework for automating the generation of time schedules and decision support for slip-form construction projects, particularly through the project feasibility study phase, by exchanging data between project data stored in an intermediate database, the DES and the scheduling software. Using the stored information, the proposed system creates construction task attributes [e.g. activity durations, material quantities and resource amounts]; the DES then uses all the given information to create a proposal for the construction schedule automatically. This research is considered a demonstration of a flexible slip-form project modeling, rapid scenario-based planning and schedule generation approach that may be of interest to both practitioners and researchers.
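The schedule-generation core can be sketched as a tiny discrete-event engine driven by a priority queue of completion events; the task names, durations and precedence links below are hypothetical, whereas the real module reads them from the intermediate database:

```python
import heapq

def simulate_schedule(tasks):
    """Minimal discrete-event scheduler.

    tasks: {name: (duration, [predecessor names])}
    Returns the simulated finish time of each task; a task starts as soon as
    its last predecessor completes.
    """
    remaining = {t: set(preds) for t, (_, preds) in tasks.items()}
    finish, events = {}, []
    for t, deps in remaining.items():
        if not deps:                                  # no predecessors: start at t=0
            heapq.heappush(events, (tasks[t][0], t))
    while events:
        clock, t = heapq.heappop(events)              # next completion event
        finish[t] = clock
        for s, deps in remaining.items():
            if t in deps:
                deps.discard(t)
                if not deps:                          # all predecessors done
                    heapq.heappush(events, (clock + tasks[s][0], s))
    return finish
```

A three-task slip-form chain (erect forms, pour, slip) finishes at the sum of the durations, as expected for a purely serial sequence.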

Keywords: discrete-event simulation, modeling, construction planning, data exchange, scheduling generation, EZstrobe

Procedia PDF Downloads 356
3253 Attention Problems among Adolescents: Examining Educational Environments

Authors: Zhidong Zhang, Zhi-Chao Zhang, Georgianna Duarte

Abstract:

This study investigated attention problems using the Achenbach System of Empirically Based Assessment (ASEBA). Two thousand eight hundred and ninety-four adolescents were surveyed using a stratified sampling method. We examined the relationships between relevant background variables and attention problems, and multiple regression models were applied to analyze the data. Relevant variables such as sports activities, hobbies, age, grade and the number of close friends were included in this study as predictive variables. The results indicated that educational environments and extracurricular activities are important factors influencing students' attention problems.

Keywords: adolescents, ASEBA, attention problems, educational environments, stratified sampling

Procedia PDF Downloads 255
3252 Post-Application Effects of Selected Management Strategies to the Citrus Nematode (Tylenchulus semipenetrans) Population Densities

Authors: Phatu William Mashela, Pontsho Edmund Tseke, Kgabo Martha Pofu

Abstract:

‘Inconsistent results’ in nematode suppression post-application of botanical-based products created credibility concerns. Relative to the untreated control, sampling for nematodes post-application of botanical-based products suggested significant increases in nematode population densities. These ‘inconsistent results’ were confirmed in Tylenchulus semipenetrans on Citrus jambhiri seedlings when sampling was carried out at 120 days post-application of a granular Nemarioc-AG phytonematicide. The objective of this study was to determine the post-application effects of an untreated control, Nemarioc-AG phytonematicide and aldicarb on T. semipenetrans population densities on C. jambhiri seedlings. Two hundred and ten seedlings were each inoculated with 10000 T. semipenetrans eggs and second-stage juveniles (J2) in plastic pots containing 2700 ml of growing mixture. A week after inoculation, the seedlings were split equally and subjected to a once-off treatment of 2 g aldicarb, 2 g Nemarioc-AG phytonematicide or the untreated control. Five seedlings from each group were randomly placed on greenhouse benches to serve as a sampling block, with a total of 14 blocks. An entire block was sampled weekly and assessed for final nematode population density (Pf). After the final assessment, regression of the untreated Pf against increasing sampling intervals exhibited positive quadratic relations, with the model explaining 90% of the associations, with an optimum Pf of 13804 eggs and J2 at six weeks post-application. In contrast, the treated Pf and increasing sampling interval exhibited negative quadratic relations, with the model explaining 95% and 92% of the associations in the phytonematicide and aldicarb treatments, respectively. In the phytonematicide, Pf was 974 eggs and J2, whereas that in aldicarb was 2205 eggs and J2 at six weeks. In conclusion, temporal cyclic nematode population growth provided an empirically-based explanation of the ‘inconsistent results’ in nematode suppression post-application of the two nematode management strategies.
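The quadratic Pf-versus-time regression can be reproduced in outline with `np.polyfit`; the synthetic densities below merely mimic the reported trend (an optimum near week six), they are not the measured data:

```python
import numpy as np

weeks = np.arange(1, 15, dtype=float)           # 14 weekly sampling blocks
# Hypothetical densities following the reported positive quadratic trend,
# peaking at week 6 (the curvature value -150 is arbitrary).
pf = -150.0 * (weeks - 6.0) ** 2 + 13804.0

coef = np.polyfit(weeks, pf, 2)                 # quadratic fit: a*w^2 + b*w + c
optimum_week = -coef[1] / (2 * coef[0])         # vertex of the fitted parabola
```

The sign of the leading coefficient distinguishes the untreated (positive quadratic, rising to an optimum) from the treated (negative quadratic) responses described in the abstract.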

Keywords: nematode management, residual effect, slow decline of citrus, the citrus nematode

Procedia PDF Downloads 224
3251 An Investigation of Interdisciplinary Techniques for Assessment of Water Quality in an Industrial Area

Authors: Priti Saha, Biswajit Paul

Abstract:

Rapid urbanization and industrialization have increased the demand for groundwater. However, the present era has witnessed an enormous level of groundwater pollution. Therefore, water quality assessment is of paramount importance in evaluating its suitability for drinking, irrigation and industrial use. This study evaluates the groundwater quality of an industrial city in eastern India through interdisciplinary techniques. The multi-purpose Water Quality Index (WQI) assessed the suitability for drinking as well as irrigation of forty sampling locations, where 2.5% and 15% of sampling locations have excellent water quality (WQI: 0-25) and 15% and 40% have good quality (WQI: 25-50), representing suitability for drinking and irrigation respectively. The industrial water quality, however, was assessed through the Ryznar Stability Index (RSI), which affirmed that only 2.5% of sampling locations have neither corrosive nor scale-forming properties (RSI: 6.2-6.8). These techniques, integrated with a geographical information system (GIS) for spatial assessment, endorsed their effectiveness in identifying the regions where the water bodies are suitable for drinking, irrigation and industrial activities. Further, the sources of the contaminants were identified through factor analysis (FA), which revealed that both geogenic and anthropogenic sources were responsible for the groundwater pollution. This research demonstrates the effectiveness of statistical and GIS techniques for the analysis of environmental contaminants.
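The drinking/irrigation assessment rests on a weighted index; a common weighted-arithmetic WQI formulation is sketched below (the paper does not state which variant it used, so this is an assumption, and the parameter values in the test are hypothetical):

```python
def water_quality_index(values, standards, ideals):
    """Weighted-arithmetic WQI.

    values    - measured concentrations of each parameter
    standards - permissible limits (e.g. drinking-water standards)
    ideals    - ideal values (often 0, or 7.0 for pH)

    Weights are inversely proportional to the standards, so stricter
    parameters count more; WQI = 100 when every value sits at its standard.
    """
    k = 1.0 / sum(1.0 / s for s in standards)
    weights = [k / s for s in standards]
    q = [100.0 * (v - v0) / (s - v0) for v, s, v0 in zip(values, standards, ideals)]
    return sum(w * qi for w, qi in zip(weights, q))
```

By construction the weights sum to one, so a sample exactly at the permissible limits scores 100, the usual boundary between "good" and "poor" classes in this family of indices.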

Keywords: groundwater, water quality analysis, water quality index, WQI, factor analysis, FA, spatial assessment

Procedia PDF Downloads 175
3250 From Sampling to Sustainable Phosphate Recovery from Mine Waste Rock Piles

Authors: Hicham Amar, Mustapha El Ghorfi, Yassine Taha, Abdellatif Elghali, Rachid Hakkou, Mostafa Benzaazoua

Abstract:

Phosphate mine waste rock (PMWR) generated during ore extraction is continuously increasing, resulting in a significant environmental footprint. The main objectives of this study are i) elaboration of a sampling strategy for the PMWR piles, ii) mineralogical and chemical characterization of the PMWR piles, and iii) creation of a 3D block model to evaluate the potential valorization of the existing PMWR. Destructive drilling using reverse circulation from 13 drill holes was used to collect samples for chemical (X-ray fluorescence) and mineralogical assays. The 3D block model was created from the data set, including the chemical data of the realized drill holes, using Datamine RM software. Optical microscopy observations showed that the sandy phosphate from the drill holes in the PMWR piles is characterized by an abundance of carbonate fluorapatite with the presence of calcite, dolomite, and quartz. The mean grade of the composite samples was around 19.5±2.7% P₂O₅. The mean grade of P₂O₅ exhibited an increasing tendency along the depth profile from the bottom to the top of the PMWR piles. The 3D block model generated with the chemical data confirmed this tendency in the mean grade variation and may allow potential selective extraction according to %P₂O₅, confirming it as an efficient sampling approach. This integrated approach to PMWR management will be a helpful decision-making tool for recovering the residual phosphate, adopting the circular economy and sustainability in the phosphate mining industry.

Keywords: 3D modelling, reverse circulation drilling, circular economy, phosphate mine waste rock, sampling

Procedia PDF Downloads 51
3249 Starting Order Eight Method Accurately for the Solution of First Order Initial Value Problems of Ordinary Differential Equations

Authors: James Adewale, Joshua Sunday

Abstract:

In this paper, we developed a linear multistep method, which is implemented in predictor-corrector mode. The corrector is developed by the method of collocation and interpolation of power series approximate solutions at some selected grid points, to give a continuous linear multistep method, which is evaluated at some selected grid points to give a discrete linear multistep method. The predictors were also developed by the method of collocation and interpolation of power series approximate solutions, to give a continuous linear multistep method. The continuous linear multistep method is then solved for the independent solution to give a continuous block formula, which is evaluated at some selected grid points to give a discrete block method. The basic properties of the corrector were investigated, and it was found to be zero stable, consistent and convergent. The efficiency of the method was tested on some linear, non-linear, oscillatory and stiff first-order initial value problems of ordinary differential equations. The results were found to be better in terms of computer time and error bound when compared with existing methods.
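A minimal instance of the predictor-corrector idea, far simpler than the order-eight block method developed in the paper: a two-step Adams-Bashforth predictor followed by a trapezoidal (Adams-Moulton) corrector in PECE mode, started with one Euler step:

```python
def pece(f, y0, t0, t1, n):
    """Two-step Adams-Bashforth predictor with trapezoidal corrector (PECE).

    Solves y' = f(t, y), y(t0) = y0 on [t0, t1] with n uniform steps.
    Returns the lists of grid points and approximate solution values.
    """
    h = (t1 - t0) / n
    ts, ys = [t0, t0 + h], [y0, y0 + h * f(t0, y0)]   # Euler step to start
    for _ in range(1, n):
        t, y = ts[-1], ys[-1]
        fn, fm = f(t, y), f(ts[-2], ys[-2])
        yp = y + h * (1.5 * fn - 0.5 * fm)            # Predict (AB2)
        yc = y + 0.5 * h * (fn + f(t + h, yp))        # Evaluate + Correct (AM2)
        ts.append(t + h)
        ys.append(yc)
    return ts, ys
```

On the linear test problem y' = -y, y(0) = 1, the scheme reproduces e⁻¹ at t = 1 to well within its second-order accuracy for h = 0.01.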

Keywords: predictor, corrector, collocation, interpolation, approximate solution, independent solution, zero stable, consistent, convergent

Procedia PDF Downloads 478
3248 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of under-sampled photoplethysmograms using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz and 10 Hz sampling frequencies. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and then compared their feature locations with those of the 10 kHz-sampled photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. The results showed that the time differences were dramatically decreased by interpolation, with location errors of less than 1 ms for both feature types. In the 10 Hz-sampled case, the location error also decreased considerably; however, it was still over 10 ms.
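The restoration step can be sketched with a cubic Hermite (Catmull-Rom) spline, one simple member of the spline family the paper relies on; a coarsely sampled sine stands in for the photoplethysmogram, and the test checks that the interpolated peak location is far closer to the true one than the nearest coarse sample:

```python
import numpy as np

def catmull_rom(x, y, xq):
    """Cubic Hermite spline interpolation at query points xq.

    x must be uniformly spaced and sorted; slopes come from central
    differences (one-sided at the ends), i.e. a Catmull-Rom-type spline.
    """
    x, y, xq = np.asarray(x, float), np.asarray(y, float), np.asarray(xq, float)
    h = x[1] - x[0]
    m = np.gradient(y, h)                       # endpoint slopes per knot
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
    t = (xq - x[i]) / h                         # local parameter in [0, 1]
    h00 = 2 * t**3 - 3 * t**2 + 1               # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
```

Resampling the interpolant on a dense grid and taking its argmax recovers the peak position between the coarse samples, which is exactly the feature-location restoration the study measures.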

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 345
3247 Experimental and Numerical Studies on Earthquake Shear Rupture Generation

Authors: Louis N. Y. Wong

Abstract:

En-echelon fractures are commonly found in rocks, where they appear as a special set of regularly oriented and spaced fractures. Using both experimental and numerical approaches, this study investigates the interaction among them and how this interaction finally contributes to the development of a shear rupture (fault), especially in brittle natural rocks. Firstly, uniaxial compression tests are conducted on marble specimens containing en-echelon flaws, which are cut into the rock specimens using a water abrasive jet. The fracturing processes of these specimens leading to the formation of a fault are observed in detail with a high-speed camera. The influence of the flaw geometry on the production of tensile cracks and shear cracks, which in turn dictates the coalescence pattern of the entire set of en-echelon flaws, is comprehensively studied. Secondly, a numerical study based on a recently developed contact model, the flat-joint contact model of the discrete element method (DEM), is carried out to model the present laboratory experiments. The numerical results provide a quantitative assessment of the interaction of en-echelon flaws. In particular, the evolution of the stress field, as well as the characteristics of new crack initiation, propagation and coalescence associated with the generation of an eventual shear rupture, is studied in detail. The numerical results are found to agree well with the experimental results in both microscopic and macroscopic observations.

Keywords: discrete element method, en-echelon flaws, fault, marble

Procedia PDF Downloads 239
3246 The Physicochemical Properties of Two Rivers in Eastern Cape South Africa as Relates to Vibrio Spp Density

Authors: Oluwatayo Abioye, Anthony Okoh

Abstract:

In the past few decades, humans have experienced outbreaks of infections caused by pathogenic Vibrio spp, which are commonly found in aquatic milieus. Aside from the well-known Vibrio cholerae, the discovery of other pathogens in this genus has been on the increase. While the dynamics of the occurrence and distribution of Vibrio spp have been linked to some physicochemical parameters in salt water, data in relation to fresh water are limited. Hence, two rivers of importance in the Eastern Cape, South Africa were selected for this study. In all, eleven sampling sites were systematically identified, and the relevant physicochemical parameters, as well as the Vibrio spp density, were determined over a period of six months using standard instruments and methods. The results were statistically analysed to determine the key physicochemical parameters that govern the density of Vibrio spp in the selected rivers. Results: The density of Vibrio spp at all the sampling points ranged between < 1 CFU/mL and 174 x 10⁻² CFU/mL. The physicochemical parameters at some of the sampling points were above the recommended standards. The regression analysis showed that the Vibrio density in the selected rivers depends on a complex relationship between various physicochemical parameters. Conclusion: This study suggests that Vibrio spp density in fresh water does not depend only on temperature and salinity, as suggested by earlier studies on salt water, but rather on a complex relationship between several physicochemical parameters.

Keywords: vibrio density, physicochemical properties, pathogen, aquatic milieu

Procedia PDF Downloads 224
3245 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure, as well as the choice of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure with a variety of different topology, material and dimensional alternatives is generated. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function of the material and labor costs of a structure is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), and in this way gradually refines the solution space towards the optimal solution.
The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology and new materials and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as the mass optimization of steel buildings, the cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insight into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
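The discrete side of the model, choosing a standard section, can be caricatured by plain enumeration over a catalog; the section names, masses and resistances below are hypothetical, and the real method handles this choice with binary variables inside the OA/ER MILP master problem rather than by enumeration:

```python
def select_section(catalog, demand):
    """Toy discrete sizing: lightest standard section whose resistance meets the demand.

    catalog: {name: (mass, resistance)}; returns the chosen name, or None if
    no section is feasible. This mirrors what the binary variables encode.
    """
    feasible = [(mass, name) for name, (mass, resistance) in catalog.items()
                if resistance >= demand]
    return min(feasible)[1] if feasible else None

# Hypothetical catalog: name -> (mass per metre, design moment resistance).
catalog = {"S-200": (22.4, 58.0), "S-240": (30.7, 98.0), "S-300": (42.2, 161.0)}
```

In the full MINLP, the same trade-off (lighter section versus sufficient resistance) is resolved jointly with the continuous dimensioning variables and topology decisions.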

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 22
3244 Assessment of Air Quality Around Western Refinery in Libya: Mobile Monitoring

Authors: A. Elmethnani, A. Jroud

Abstract:

This coastal crude oil refinery is situated north of a big city west of Tripoli; the city could thus be highly prone to downwind refinery emissions, as a NNE wind direction prevails through most seasons of the year. Furthermore, due to the absence of an air quality monitoring network and the scarce emission data available for the neighboring community, nearby residents have serious worries about the impact of the oil refining operations on local air quality. In response to these concerns, a short-term survey was performed over three consecutive days, in which a semi-continuous mobile monitoring approach was developed effectively; the monitoring station (Compact AQM 65, Aeroqual) was mounted on a vehicle so that it could move quickly between locations, and measurements consisting of 10-minute averages of 60-second readings were taken at each fixed sampling point. The downwind ambient concentrations of CO, H₂S, NOₓ, NO₂, SO₂, PM₁, PM₂.₅, PM₁₀ and TSP were measured at carefully chosen sampling locations, ranging from 200 m near the fence line, passing through the city center, up to 4.7 km east, to attain the best spatial coverage. The results showed worrying levels of PM₂.₅, PM₁₀ and TSP at one sampling location in the city center, southeast of the refinery site, with average means of 16.395 μg/m³, 33.021 μg/m³ and 42.426 μg/m³ respectively, which could be attributed to road traffic. No significant concentrations were detected for the other pollutants of interest over the study area, as the levels observed for CO, SO₂, H₂S, NOₓ and NO₂ did not exceed 1.707 ppm, 0.021 ppm, 0.134 ppm, 0.4582 ppm and 0.0018 ppm respectively, at the same sampling locations. Although it was not possible to compare the results with the Libyan air quality standards due to the difference in averaging time periods, the technique was adequate as a baseline air quality screening procedure. Overall, the findings primarily suggest modeling the dispersion of the refinery emissions to assess the likely impact and spatial-temporal distribution of air pollutants.

Keywords: air quality, mobile monitoring, oil refinery

Procedia PDF Downloads 77
3243 Sampling Two-Channel Nonseparable Wavelets and Its Applications in Multispectral Image Fusion

Authors: Bin Liu, Weijie Liu, Bin Sun, Yihui Luo

Abstract:

To address the lower spatial resolution and block artifacts that fusion methods based on the separable wavelet transform produce in the resulting fused image, a new sampling mode based on the multiresolution analysis of the two-channel nonseparable wavelet transform, whose dilation matrix is [1,1;1,-1], is presented, and a multispectral image fusion method based on this sampling mode is proposed. Filter banks related to this kind of wavelet are constructed, and multiresolution decompositions of the intensity of the MS image and of the panchromatic image are performed in the sampled mode using the constructed filter bank. The low- and high-frequency coefficients are fused by different fusion rules. The experimental results show that this method has a good visual effect. The fusion performance outperforms the IHS fusion method, as well as the fusion methods based on DWT, IHS-DWT, IHS-Contourlet transform, and IHS-Curvelet transform, in preserving both spectral quality and high-spatial-resolution information. Furthermore, compared with the fusion method based on the nonsubsampled two-channel nonseparable wavelet, the proposed method has higher spatial resolution and good global spectral information.
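The dilation matrix [1,1;1,-1] has determinant −2, so each decomposition level splits the image grid into two quincunx cosets, rather than the four subbands of a separable transform. As a minimal illustration of this two-channel sampling geometry (not the authors' filter bank; names and the toy image are ours):

```python
import numpy as np

def quincunx_split(img):
    """Split an image into the two cosets of the quincunx lattice
    generated by the dilation matrix M = [[1, 1], [1, -1]] (|det M| = 2)."""
    i, j = np.indices(img.shape)
    even = (i + j) % 2 == 0  # coset 0: even-sum sites, i.e. the sublattice M·Z²
    return img[even], img[~even]

img = np.arange(16, dtype=float).reshape(4, 4)
c0, c1 = quincunx_split(img)
# each channel keeps exactly half the samples, so the transform is critically sampled
assert c0.size + c1.size == img.size
```

Filtering would be applied before this coset split in an actual two-channel analysis bank; the split alone shows why the sampling rate halves per channel.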

Keywords: image fusion, two-channel sampled nonseparable wavelets, multispectral image, panchromatic image

Procedia PDF Downloads 417
3242 Diversity and Distribution of Benthic Invertebrates in the West Port, Malaysia

Authors: Seyedeh Belin Tavakoly Sany, Rosli Hashim, Majid Rezayi, Aishah Salleh, Omid Safari

Abstract:

The purpose of this paper is to describe the main characteristics of macroinvertebrate species in response to environmental forcing factors. Overall, 23 species of Mollusca, 4 species of Arthropoda, 3 species of Echinodermata, and 3 species of Annelida were identified at the 9 sampling stations during four sampling periods. Mollusca constituted 36.4% of the total abundance, followed by Annelida (34.3%), Arthropoda (27.01%), and Echinodermata (2.4%). The results of the Kruskal-Wallis test indicated a significant difference (p < 0.05) in the abundance, richness, and diversity of the macrobenthic community between stations. The correlation analysis revealed that anthropogenic pollution and natural variability caused these variations at spatial scales.
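The Kruskal-Wallis test used here compares abundance ranks across stations without assuming normality. A minimal sketch of the H statistic (ties ignored; the counts below are hypothetical, not the West Port data):

```python
import numpy as np

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction), from ranks of pooled data."""
    data = np.concatenate(groups)
    n = data.size
    ranks = np.empty(n)
    ranks[np.argsort(data)] = np.arange(1, n + 1)  # ranks 1..N of pooled sample
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n + 1) / 2) ** 2  # between-group rank spread
        start += len(g)
    return 12.0 / (n * (n + 1)) * h

# hypothetical abundance counts at three stations (one value per sampling period)
h = kruskal_h(np.array([12.0, 9, 15, 11]),
              np.array([30.0, 28, 35, 33]),
              np.array([4.0, 6, 5, 3]))
# compare h against the chi-square critical value chi2(k-1 = 2, 0.05) = 5.99
```

In practice `scipy.stats.kruskal` would handle ties and report the p-value directly; the hand-rolled version only shows where the statistic comes from.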

Keywords: benthic invertebrates, diversity, abundance, West Port

Procedia PDF Downloads 418
3241 Groundwater Quality Monitoring in the Shoush Suburbs, Khouzestan Province, Iran

Authors: Mohammad Tahsin Karimi Nezhad, Zaynab Shadbahr, Ali Gholami

Abstract:

In recent years, many attempts have been made worldwide to assess groundwater contamination by nitrates. Assessment of the spatial and temporal variations of the physico-chemical parameters of water is necessary to manage water quality. The objectives of the study were to evaluate the spatial variability and temporal changes of hydrochemical factors through water sampling from 24 wells in the Shoush City suburb. The analysis was conducted for the whole area and for different land use and geological classes. In addition, the variability of nitrate concentration with descriptive parameters such as sampling depth, dissolved oxygen, and on-ground nitrogen loadings was also investigated. The results showed that nitrate concentrations did not exceed the standard limit (50 mg/l). EC of the water samples ranged from 900 to 1200 µS/cm, TDS from 775 to 830 mg/l, and pH from 5.6 to 9.

Keywords: groundwater, GIS, water quality, Iran

Procedia PDF Downloads 411
3240 Effect of Packing Ratio on Fire Spread across Discrete Fuel Beds: An Experimental Analysis

Authors: Qianqian He, Naian Liu, Xiaodong Xie, Linhe Zhang, Yang Zhang, Weidong Yan

Abstract:

In the wild, the vegetation layer, with its exceptionally complex fuel composition and heterogeneous spatial distribution, strongly affects the rate of fire spread (ROS) and fire intensity. Clarifying the influence of fuel bed structure on fire spread behavior is of great significance to wildland fire management and prediction. The packing ratio is one of the key physical parameters describing the properties of the fuel bed. There is a threshold value of the packing ratio for ROS, but little is known about the controlling mechanism. In this study, to address this deficiency, a series of fire spread experiments was performed across a discrete fuel bed composed of regularly arranged laser-cut cardboard pieces, with constant wind speed and different packing ratios (0.0125-0.0375). The experiments explore the relative importance of internal and surface heat transfer as the packing ratio varies. The dependence of the measured ROS on the packing ratio was largely consistent with previous studies. The radiative and total heat flux data show that internal heat transfer and surface heat transfer are both enhanced with increasing packing ratio (referred to as ‘Stage 1’). The trend agrees well with the variation of the flame length: the results extracted from the video show that the flame length markedly increases with increasing packing ratio in Stage 1. Combustion intensity is suggested to be increased, which, in turn, enhances the heat radiation. The heat flux data show that surface heat transfer appears to be more important than internal heat transfer (fuel preheating inside the fuel bed) in Stage 1. On the contrary, internal heat transfer dominates the fuel preheating mechanism when the packing ratio increases further (referred to as ‘Stage 2’), because the surface heat flux remains almost stable with the packing ratio in Stage 2.
As for heat convection, the flow velocity was measured using Pitot tubes both inside and on the upper surface of the fuel bed during fire spread. Based on the gas velocity distribution ahead of the flame front, it is found that the airflow inside the fuel bed is restricted in Stage 2, which in theory can reduce the internal heat convection. However, the analysis indicates that it is not the influence of the internal flow on convection and combustion, but rather the decreased internal radiation per unit fuel, that is responsible for the decrease of ROS.

Keywords: discrete fuel bed, fire spread, packing ratio, wildfire

Procedia PDF Downloads 121
3239 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might arise under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of the model, namely the explicit expression of the process, its trend functions, and its distribution, by transforming the diffusion process into a Wiener process as shown by the Ricciardi theorem. Then, we develop the statistical inference of the model using the maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance in the application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
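As a hedged illustration of the kind of process described (not the authors' exact model or parameters), a diffusion whose drift is proportional to the two-parameter Weibull density can be sampled at discrete times with an Euler-Maruyama scheme; all names and parameter values below are arbitrary choices for the sketch:

```python
import math
import random

def weibull_pdf(t, k, lam):
    """Two-parameter Weibull density f(t) = (k/lam)*(t/lam)**(k-1)*exp(-(t/lam)**k)."""
    return (k / lam) * (t / lam) ** (k - 1) * math.exp(-((t / lam) ** k))

def simulate(x0=1.0, k=1.5, lam=2.0, sigma=0.1, dt=1e-3, T=5.0, seed=42):
    """Euler-Maruyama path of dX = f(t)*X dt + sigma*X dW: a lognormal-type
    diffusion whose trend follows the Weibull density (illustrative sketch)."""
    random.seed(seed)
    x, t, path = x0, 0.0, [x0]
    while t < T:
        dw = random.gauss(0.0, math.sqrt(dt))       # Wiener increment
        x += weibull_pdf(t + dt, k, lam) * x * dt + sigma * x * dw
        t += dt
        path.append(x)
    return path
```

Maximum likelihood estimation, as in the paper, would then be run on such discretely sampled paths; this sketch only provides the simulated data.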

Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, bi-parameter Weibull density function

Procedia PDF Downloads 285
3238 Comprehensive Review of Adversarial Machine Learning in PDF Malware

Authors: Preston Nabors, Nasseh Tabrizi

Abstract:

Portable Document Format (PDF) files have gained significant popularity for sharing and distributing documents due to their universal compatibility. However, the widespread use of PDF files has made them attractive targets for cybercriminals, who exploit vulnerabilities to deliver malware and compromise the security of end-user systems. This paper reviews notable contributions in PDF malware detection, including static, dynamic, signature-based, and hybrid analysis. It presents a comprehensive examination of PDF malware detection techniques, focusing on the emerging threat of adversarial sampling and the need for robust defense mechanisms, and highlights the vulnerability of machine learning classifiers to evasion attacks. It explores adversarial sampling techniques in PDF malware detection used to produce mimicry and reverse-mimicry evasion attacks, which aim to bypass detection systems. Opportunities for future research are identified, including more accessible methods, applying adversarial sampling techniques to malicious payloads, evaluating other models, evaluating the importance of individual features to malware, implementing adversarial defense techniques, and conducting thorough evaluations across varied scenarios. By addressing these opportunities, researchers can enhance PDF malware detection and develop more resilient defense mechanisms against adversarial attacks.

Keywords: adversarial attacks, adversarial defense, adversarial machine learning, intrusion detection, PDF malware, malware detection, malware detection evasion

Procedia PDF Downloads 21
3237 Studies on the Physico-Chemical Parameters of Jebba Lake, Niger State, Nigeria

Authors: M. B. Mshelia, J. K. Balogun, J. Auta, N. O. Bankole

Abstract:

Studies on some aspects of the physico-chemical parameters of Jebba Lake, Niger State, Nigeria, were carried out from January to December 2011. The aim was to investigate some of the physico-chemical parameters relevant to the life and health of fish in the water body. Six (6) sampling sites were selected at random, covering the northern (Faku and Awuru), middle (Old Gbajibo and Shankade), and southern (New Gbajibo and Jebba dam) zones of Jebba Lake. Sampling was carried out over a period of 12 months. The physico-chemical parameters considered were water temperature, pH, dissolved oxygen, electrical conductivity, water transparency, phosphate, and nitrate, all measured using standard methods. The results showed that mean water temperature ranged from 26.06 ± 0.15 °C at the Jebba dam site to 27.34 ± 0.12 °C at the Shankade sampling site. Depth varied from 8.08 m to 31.64 m, and water current was between 20.10.62 cm/sec and 26.46 cm/sec. Secchi disc transparency ranged from 0.46 ± 0.01 m in New Gbajibo to a highest mean value of 0.53 ± 0.04 m at Jebba dam. pH varied from 6.49 ± 0.01 to 7.59. Dissolved oxygen varied between 5.35 ± 0.03 mg/l in New Gbajibo and 6.75 ± 0.03 mg/l in Faku. The mean conductivity value was highest in Faku and Jebba, at 128.8 ± 0.32 and 128.8 ± 0.42 µmhos/cm respectively. Alkalinity ranged from 33.30 ± 0.32 to 43.00 ± 0.02 mg/l. The mean values of phosphate-phosphorus (PO₄-P) varied between 0.18 ± 0.00 mg/l in Faku and 0.47 ± 0.10 mg/l in Old Gbajibo. The highest mean value of total dissolved solids was 57.88 ± 0.28 mg/l in Shankade, while the lowest mean value, 39.17 ± 0.42 mg/l, was recorded in Faku. Free CO₂ ranged from 1.75 mg/l to 2.94 mg/l, biochemical oxygen demand (BOD) was between 4.25 mg/l and 5.41 mg/l, and nitrate-nitrogen ranged from 2.37 ± 0.08 to 6.40 ± 0.50 mg/l. There were significant differences (P < 0.05) in these parameters between stations. Generally, the physico-chemical characteristics of Jebba Lake were within the productive values for aquatic systems and strongly indicate that the lake is unpolluted.

Keywords: Jebba Lake, water quality, secchi disc, DO meter, sampling sites, physico-chemical parameters

Procedia PDF Downloads 411
3236 An Application-Driven Procedure for Optimal Signal Digitization of Automotive-Grade Ultrasonic Sensors

Authors: Mohamed Shawki Elamir, Heinrich Gotzig, Raoul Zoellner, Patrick Maeder

Abstract:

In this work, a methodology is presented for identifying the optimal digitization parameters for the analog signal of ultrasonic sensors, namely the resolution of the analog-to-digital conversion and the sampling rate. This is accomplished by deriving characteristic curves based on the Fano inequality and calculating the mutual information content over a given dataset. The mutual information is calculated between the examples in the dataset and the corresponding variation in the feature that needs to be estimated. The optimal parameters are identified in a manner that ensures optimal estimation performance while avoiding the inefficiency of unnecessarily powerful analog-to-digital converters.
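The idea of scoring digitization parameters by mutual information can be sketched as follows: a plug-in histogram estimate of I(X;Y) is computed between an idealised n-bit quantisation of a noisy signal and the feature to be estimated. The sensor model, noise level, and bit depths below are illustrative assumptions, not the paper's dataset:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (2-D histogram) estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
feature = rng.uniform(0.0, 1.0, 20000)            # quantity to be estimated
analog = feature + rng.normal(0.0, 0.05, 20000)   # noisy analog sensor signal
mi = {}
for n_bits in (2, 4, 8):
    q = np.floor(analog * 2**n_bits) / 2**n_bits  # idealised n-bit ADC
    mi[n_bits] = mutual_information(q, feature)
# expect mi to rise with resolution and then plateau once the noise floor dominates
```

Sweeping resolution (and, analogously, sampling rate) and locating the plateau is the intuition behind picking the cheapest converter that still preserves the estimable information.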

Keywords: analog to digital conversion, digitization, sampling rate, ultrasonic

Procedia PDF Downloads 184
3235 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications such as landslide prediction. Because RA works only with discrete data, such as soil classification or bedrock type, continuous data, such as porosity, must be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and its probability of landslide occurrence. In this way, every informative combination of states can be examined.
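The binning step for continuous layers can be sketched as follows (hypothetical porosity values; equal-frequency quartile edges are one common choice, not necessarily the authors'):

```python
import numpy as np

# A continuous layer (e.g. porosity) must be binned into discrete states
# before RA can use it; categorical layers keep their native encoding.
porosity = np.array([0.02, 0.11, 0.27, 0.33, 0.08, 0.41, 0.19])

edges = np.quantile(porosity, [0.25, 0.5, 0.75])  # equal-frequency bin edges
binned = np.digitize(porosity, edges)             # discrete states 0..3
```

Each cell of the raster then carries one of four states, and the binned layer enters the RA model exactly like a native classification layer such as bedrock type.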

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 46
3234 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation

Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen

Abstract:

Down-the-hole (DTH) percussive drilling is an excavation method widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the percussive drilling literature, the existence of an optimum drilling state (the Sweet Spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the Sweet Spot, but no universal explanation or consensus exists yet. The experimental investigation in this work began with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated; an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct way to identify when the hammer does not fire and drilling occurs without percussion, the bit propagating the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with a custom-built apparatus dubbed Woody, which allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston. 
After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are reset to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 mm outside diameter and weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the Sweet Spot is not caused by intrinsic properties of the bit-rock interface.

Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling

Procedia PDF Downloads 71
3233 A Two-Pronged Truncated Deferred Sampling Plan for Log-Logistic Distribution

Authors: Braimah Joseph Odunayo, Jiju Gillariose

Abstract:

This paper develops a sampling plan that uses information from preceding and succeeding lots for lot disposition, under the assumption that the lifetime of the product follows a log-logistic distribution. A Two-Pronged Truncated Deferred Sampling Plan (TTDSP) for the log-logistic distribution is proposed for tests truncated at a precise time. The best possible sample sizes are obtained under given values of the Maximum Allowable Percent Defective (MAPD), Test Suspension Ratio (TSR), and acceptance number (c). A formula for calculating the operating characteristics of the proposed plan is also developed, and the operating characteristics and mean-ratio values are used to measure the performance of the plan. The findings of the study show that the log-logistic distribution has a decreasing failure rate; furthermore, as the mean-life ratio increases, the failure rate reduces, and the sample size increases with the acceptance number, test suspension ratio, and maximum allowable percent defective. The study concludes that the minimum sample sizes were smaller, which makes the plan more economical to adopt when production cost and time are high and the testing is destructive.
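For a single truncated life test (leaving aside the deferred, two-pronged rules of the full TTDSP, which this sketch does not reproduce), the operating characteristic follows the standard binomial model, with the failure probability given by the log-logistic CDF at the truncation time. Parameter values below are illustrative:

```python
from math import comb

def loglogistic_cdf(t, scale, shape):
    """Log-logistic CDF: F(t) = 1 / (1 + (t/scale)**(-shape))."""
    return 1.0 / (1.0 + (t / scale) ** (-shape))

def oc(n, c, t, scale, shape):
    """Probability of lot acceptance: at most c failures among n items
    in a life test truncated at time t (binomial acceptance model)."""
    p = loglogistic_cdf(t, scale, shape)  # an item fails by time t with prob p
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

# a lot with longer mean life (larger scale) is accepted more often
good = oc(20, 2, 1.0, scale=2.0, shape=2.0)
poor = oc(20, 2, 1.0, scale=1.0, shape=2.0)
```

Plotting `oc` against the mean-life ratio traces the OC curve from which producer's and consumer's risks are read off.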

Keywords: consumer's risk, mean life, minimum sample size, operating characteristics, producer's risk

Procedia PDF Downloads 110