Search results for: experimental modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8672

3482 A Genetic Algorithm Based Ensemble Method with Pairwise Consensus Score on Malware Cacophonous Labels

Authors: Shih-Yu Wang, Shun-Wen Hsiao

Abstract:

In the field of cybersecurity, many vendors classify malware samples and report the results as labels that encode important information; these are known as AV labels. Many researchers rely on AV labels for their work. Unfortunately, AV labels are cluttered: they have no fixed format or naming rules, because the naming reflects each classifier's own viewpoint. One way to address the problem is to take a majority vote, but voting can introduce bias. We therefore propose a novel ensemble approach that does not rely on the cacophonous naming results but instead uses group identification to aggregate every vendor's opinion. To achieve this, we develop a scoring system called the Pairwise Consensus Score (PCS) to calculate the similarity of classification results. The overall architecture combines a Genetic Algorithm with the PCS to find the maximum consensus within the group. Experimental results reveal that our method outperforms majority voting by 10% in terms of the score.
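
The abstract does not spell out how the Pairwise Consensus Score is computed; the sketch below is only one plausible Python reading, assuming PCS measures how often a candidate grouping agrees with each vendor's grouping on pairs of samples, wrapped in a minimal genetic algorithm. All names and GA settings are illustrative, not the authors' implementation.

```python
from itertools import combinations
import random

def pcs(candidate, vendor_labelings):
    """Pairwise Consensus Score: fraction of sample pairs on which the candidate
    grouping agrees with a vendor's grouping, averaged over all vendors.
    `candidate` and each vendor labeling map sample id -> group id."""
    samples = list(candidate)
    pairs = list(combinations(samples, 2))
    score = 0.0
    for vendor in vendor_labelings:
        agree = sum(
            (candidate[a] == candidate[b]) == (vendor[a] == vendor[b])
            for a, b in pairs
        )
        score += agree / len(pairs)
    return score / len(vendor_labelings)

def evolve(vendor_labelings, n_groups=10, pop=50, gens=100):
    """Minimal GA: candidates are sample->group assignments; fitness is PCS."""
    samples = list(vendor_labelings[0])
    def rand_cand():
        return {s: random.randrange(n_groups) for s in samples}
    population = [rand_cand() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: pcs(c, vendor_labelings), reverse=True)
        parents = population[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            child = {s: random.choice((a[s], b[s])) for s in samples}  # crossover
            if random.random() < 0.1:                                   # mutation
                child[random.choice(samples)] = random.randrange(n_groups)
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: pcs(c, vendor_labelings))
```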

Keywords: genetic algorithm, ensemble learning, malware family, malware labeling, AV labels

Procedia PDF Downloads 77
3481 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level

Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni

Abstract:

In various sports and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-documented explanation of this instability is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can greatly help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems for the management of knee joint disorders. This damage mechanism, specifically due to excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) up (macro level) in the hierarchies of the soft tissue may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features and tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure. The model was then implemented into a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R2) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain compared with the reported experimental data was obtained when the cross-link density between the tropocollagen molecules of the fibril equalled 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation in the patellofemoral ligaments was located at the femoral insertions, while damage in the patellar tendon occurred in the middle of the structure. These predictions show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also captured by the model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful towards a realistic simulation of a damaging process or repair attempt compared with certain published studies.

Keywords: tropocollagen, multiscale model, fibrils, knee ligaments

Procedia PDF Downloads 122
3480 High-Throughput, Purification-Free, Multiplexed Profiling of Circulating miRNA for Discovery, Validation, and Diagnostics

Authors: J. Hidalgo de Quintana, I. Stoner, M. Tackett, G. Doran, C. Rafferty, A. Windemuth, J. Tytell, D. Pregibon

Abstract:

We have developed the Multiplexed Circulating microRNA assay, which allows the detection of up to 68 microRNA targets per sample. The assay combines particle-based multiplexing, using patented Firefly hydrogel particles, with single-step RT-PCR signal amplification. Thus, the Circulating microRNA assay leverages PCR sensitivity while eliminating the need for separate reverse transcription reactions and mitigating amplification biases introduced by target-specific qPCR. Furthermore, the ability to multiplex targets in each well eliminates the need to split valuable samples into multiple reactions. Results from the Circulating microRNA assay are interpreted using the Firefly Analysis Workbench, which allows visualization, normalization, and export of experimental data. To aid the discovery and validation of biomarkers, we have generated fixed panels for Oncology, Cardiology, Neurology, Immunology, and Liver Toxicology. Here we present data from several studies investigating circulating and tumor microRNA, showcasing the ability of the technology to sensitively and specifically detect microRNA biomarker signatures from fluid specimens.

Keywords: biomarkers, biofluids, miRNA, photolithography, flow cytometry

Procedia PDF Downloads 351
3479 Hydrogeophysical Investigations and Mapping of Ingress Channels along the Blesbokspruit Stream in the East Rand Basin of the Witwatersrand, South Africa

Authors: Melvin Sethobya, Sithule Xanga, Sechaba Lenong, Lunga Nolakana, Gbenga Adesola

Abstract:

Mining has been the cornerstone of the South African economy for the last century. Most of the gold mining in South Africa was conducted within the Witwatersrand basin, which contributed to the rapid growth of the city of Johannesburg and propelled it to become the business and wealth capital of the country. But with the gradual depletion of resources, the stoppage of underground water extraction from the mines, and other factors relating to the survival of the mining operations over a lengthy period, most of the mines were abandoned and left to pollute the local waterways and groundwater with toxins and heavy metal residue, and increased acid mine drainage ensued. The Department of Mineral Resources and Energy commissioned a project whose aim is to monitor, maintain, and mitigate the adverse environmental impacts of polluted mine water flowing into local streams and affecting local ecosystems and livelihoods downstream. As part of the mitigation efforts, the diagnosis and monitoring of groundwater- or surface-water-polluted sites has become important. Geophysical surveys, in particular resistivity and magnetics surveys, were selected as some of the most suitable techniques for investigating local ingress points along one of the major streams cutting through the Witwatersrand basin, namely the Blesbokspruit, which is found in the eastern part of the basin. The aim of the surveys was to provide information that could assist in determining possible water loss/ingress from the Blesbokspruit stream. Modelling of the geophysical survey results offered an in-depth insight into the interaction and pathways of polluted water through the mapping of possible ingress channels near the Blesbokspruit. The resistivity-depth profile of the surveyed site exhibits a three-layered model with a low-resistivity overburden (10 to 200 Ω.m), underlain by a moderate-resistivity weathered layer (>300 Ω.m), which sits on a more resistive crystalline bedrock (>500 Ω.m). Two locations of potential ingress channels were mapped across the two traverses at the site. The magnetic survey conducted at the site mapped a major NE-SW trending regional lineament with a strong magnetic signature, which was modelled to a depth beyond 100 m, with the potential to act as a conduit for the dispersion of stream water away from the stream, as it shares a similar orientation with the potential ingress channels mapped using the resistivity method.

Keywords: electrical resistivity, magnetics survey, Blesbokspruit, ingress

Procedia PDF Downloads 54
3478 Improvement of Oran Sebkha Soil by Dredged Sediments from Chorfa Dam in Algeria

Authors: Z. Aloui-Labiod, H. Trouzine, M. S. Ghembaza

Abstract:

The geotechnical properties of dredged sediment from the Chorfa dam in Algeria and its mixtures with bentonite (5%, 10%, 15%, 20%, and 25%) were investigated through a series of laboratory tests in order to assess the possibility of their use as a barrier against the spread of the Sebkha of Oran in the northwest of Algeria. Grain size and Atterberg limit tests, chemical and mineral analyses, and compaction, vertical swelling, and horizontal and vertical permeability tests were performed on the soils and their mixtures using tap water and the salty Sebkha water. The results indicate that the bentonite specimens remolded and inundated with salty Sebkha water have less swell potential than those prepared with tap water. The addition of bentonite to the Chorfa sediment increases the density, liquid limit, specific surface, and swell potential of the mixtures. Compaction tests show a decrease in the optimum moisture content and an increase in maximum dry density as the bentonite content increases. The horizontal and vertical permeabilities decrease correspondingly with the addition of bentonite.

Keywords: dredged sediment, bentonite, salty water, barrier

Procedia PDF Downloads 415
3477 Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates

Authors: Jennifer Buz, Alvin Spivey

Abstract:

The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut or purchase barren land for reforestation. Therefore, by actively preventing the destruction/decay of plant matter or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently ensure that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing ultimately is tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which we know the species. To accomplish this, we collaborated with researchers at the Teakettle Experimental Forest. Our remote sensing data come from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations of atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was built using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications feed directly into our carbon mass model.
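
As an illustration of the classification step only (the abstract does not name a specific classifier), the following Python sketch trains a random forest on per-pixel reflectance spectra; the arrays here are random placeholders standing in for spectra extracted at stem-map locations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: per-pixel reflectance spectra (n_pixels x 472 bands) extracted from the
# geo-referenced mosaic at stem-map locations; y: species label per pixel.
# Both are simulated placeholders in this sketch.
rng = np.random.default_rng(0)
X = rng.random((2000, 472))
y = rng.integers(0, 5, size=2000)  # e.g., 5 tree species

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```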

Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation

Procedia PDF Downloads 110
3476 Ground State Phases in Two-Mode Quantum Rabi Models

Authors: Suren Chilingaryan

Abstract:

We study two models describing a single two-level system coupled to two boson field modes in either a parallel or orthogonal setup. Both models may be feasible for experimental realization through Raman adiabatic driving in cavity QED. We study their ground state configurations; that is, we find the quantum precursors of the corresponding semi-classical phase transitions. We find that the ground state configurations of both models present the same critical coupling as the quantum Rabi model. For the first model, around this critical coupling the ground state goes from the so-called normal configuration with no excitations (the qubit in the ground state and the fields in the quantum vacuum state) to a ground state with excitations, in which the qubit is in a superposition of ground and excited states while the fields are no longer in the vacuum. The second model shows a more complex ground-state configuration landscape, where we find the normal configuration mentioned above, two single-mode configurations, in which just one of the fields and the qubit are excited, and a dual-mode configuration, in which both fields and the qubit are excited.
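
For orientation, the Hamiltonian below is a commonly used textbook form of the two-mode quantum Rabi model with parallel qubit-field couplings (ħ = 1), with the orthogonal variant obtained by rotating one coupling; this is an assumed standard form and is not quoted from the paper.

```latex
H_{\parallel} = \frac{\omega_{0}}{2}\,\sigma_{z}
  + \omega_{1}\, a^{\dagger}a + \omega_{2}\, b^{\dagger}b
  + g_{1}\,\sigma_{x}\left(a + a^{\dagger}\right)
  + g_{2}\,\sigma_{x}\left(b + b^{\dagger}\right),
\qquad
H_{\perp}: \; g_{2}\,\sigma_{x}\left(b + b^{\dagger}\right) \rightarrow g_{2}\,\sigma_{y}\left(b + b^{\dagger}\right).
```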

Keywords: quantum optics, quantum phase transition, cavity QED, circuit QED

Procedia PDF Downloads 358
3475 An Image Segmentation Algorithm for Gradient Target Based on Mean-Shift and Dictionary Learning

Authors: Yanwen Li, Shuguo Xie

Abstract:

In electromagnetic imaging, because of the diffraction-limited system, the pixel values change slowly near the edges of the image targets, and they also change with location within the same target. Using traditional digital image segmentation methods to segment electromagnetic gradient images can therefore produce many errors because of this change in pixel values. To address this issue, this paper proposes a novel image segmentation and extraction algorithm based on Mean-Shift and dictionary learning. Firstly, the preliminary segmentation results from an adaptive-bandwidth Mean-Shift algorithm are expanded, merged and extracted. Then the overlap rate of the extracted image blocks is detected before determining a segmentation region with a single complete target. Finally, the gradient edge of the extracted targets is recovered and reconstructed using a dictionary-learning algorithm, and the final segmentation results, which are very close to the gradient target in the original image, are obtained. Both the experimental results and the simulated results show that the segmentation results are very accurate. The Dice coefficients are improved by 70% to 80% compared with the Mean-Shift-only method.
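
A minimal Python sketch of the preliminary Mean-Shift step is given below, using scikit-learn's bandwidth estimation as a stand-in for the adaptive bandwidth; the block merging and dictionary-learning edge reconstruction described above are not reproduced here.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def meanshift_presegment(image):
    """Preliminary segmentation: cluster pixels by (row, col, intensity) features
    with an adaptively estimated bandwidth. Block merging/extraction and the
    dictionary-learning edge reconstruction are handled elsewhere."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    features = np.column_stack([rows.ravel(), cols.ravel(), image.ravel()])
    bandwidth = estimate_bandwidth(features, quantile=0.1, n_samples=500)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
    return labels.reshape(h, w)

# Example: a smooth synthetic "gradient target" whose pixel values change slowly
img = np.zeros((40, 40))
img[10:30, 10:30] = np.linspace(0.3, 1.0, 20)
print(np.unique(meanshift_presegment(img)))
```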

Keywords: gradient image, segmentation and extraction, mean-shift algorithm, dictionary learning

Procedia PDF Downloads 256
3474 Kinetic Modeling Study and Scale-Up of Biogas Generation Using Garden Grass and Cattle Dung as Feedstock

Authors: Tumisang Seodigeng, Hilary Rutto

Abstract:

In this study, we investigate the use of a laboratory batch digester to derive kinetic parameters for the anaerobic digestion of garden grass and cattle dung. Laboratory experimental data from a 5-liter batch digester operating at a mesophilic temperature of 32 °C are used to derive parameters for a Michaelis-Menten kinetic model. The fitted kinetics are further used to predict the scale-up parameters of a batch digester using the DynoChem modelling and scale-up software. The scale-up model results are compared with performance data from 20-liter, 50-liter, and 200-liter batch digesters. The Michaelis-Menten kinetic model proves to be a very good and easy-to-use model for kinetic parameter fitting in DynoChem and can accurately predict the scale-up performance of the 20-liter and 50-liter batch reactors based on parameters fitted on a 5-liter batch reactor.
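
Assuming a standard Michaelis-Menten rate form v = Vmax·S/(Km + S) (the abstract does not state the exact expression used in DynoChem), a minimal parameter-fitting sketch with SciPy might look as follows; the data values are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    """Specific biogas production rate as a function of substrate concentration."""
    return Vmax * S / (Km + S)

# Illustrative batch data: substrate concentration (gVS/L) vs observed rate (mL/gVS/day)
S_obs = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
v_obs = np.array([1.1, 1.9, 2.8, 3.4, 3.8])

(Vmax, Km), _ = curve_fit(michaelis_menten, S_obs, v_obs, p0=[4.0, 5.0])
print(f"Vmax = {Vmax:.2f} mL/gVS/day, Km = {Km:.2f} gVS/L")
```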

Keywords: biogas, kinetics, DynoChem scale-up, Michaelis-Menten

Procedia PDF Downloads 484
3473 An Improved Tracking Approach Using Particle Filter and Background Subtraction

Authors: Amir Mukhtar, Dr. Likun Xia

Abstract:

An improved, robust and efficient visual target tracking algorithm using particle filtering is proposed. Particle filtering has proven very successful in non-Gaussian and non-linear estimation problems. In this paper, the particle filter is used with a color feature to estimate the target state over time. Color distributions are applied because this feature is scale and rotation invariant, is robust to partial occlusion, and is computationally efficient. The performance is made more robust by choosing the YIQ color scheme. Tracking is performed by comparing the chrominance histograms of the target and candidate positions (particles). Color-based particle filter tracking often leads to inaccurate results when light intensity changes during a video stream. Furthermore, a background subtraction technique is used for size estimation of the target. A qualitative evaluation of the proposed algorithm is performed on several real-world videos. The experimental results demonstrate that the improved algorithm can track moving objects very well under illumination changes, occlusion and moving backgrounds.
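
The following is a schematic Python sketch of the core weighting-and-resampling step, assuming chrominance histograms are compared with a Bhattacharyya-based likelihood; the background-subtraction size estimation is omitted and all parameter values are illustrative.

```python
import numpy as np

def chrominance_hist(patch, bins=16):
    """2D histogram over the I and Q chrominance channels of a patch (H x W x 2)."""
    hist, _, _ = np.histogram2d(patch[..., 0].ravel(), patch[..., 1].ravel(),
                                bins=bins, range=[[-1, 1], [-1, 1]])
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    return np.sum(np.sqrt(p * q))

def update_particles(particles, frame_iq, target_hist, patch=16, sigma=0.1):
    """Weight each particle by histogram similarity, then resample (SIR step)."""
    h, w = frame_iq.shape[:2]
    weights = []
    for x, y in particles.astype(int):
        x = np.clip(x, 0, w - patch); y = np.clip(y, 0, h - patch)
        cand = chrominance_hist(frame_iq[y:y + patch, x:x + patch])
        d = np.sqrt(max(1.0 - bhattacharyya(target_hist, cand), 0.0))
        weights.append(np.exp(-d**2 / (2 * sigma**2)))
    weights = np.asarray(weights) / np.sum(weights)
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx] + np.random.randn(len(particles), 2) * 2.0  # diffuse

# Tiny demo with a stand-in I/Q frame and a reference patch
frame = np.random.uniform(-0.5, 0.5, (120, 160, 2))
target = chrominance_hist(frame[40:56, 60:76])
parts = update_particles(np.random.rand(100, 2) * [160, 120], frame, target)
print(parts.shape)
```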

Keywords: tracking, particle filter, histogram, corner points, occlusion, illumination

Procedia PDF Downloads 366
3472 Investigation of the Capability of RELAP5 to Solve Complex Fuel Geometry

Authors: D. Abdelrazek, M. NaguibAly, A. A. Badawi, Asmaa G. Abo Elnour, A. A. El-Kafas

Abstract:

This work was developed within IAEA Coordinated Research Program 1496, “Innovative methods in research reactor analysis: Benchmark against experimental data on neutronics and thermal-hydraulic computational methods and tools for operation and safety analysis of research reactors.” The study investigates the capability of the code RELAP5/Mod3.4 to handle complex fuel geometry. Its results are compared to those of PARET, a code commonly used in thermal-hydraulic analysis of research reactors and belonging to the MTR-PC package. The WWR-SM reactor at the Institute of Nuclear Physics (INP) in the Republic of Uzbekistan is simulated using both PARET and RELAP5 at steady state, and the results from the two codes are compared. The RELAP5 code succeeded in solving the complex fuel geometry, whereas the PARET code needed additional calculations to obtain the final result. Although the final results from PARET are more accurate, the small differences between the two sets of results make the RELAP5 code recommendable in the case of complex fuel assemblies.

Keywords: complex fuel geometry, PARET, RELAP5, WWR-SM reactor

Procedia PDF Downloads 323
3471 Comparative Isotherms Studies on Adsorptive Removal of Methyl Orange from Wastewater by Watermelon Rinds and Neem-Tree Leaves

Authors: Sadiq Sani, Muhammad B. Ibrahim

Abstract:

Watermelon rind powder (WRP) and neem-tree leaf powder (NLP) were used as adsorbents in equilibrium adsorption isotherm studies for the detoxification of methyl orange dye (MO) from simulated wastewater. The applicability of the process to various isotherm models was tested. All isotherms from the experimental data showed excellent linear reliability (R2: 0.9487-0.9992), but adsorption onto WRP was more reliable (R2: 0.9724-0.9992) than onto NLP (R2: 0.9487-0.9989), except for Temkin’s isotherm, where reliability was better onto NLP (R2: 0.9937) than onto WRP (R2: 0.9935). Dubinin-Radushkevich’s monolayer adsorption capacities for both WRP and NLP (qD: 20.72 mg/g, 23.09 mg/g) were better than Langmuir’s (qm: 18.62 mg/g, 21.23 mg/g), with both capacities higher for adsorption onto NLP (qD: 23.09 mg/g; qm: 21.23 mg/g) than onto WRP (qD: 20.72 mg/g; qm: 18.62 mg/g). While the values of Langmuir’s separation factor (RL) for both adsorbents suggested unfavourable adsorption processes (RL: -0.0461, -0.0250), the Freundlich constant (nF) indicated a favourable process onto both WRP (nF: 3.78) and NLP (nF: 5.47). Adsorption onto NLP had a higher Dubinin-Radushkevich mean free energy of adsorption (E: 0.13 kJ/mol) than WRP (E: 0.08 kJ/mol), and Temkin’s heat of adsorption (bT) was better onto NLP (bT: -0.54 kJ/mol) than onto WRP (bT: -0.95 kJ/mol), all of which suggests physical adsorption.
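
For readers who want to reproduce this kind of analysis, the sketch below fits linearised Langmuir and Freundlich isotherms with NumPy and computes RL and nF; the equilibrium data and the assumed initial concentration C0 are placeholders, not the study's values.

```python
import numpy as np

# Illustrative equilibrium data: Ce (mg/L) and qe (mg/g); not the paper's values.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([4.2, 7.5, 11.8, 15.6, 17.9])

# Linearised Langmuir: Ce/qe = Ce/qm + 1/(KL*qm)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm, KL = 1.0 / slope, slope / intercept
C0 = 100.0                              # assumed initial dye concentration (mg/L)
RL = 1.0 / (1.0 + KL * C0)              # Langmuir separation factor

# Linearised Freundlich: log qe = log KF + (1/nF) * log Ce
slopeF, interceptF = np.polyfit(np.log10(Ce), np.log10(qe), 1)
nF, KF = 1.0 / slopeF, 10**interceptF

print(f"Langmuir: qm={qm:.2f} mg/g, KL={KL:.4f} L/mg, RL={RL:.3f}")
print(f"Freundlich: nF={nF:.2f}, KF={KF:.2f}")
```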

Keywords: adsorption isotherms, methyl orange, neem leaves, watermelon rinds

Procedia PDF Downloads 259
3470 The Convolution Recurrent Network of Using Residual LSTM to Process the Output of the Downsampling for Monaural Speech Enhancement

Authors: Shibo Wei, Ting Jiang

Abstract:

Convolutional-recurrent neural networks (CRN) have recently achieved much success in the speech enhancement field. The common processing method is to use convolution layers to compress the feature space through multiple downsampling steps and then model the compressed features with an LSTM layer. Finally, the enhanced speech is obtained through deconvolution operations that integrate the global information of the speech sequence. However, the feature-space compression may cause a loss of information, so we propose to model the downsampling result of each step with a residual LSTM layer, then join it with the output of the corresponding deconvolution layer and feed them to the next deconvolution layer; in this way, the global information of the speech sequence is integrated better. The experimental results show that the network model we introduce (RES-CRN) achieves better performance than the original CRN, whether with LSTM without residual connections or with simply stacked LSTM layers, in terms of scale-invariant signal-to-noise ratio (SI-SNR), speech quality (PESQ), and intelligibility (STOI).
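
A minimal PyTorch sketch of a residual LSTM block applied to one downsampling stage's output is shown below; the layer sizes and the projection back to the input width are assumptions for illustration, not the RES-CRN architecture itself.

```python
import torch
import torch.nn as nn

class ResidualLSTM(nn.Module):
    """LSTM whose output is projected and added back to its input, used here to
    model one downsampling stage's feature map before it is joined with the
    corresponding decoder (deconvolution) output."""
    def __init__(self, feat_dim, hidden=None):
        super().__init__()
        hidden = hidden or feat_dim
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, feat_dim)

    def forward(self, x):                 # x: (batch, time, feat_dim)
        y, _ = self.lstm(x)
        return x + self.proj(y)           # residual connection

# One encoder stage output: batch of 8, 100 frames, 64 features
feats = torch.randn(8, 100, 64)
skip = ResidualLSTM(64)(feats)            # fed to the decoder as a skip connection
print(skip.shape)                         # torch.Size([8, 100, 64])
```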

Keywords: convolutional-recurrent neural networks, speech enhancement, residual LSTM, SI-SNR

Procedia PDF Downloads 187
3469 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and to support flood modelling and prediction. One study showed that, even when lumped models are used for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical method (variance-reduction method) combined with simulated annealing was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure does not depend only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975 – 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were also compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the Exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations, which were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
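
As a small illustration of one ingredient of this workflow, the sketch below fits an exponential semivariogram model with SciPy; the empirical semivariogram values are placeholders, and the kriging-variance computation and simulated annealing search over gauge locations are only indicated in the comments.

```python
import numpy as np
from scipy.optimize import curve_fit

def exponential_semivariogram(h, nugget, sill, rng):
    """gamma(h) = nugget + sill * (1 - exp(-h / range))"""
    return nugget + sill * (1.0 - np.exp(-h / rng))

# Illustrative empirical semivariogram: lag distance (km) vs semivariance
lags = np.array([5, 10, 20, 40, 80, 120], dtype=float)
gamma = np.array([2.1, 3.8, 6.0, 8.4, 9.6, 9.9])

params, _ = curve_fit(exponential_semivariogram, lags, gamma, p0=[1.0, 8.0, 30.0])
print("nugget, sill, range =", np.round(params, 2))
# Cross-validation over Spherical/Gaussian/Exponential models would then pick the
# best fit, and the kriging (estimation) variance under that model is the cost
# that the simulated annealing search over candidate gauge locations minimizes.
```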

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 293
3468 Tribological Behavior of Pongamia Oil Based Biodiesel Blended Lubricant at Different Load

Authors: Yashvir Singh, Amneesh Singla, Swapnil Bhurat

Abstract:

Around the globe, there is demand for the development of bio-based lubricants that are biodegradable, non-toxic, and environmentally friendly. This paper outlines the friction and wear characteristics of a Pongamia-biodiesel-contaminated bio-lubricant studied using a pin-on-disc tribometer. To formulate the bio-lubricants, Pongamia oil based biodiesel was blended in ratios of 5, 10, and 20% by volume with the base lubricant SAE 20W-40. Tribological characterization of these blends was carried out at a sliding velocity of 2.5 m/s under applied loads of 50, 100, and 150 N. The experimental results showed that the lubrication regime that occurred during the tests was boundary lubrication, while the main wear mechanism was adhesive wear. During testing, the lowest wear was found with the addition of 5 and 10% Pongamia oil based biodiesel; above this contamination level, the wear rate increased considerably. The addition of 5 and 10% Pongamia oil based biodiesel to the base lubricant acted as a very good lubricant additive that reduced friction and wear rate during the tests. It is concluded that PBO 5 and PBO 10 can act as alternative lubricants to increase mechanical efficiency at a sliding velocity of 2.5 m/s and contribute to reducing dependence on petroleum-based products.

Keywords: friction, load, pongamia oil blend, sliding velocity, wear

Procedia PDF Downloads 304
3467 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development

Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo

Abstract:

Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single-vessel and multi-vessel operations. Towing long oil containment booms, which are several hundreds of meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but the related vessel operations in particular are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea states or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season only. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses in which maritime navigational bridge simulators were used to train the oil spill response authorities. The simulators were equipped with an oil spill functionality module. The courses were targeted at the coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training. In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies and complements traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on oil spill behaviour as a reaction to response measures.

Keywords: maritime training, oil spill response, simulation, vessel manoeuvring

Procedia PDF Downloads 157
3466 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis

Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior

Abstract:

Mathematical models of drying are used to understand the drying process and to determine important parameters for the design and operation of the dryer. Jackfruit is widely consumed in the Northeast and is highly perishable, so techniques are needed to extend its conservation so that it can be distributed to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) and to indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: accuracy factor (Af), bias factor (Bf), root mean square error (RMSE) and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it needs four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
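
A minimal sketch of fitting the Page model and computing the reported indicators is given below, assuming the usual definitions Af = 10^mean(|log(pred/obs)|), Bf = 10^mean(log(pred/obs)), and %SEP = 100·RMSE/mean(obs); the drying data are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def page(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Illustrative drying data: time (h) vs observed moisture ratio
t = np.array([0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
mr = np.array([0.92, 0.83, 0.66, 0.52, 0.41, 0.32, 0.25, 0.20, 0.16, 0.13])

(k, n), _ = curve_fit(page, t, mr, p0=[0.1, 1.0])
pred = page(t, k, n)

ratio = np.log10(pred / mr)
Af = 10 ** np.mean(np.abs(ratio))                      # accuracy factor
Bf = 10 ** np.mean(ratio)                              # bias factor
rmse = np.sqrt(np.mean((pred - mr) ** 2))
sep = 100.0 * rmse / np.mean(mr)                       # % standard error of prediction
print(f"k={k:.3f}, n={n:.3f}, Af={Af:.3f}, Bf={Bf:.3f}, RMSE={rmse:.4f}, SEP={sep:.2f}%")
```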

Keywords: drying, models, jackfruit, biotechnology

Procedia PDF Downloads 371
3465 A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching

Authors: Yuan Zheng

Abstract:

3D model-based vehicle matching provides a new way for vehicle recognition, localization and tracking. Its key is to construct an evaluation function, also called a fitness function, to measure the degree of vehicle matching. Existing fitness functions often perform poorly when clutter and occlusion are present in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting pixel gradient information as well as silhouette information. In view of the discrepancy between the 3D vehicle model and the real vehicle, a weighting strategy is introduced to treat the fitting of the model’s wireframes differently. Additionally, a normalization operation for the model’s projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to cluttered backgrounds and partial occlusion.

Keywords: 3D-2D matching, fitness function, 3D vehicle model, local image gradient, silhouette information

Procedia PDF Downloads 386
3464 Experimental Investigation of Air Gap Membrane Distillation System with Heat Recovery

Authors: Yasser Elhenaw, A. Farag, Mohamed El-Ghandour, M. Shatat, G. H. Moustafa

Abstract:

This study investigates the performance of two spiral-wound Air Gap Membrane Distillation (AGMD) units. These units are connected in two different configurations in order to be tested and compared experimentally. In AGMD, coolant water is used to condense the water vapor leaving the membrane on a condensing plate. The rejected cooling water has a relatively high temperature, which can be used, depending on the operating parameters, to increase the thermal efficiency and water productivity. In the first configuration, the seawater feed flows in parallel and equally through both units and is then rejected; the coolant water is divided between the two units, and the heat source is divided between the two heat exchangers. In the second configuration, only the feed of the first unit is heated, while the cooling water rejected from that unit is used to heat the feed to the second. The performance of the system, estimated by the water productivity as well as the Gain Output Ratio (GOR), is measured for the two configurations at different feed flow rates, temperatures and salinities. The results show that, at steady-state conditions, the heat recovery configuration leads to an increase in water productivity of 25%.

Keywords: membrane distillation, heat transfer, heat recovery, desalination

Procedia PDF Downloads 254
3463 The Effects of Combination of Melatonin with and without Zinc on Gonadotropin Hormones in Female Rats

Authors: Fariba Rahimi, Morteza Zendedel, Mohammad Jaafar Rezaee, Bita Vazir, Shahin Fakour

Abstract:

The present study was carried out to investigate the effect of melatonin (Mel) with and without zinc (Zn) on gonadotropin hormones, as well as on thyroid (T3 and T4) hormone concentrations, in female rats. A total of 40 adult female rats were randomly divided into five treatment groups, each of 2 rats, in a Completely Randomized Design (CRD) for the entire research period. The rats were treated daily by gavage with Zn and melatonin as follows: T1 (control 1, basal diet) and T2 (control 2, treated with normal saline), while the experimental groups T3, T4 and T5 were treated with zinc (40 ppm), melatonin (5 mg/kg), and the combination of zinc plus melatonin at the same levels, respectively. Blood FSH and LH concentrations were measured. The results showed no significant differences between treatments in FSH and LH levels. The estrogen, progesterone and TSH levels in rats that received 5 mg of melatonin per day were higher than in the other groups, but not statistically significantly so (P>0.05). However, the T3 (thyroid) concentration decreased significantly (P<0.05) in the group that received 40 mg of zinc per kg compared to the other groups. No significant (P>0.05) difference was detected among treatments in T4 levels. In conclusion, except for T3, melatonin, zinc, or the blend of melatonin and Zn had no significant (P>0.05) effect on any other parameter in the female rats.

Keywords: zinc, melatonin, hormone, rat

Procedia PDF Downloads 96
3462 Density Measurement of Mixed Refrigerants R32+R1234yf and R125+R290 from 0°C to 100°C and at Pressures up to 10 MPa

Authors: Xiaoci Li, Yonghua Huang, Hui Lin

Abstract:

Optimizing the concentration of components in mixed refrigerants can potentially improve either the thermodynamic cycle performance or the safety performance of heat pumps and refrigerators. R32+R1234yf and R125+R290 are two promising binary mixed refrigerants for heat pumps working in cold areas. The p-ρ-T data of these mixtures are among the fundamental properties necessary for the design and performance evaluation of such heat pumps. Although the property data of mixtures can be predicted by mixing models based on the pure substances incorporated in programs such as the NIST Refprop database, direct property measurement is still helpful to reveal the true state behaviour and verify the models. Densities of the mixtures R32+R1234yf and R125+R290 were measured with an Anton Paar U-shaped oscillating-tube digital densimeter (DMA 4500) in the temperature range from 0 °C to 100 °C at pressures up to 10 MPa. The accuracy of the measurement reaches 0.00005 g/cm³. The experimental data are compared with the predictions of Refprop in the corresponding range of pressure and temperature.

Keywords: mixed refrigerant, density measurement, densimeter, thermodynamic property

Procedia PDF Downloads 286
3461 Discrete Group Search Optimizer for the Travelling Salesman Problem

Authors: Raed Alnajjar, Mohd Zakree, Ahmad Nazri

Abstract:

In this study, we apply the Discrete Group Search Optimizer (DGSO) to solve the Travelling Salesman Problem (TSP). The DGSO is a nature-inspired optimization algorithm that imitates animal behavior, especially animal searching behavior. The proposed DGSO uses a vector representation and several discrete operators, such as destruction, construction, differential evolution, swap and insert. The TSP is a well-known hard combinatorial optimization problem that seeks the shortest tour through a number of cities. The performance of the proposed DGSO is evaluated and tested on benchmark instances listed in the TSPLIB dataset. The experimental results show that the performance of the proposed DGSO is comparable with other state-of-the-art methods on some instances. The results show that DGSO outperforms the Ant Colony System (ACS) on some instances and outperforms other metaheuristics on most instances. In addition, the new results include a number of optimal solutions and some best-known results. DGSO was able to obtain feasible, good-quality solutions across all datasets.
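
The sketch below illustrates two of the named discrete operators (swap and insert) and a highly simplified producer/scrounger step for the TSP; it is an interpretation for illustration only, not the authors' DGSO.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a distance matrix dist[i][j]."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap(tour):
    """Swap operator: exchange two randomly chosen cities."""
    a, b = random.sample(range(len(tour)), 2)
    t = tour[:]
    t[a], t[b] = t[b], t[a]
    return t

def insert(tour):
    """Insert operator: remove one city and reinsert it at another position."""
    t = tour[:]
    city = t.pop(random.randrange(len(t)))
    t.insert(random.randrange(len(t) + 1), city)
    return t

def gso_step(producer, scroungers, dist):
    """One simplified GSO-style iteration: the producer scans neighbourhoods built
    from the discrete operators; scroungers move toward the best tour found."""
    candidates = [swap(producer), insert(producer), producer]
    best = min(candidates, key=lambda t: tour_length(t, dist))
    new_scroungers = [insert(best) for _ in scroungers]   # follow the producer
    return best, new_scroungers
```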

Keywords: discrete group search optimizer (DGSO), travelling salesman problem (TSP), variable neighborhood search (VNS)

Procedia PDF Downloads 315
3460 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels

Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe

Abstract:

The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for 1830 familial breast cancer patients since 1997. From this cohort, 540 were comprehensively screened using High-Resolution Melting Analysis or Next Generation Sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variations, but with the introduction of next generation sequencing in 2017, it was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, whereafter the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters such as DNA quality, quantity, and signal intensity or read depth were verified using positive and negative patients previously tested internationally. DNA quality and quantity proved to be the critical factors during the verification of both assays. The quantity influenced the relative copy number frequency directly, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when ligation was inhibited by a variant present within the ligation site. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics because of population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven. Verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both the multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the genetically diverse ethnic groups in order to reduce the false copy number variation positive rate and increase performance efficiency.

Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa

Procedia PDF Downloads 220
3459 Keyloggers Prevention with Time-Sensitive Obfuscation

Authors: Chien-Wei Hung, Fu-Hau Hsu, Chuan-Sheng Wang, Chia-Hao Lee

Abstract:

Nowadays, the abuse of keyloggers is one of the most widespread approaches to stealing sensitive information. In this paper, we propose an On-Screen Prompts Approach to Keyloggers (OSPAK), together with its analysis, for installation on public computers. OSPAK utilizes a canvas to cue users when their keystrokes are going to be logged or ignored by OSPAK. This approach can protect computers against the recording of sensitive inputs by obfuscating keyloggers with letters inserted among users' keystrokes. It adds a canvas below each password field in a webpage and consists of three parts: two background areas, a hit area and a moving foreground object. Letters typed at different valid time intervals are combined in accordance with their time-interval order, and valid time intervals are interleaved with invalid time intervals. OSPAK utilizes animation to visualize valid and invalid time intervals and can be integrated into a webpage as a browser extension. We have tested it against a series of known keyloggers and also performed a study with 95 users to evaluate how easily the tool is used. Experimental results from the volunteers show that OSPAK is a simple approach.

Keywords: authentication, computer security, keylogger, privacy, information leakage

Procedia PDF Downloads 110
3458 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process

Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa

Abstract:

Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand for renewable energy from organic matter has caused the spectrum of AD substrates to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, which is generated at around 140 billion metric tons globally every year. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade, particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) under different conditions, first to determine the concentration of the bioadditive that was most effective for overall process improvement and yield increase. The batch experiments were set up using 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating under mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation was used to non-linearly regress the resulting data and estimate the gas production potential, production rates, and the duration of the lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process, and the higher the lignin concentration, the greater the inhibition. The modelling also showed that the rates of gas production were influenced by the concentration of lignin added to the system – the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in ml/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the 300 mg rate increased by 0.1 ml/gVS.day over that of the 200 mg. The impact of the yeast-based bioadditive on the rate of production was most significant at 400 mg and 500 mg, where the rate was improved by 0.1 ml/gVS.day and 0.2 ml/gVS.day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content will be the next step in this research.
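
Assuming the usual three-parameter modified Gompertz form for cumulative biogas, P(t) = Pmax·exp(-exp(Rmax·e/Pmax·(λ - t) + 1)), a minimal non-linear regression sketch with SciPy could look like this; the cumulative yields below are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P_max, R_max, lag):
    """Cumulative biogas P(t); P_max = production potential (mL/gVS),
    R_max = maximum production rate (mL/gVS/day), lag = lag phase (days)."""
    return P_max * np.exp(-np.exp(R_max * np.e / P_max * (lag - t) + 1.0))

# Illustrative cumulative yields over a 56-day batch (weekly readings)
t = np.arange(0, 57, 7, dtype=float)
P = np.array([0, 12, 45, 90, 130, 155, 168, 174, 177], dtype=float)

(P_max, R_max, lag), _ = curve_fit(modified_gompertz, t, P, p0=[180.0, 5.0, 5.0])
print(f"P_max={P_max:.1f} mL/gVS, R_max={R_max:.2f} mL/gVS/day, lag={lag:.1f} days")
```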

Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas

Procedia PDF Downloads 81
3457 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method

Authors: Dangut Maren David, Skaf Zakwan

Abstract:

Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need to monitor system activities and enhance system-to-system or component-to-component interactions, which has resulted in the generation of large volumes of data known as big data. The analysis of big data is increasingly important; however, due to the complexity inherent in such datasets, including imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets, and it can affect the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement is then developed to predict component replacement in advance by exploring aircraft historical data. The approach is based on a hybrid ensemble-based method which improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach in terms of performance using real-world aircraft operation and maintenance datasets that span over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets, and our results again show good performance compared to other baseline classifiers.
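
The authors' hybrid ensemble is not specified in detail here; as a generic stand-in, the sketch below combines SMOTE oversampling (from the imbalanced-learn package) with a random forest on a synthetic, heavily imbalanced dataset to show the typical imbalance-aware workflow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE          # pip install imbalanced-learn

# Synthetic stand-in for the maintenance data: ~2% of records are "replace component"
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.98, 0.02],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (replacement) class before fitting the ensemble
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```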

Keywords: prognostics, data-driven, imbalance classification, deep learning

Procedia PDF Downloads 161
3456 Optimization of Surface Finish in Milling Operation Using Live Tooling via Taguchi Method

Authors: Harish Kumar Ponnappan, Joseph C. Chen

Abstract:

The main objective of this research is to optimize the surface roughness of a milling operation on AISI 1018 steel using live tooling on a HAAS ST-20 lathe. In this study, Taguchi analysis is used to optimize the milling process by investigating the effect of different machining parameters on surface roughness. An L9 orthogonal array is designed with four controllable factors at three levels each and one uncontrollable factor, resulting in 18 experimental runs. The optimal parameters determined from the Taguchi analysis were a feed rate of 76.2 mm/min, a spindle speed of 1150 rpm, a depth of cut of 0.762 mm, and a 2-flute TiN-coated high-speed steel tool. The process capability Cp and process capability index Cpk values were improved from 0.62 and -0.44 to 1.39 and 1.24, respectively. The average surface roughness value from the confirmation runs was 1.30 µm, decreasing the defect rate from 87.72% to 0.01%. The purpose of this study is to efficiently utilize the Taguchi design to optimize the surface roughness in a milling operation using live tooling.
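
For the smaller-the-better surface-roughness response, the Taguchi signal-to-noise ratio is S/N = -10·log10(mean(y²)); the sketch below computes per-run S/N values and the mean S/N per factor level for an L9 layout, with placeholder roughness values rather than the study's measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi signal-to-noise ratio for a smaller-the-better response (e.g. Ra)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Illustrative L9 layout: each run has a level (0/1/2) for 4 controllable factors
# and two roughness replicates from the noise factor (values are placeholders).
levels = np.array([[0,0,0,0],[0,1,1,1],[0,2,2,2],
                   [1,0,1,2],[1,1,2,0],[1,2,0,1],
                   [2,0,2,1],[2,1,0,2],[2,2,1,0]])
ra = np.array([[1.9,2.1],[1.6,1.7],[1.4,1.5],[1.8,1.9],[1.3,1.4],
               [1.5,1.6],[1.7,1.8],[1.2,1.3],[1.4,1.6]])

sn = np.array([sn_smaller_is_better(r) for r in ra])
for f in range(4):                      # mean S/N per level; the highest mean wins
    means = [sn[levels[:, f] == lv].mean() for lv in range(3)]
    print(f"factor {f}: best level = {int(np.argmax(means))}, mean S/N = {np.round(means, 2)}")
```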

Keywords: live tooling, surface roughness, taguchi analysis, CNC milling operation, CNC turning operation

Procedia PDF Downloads 129
3455 A Procedure for Post-Earthquake Damage Estimation Based on Detection of High-Frequency Transients

Authors: Aleksandar Zhelyazkov, Daniele Zonta, Helmut Wenzel, Peter Furtner

Abstract:

In the current research, structural health monitoring is considered for addressing the critical issue of post-earthquake damage detection. A non-standard approach for damage detection via acoustic emission is presented: acoustic emissions are monitored in the low-frequency range (up to 120 Hz). Such emissions are termed high-frequency transients. Further, a damage indicator defined as the Time-Ratio Damage Indicator is introduced. The indicator relies on time-instance measurements of damage initiation and deformation peaks. Based on these time-instance measurements, a procedure for estimating the maximum drift ratio is proposed. Monitoring data are used from a shaking-table test of a full-scale reinforced concrete bridge pier. Damage of the experimental column is successfully detected, and the proposed damage indicator is calculated.

Keywords: acoustic emission, damage detection, shaking table test, structural health monitoring

Procedia PDF Downloads 219
3454 Anti-Oxidant and Anti-Bacterial Properties of Camellia sinensis, Tea Plant

Authors: Rini Jarial, Puranjan Mishra, Lakhveer Singh, Sveta Thakur, A. W. Zularisam, Mimi Sakinah

Abstract:

The aim of the present study was to assess the biological properties of Camellia sinensis and to identify its functional compounds. The methanolic leaf extract (MLE) of commercial green tea (Camellia sinensis) was assessed for anti-bacterial activity by measuring inhibition zones against a panel of pathogenic bacterial strains using the agar diffusion method. The flavonoid (5.0 to 8.0 mg/ml) and protein (10 to 15 mg/ml) contents of the MLE were recorded. MLE at a concentration of 25 μg/ml showed marked anti-bacterial activity against all bacterial strains (11-30 mm zone of inhibition), which was maximal against Staphylococcus aureus (30 mm). The MLE of Camellia sinensis had the best MIC values of 2.25 and 0.56 mg/ml against S. aureus and Enterobacter sp., respectively. The MLE also possessed good anti-lipolytic activity (65%) against porcine pancreatic lipase (PPL) and cholesterol oxidase inhibition (79%). The present study provided strong experimental evidence that the MLE of Camellia sinensis is not only a potent source of natural anti-oxidants and anti-bacterial activity but also possesses efficient cholesterol degradation and anti-lipolytic activities that might be beneficial in body weight management.

Keywords: anti-oxidant, anti-bacterial activity, anti-lipolytic activity, Camellia sinensis, phyto-chemicals

Procedia PDF Downloads 282
3453 Operational Challenges of Marine Fiber Reinforced Polymer Composite Structures Coupled with Piezoelectric Transducers

Authors: H. Ucar, U. Aridogan

Abstract:

Composite structures have become attractive for the design of aerospace, automotive and marine applications due to demands and requirements for weight reduction, corrosion resistance and radar signature reduction. Studies on piezoelectric ceramic transducers (PZT) for diagnostics and health monitoring have gained attention for their sensing capabilities; however, PZT structures are prone to failure under heavy operational loads. In this paper, we develop a piezo-based Glass Fiber Reinforced Polymer (GFRP) composite finite element (FE) model, validate it with an experimental setup, and identify the applicability and limitations of PZTs for a marine application. A case study is conducted to assess the piezo-based sensing capabilities in a representative marine composite structure. An FE model of the composite structure combined with PZT patches is developed, after which the response and functionality are investigated according to the sea conditions. Results of this study clearly indicate the blockers and critical aspects towards industrialization and wide-range use of PZTs for marine composite applications.

Keywords: FRP composite, operational challenges, piezoelectric transducers, FE modeling

Procedia PDF Downloads 166