Search results for: choice experiments (CE)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4860

1740 3D Hybrid Multiphysics Lattice Boltzmann Model for Studying the Flow Behavior of Emulsions in Structured Rectangular Microchannels

Authors: Luma Al-Tamimi, Hassan Farhat, Wessam Hasan

Abstract:

A three-dimensional (3D) hybrid quasi-steady thermal lattice Boltzmann model is developed to couple the effects of surfactant, temperature, interfacial tension, and contact angle. This 3D model is an extended scheme of a previously introduced two-dimensional (2D) hybrid lattice Boltzmann model. The 3D model is used to study the combined multiphysics effects on emulsion systems flowing in rectangular microchannels with and without confinements, where the suspended phase is made of droplets, plugs, or a mixture of both. The simulation results show that emulsion systems with plugs as the suspended phase are more efficient than those with droplets, whereas mixed systems that form large plugs through coalescence have even greater efficiency. The 3D contact angle model generates results matching those of the 2D model, which were validated with experiments. Furthermore, the effects of various confinements on adhering single-drop systems are investigated to delineate their influence on the power required for transporting the suspended phase through the channel. It is shown that the deeper the constriction, the lower the system efficiency. Increasing the surfactant concentration or fluid temperature in a channel with confinement has a substantial positive effect on oil droplet transportation.

Keywords: lattice Boltzmann method, thermal, contact angle, surfactants, high viscosity ratio, porous media

Procedia PDF Downloads 175
1739 Detection of Defects in CFRP by Ultrasonic IR Thermographic Method

Authors: W. Swiderski

Abstract:

This paper introduces a diagnostic technique that enables examination of the internal structure of fibre-reinforced composite materials used in a range of applications. The main cause of damage in these materials is the changing distribution of load within a structure over its lifetime. Defects are typically complex, involving interruptions in fibre continuity, binder cracks, and loss of adhesion between fibres and binder; a defect in a composite material is usually more complicated than one in a metal. At present, infrared thermography is the most effective method for the non-destructive testing of composites. One IR thermography method used in non-destructive evaluation is vibrothermography. Vibrothermography is not a new non-destructive method, but the novel element of this test is the use of ultrasonic waves for the thermal stimulation of the material. In this paper, both modelling and experimental results are presented that illustrate the advantages and limitations of ultrasonic IR thermography in inspecting composite materials. The ThermoSon computer program, which computes 3D dynamic temperature distributions in anisotropic layered solids with subsurface defects subject to ultrasonic stimulation, was used to optimise heating parameters for the detection of subsurface defects in composite materials. The program allows for the analysis of transient heat conduction and ultrasonic wave propagation phenomena in solids. The experiments at MIAT were carried out with a FLIR SC 7600 IR camera. Ultrasonic stimulation was performed at frequencies from 15 kHz to 30 kHz with a maximum power of up to 2 kW.

Keywords: composite material, ultrasonic, infrared thermography, non-destructive testing

Procedia PDF Downloads 295
1738 Influence of Agroforestry Trees Leafy Biomass and Nitrogen Fertilizer on Crop Growth Rate and Relative Growth Rate of Maize

Authors: A. B. Alarape, O. D. Aba

Abstract:

The use of legume tree prunings as mulch in agroforestry systems is a common practice to maintain soil organic matter and improve soil fertility in the tropics. This study was conducted to determine the influence of agroforestry tree leafy biomass and nitrogen fertilizer on the crop growth rate (CGR) and relative growth rate (RGR) of maize. The experiments were laid out as a 3 x 4 x 2 factorial in a split-split plot design with three replicates. Leafy biomass treatments (control, Parkia biglobosa, and Albizia lebbeck) were the main plots, nitrogen rates (0, 40, 80, and 120 kg N ha⁻¹) the sub-plots, and maize varieties (DMR-ESR-7 and 2009 EVAT) the sub-sub plots. Data were analyzed using descriptive and inferential statistics (ANOVA) at α = 0.05. Incorporation of leafy biomass had a significant effect on RGR in 2015, while nitrogen application had a significant effect on CGR. 2009 EVAT had a higher CGR in 2015 at 4-6 and 6-8 WAP. Incorporation of Albizia leaves enhanced maize growth more than Parkia leaves did. Farmers are therefore encouraged to use Albizia leaves as mulch to enrich their soil for maize production, particularly when inorganic fertilizers are unavailable. Combining biomass incorporation with the application of 120 kg N ha⁻¹ gave the best maize growth.
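CGR and RGR follow the standard agronomic definitions: dry-matter gain per unit ground area per unit time, and per unit of existing dry matter per unit time, respectively. A minimal sketch of the computation between two destructive samplings; the weights, ground area, and sampling times below are hypothetical, not data from this study:

```python
import math

def crop_growth_rate(w1, w2, t1, t2, ground_area=1.0):
    # CGR: dry-matter gain per unit ground area per unit time (e.g., g per m^2 per week)
    return (w2 - w1) / (ground_area * (t2 - t1))

def relative_growth_rate(w1, w2, t1, t2):
    # RGR: dry-matter gain per unit of existing dry matter per unit time
    return (math.log(w2) - math.log(w1)) / (t2 - t1)

# Hypothetical dry weights (g) sampled at 4 and 6 weeks after planting (WAP)
cgr = crop_growth_rate(w1=20.0, w2=50.0, t1=4, t2=6)   # 15.0 per unit area per week
rgr = relative_growth_rate(w1=20.0, w2=50.0, t1=4, t2=6)
print(cgr, round(rgr, 3))
```

RGR uses natural logarithms, so it is independent of the weight units as long as both samplings use the same units.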

Keywords: agroforestry trees, fertilizer, growth, incorporation, leafy biomass

Procedia PDF Downloads 191
1737 Glycan Analyzer: Software to Annotate Glycan Structures from Exoglycosidase Experiments

Authors: Ian Walsh, Terry Nguyen-Khuong, Christopher H. Taron, Pauline M. Rudd

Abstract:

Glycoproteins and their covalently bonded glycans play critical roles in the immune system, cell communication, disease and disease prognosis. Ultra-performance liquid chromatography (UPLC) coupled with mass spectrometry is conventionally used to qualitatively and quantitatively characterise glycan structures in a given sample. Exoglycosidases are enzymes that catalyze the sequential removal of monosaccharides from the non-reducing end of glycans. They naturally have specificity for a particular type of sugar, its stereochemistry (α or β anomer), and its position of attachment to an adjacent sugar on the glycan. Thus, monitoring peak movements (in both the UPLC and MS1 data) after application of exoglycosidases provides a unique and effective way to annotate sugars in high detail, i.e., to differentiate positional and linkage isomers. Manual annotation of an exoglycosidase experiment is difficult and time-consuming; with increasing sample complexity and number of exoglycosidases, the analysis could require manually interpreting hundreds of peak movements. Recently, we have implemented pattern recognition software for the automated interpretation of UPLC-MS1 exoglycosidase digestions. In this work, we explain the software, indicate how much time it can save, and provide example usage showing the annotation of positional and linkage isomers in immunoglobulin G, apolipoprotein J, and simple glycan standards.

Keywords: bioinformatics, automated glycan assignment, liquid chromatography, mass spectrometry

Procedia PDF Downloads 200
1736 Experimental and Numerical Studies on Earthquake Shear Rupture Generation

Authors: Louis N. Y. Wong

Abstract:

En-echelon fractures, which appear as a set of regularly oriented and spaced fractures, are commonly found in rocks. Using both experimental and numerical approaches, this study investigates the interaction among such fractures and how this interaction ultimately contributes to the development of a shear rupture (fault), especially in brittle natural rocks. First, uniaxial compression tests are conducted on marble specimens containing en-echelon flaws, which are cut into the rock specimens using a water abrasive jet. The fracturing processes of these specimens leading to the formation of a fault are observed in detail with a high-speed camera. The influences of the flaw geometry on the production of tensile cracks and shear cracks, which in turn dictate the coalescence patterns of the entire set of en-echelon flaws, are comprehensively studied. Second, a numerical study based on a recently developed contact model, the flat-joint contact model, is carried out with the discrete element method (DEM) to model the present laboratory experiments. The numerical results provide a quantitative assessment of the interaction of en-echelon flaws. In particular, the evolution of the stress field, as well as the characteristics of new crack initiation, propagation, and coalescence associated with the generation of an eventual shear rupture, are studied in detail. The numerical results agree well with both the microscopic and macroscopic experimental observations.

Keywords: discrete element method, en-echelon flaws, fault, marble

Procedia PDF Downloads 255
1735 The Influence of a Vertical Rotation on the Fluid Dynamics of Compositional Plumes

Authors: Khaled Suleiman Mohammed Al-Mashrafi

Abstract:

A compositional plume is a flow, in a directional channel of finite width, of one fluid through another fluid of different material composition. The study of the dynamics of compositional plumes plays an essential role in many real-life applications, including industrial applications (e.g., iron casting), environmental applications (e.g., salt fingers and sea ice), and geophysical applications (e.g., solidification at the inner core boundary (ICB) of the Earth, and mantle plumes). The dynamics of compositional plumes have been investigated experimentally and theoretically. Experimental work observed that the plume flow appears to be stable, although some experiments showed that it can be unstable; theoretical investigations, in contrast, showed that the plume flow is unstable. This holds even if the plume is subject to rotation and/or a magnetic field, and even if another plume of different composition is also present. Notably, all previous theoretical studies on the dynamics of compositional plumes were conducted in unbounded domains. The present work investigates theoretically the influence of vertical walls (boundaries) on the dynamics of compositional plumes in the absence/presence of rotation. The mathematical model uses the equations of continuity, motion, heat, concentration of light material, and state. It is found that the presence of boundaries has a strong influence on the basic-state solution as well as on the stability of the plume, particularly when the plume is close to a boundary, but the compositional plume remains unstable.

Keywords: compositional plumes, stability, bounded domain, vertical boundaries

Procedia PDF Downloads 31
1734 Pre-Drying Effects on the Quality of Frying Oil

Authors: Hasan Yalcin, Tugba Dursun Capar

Abstract:

Deep-fat frying causes desirable as well as undesirable changes in both oil and potato, and changes the quality of the oil through hydrolysis, oxidation, and polymerization. The main objective of the present study was to investigate the effects of pre-drying on the quality of both the frying oil and the potatoes. Prior to frying, potato slices (10 mm x 10 mm x 30 mm) were air-dried at 60°C for 15, 30, 45, 60, 90, or 120 min. Potato slices without the pre-drying treatment served as the control. Potato slices were fried in sunflower oil at 180°C for 5, 10, and 13 min. The deep-frying experiments were repeated five times using new potato slices in the same oil without oil replenishment. Samples of the fresh oil, together with those taken at the end of the 1st, 3rd, and 5th frying operations, were removed and analysed. The moisture content, colour, and oil uptake of the potato, and the colour, peroxide value (PV), free fatty acid (FFA) content, fatty acid composition, and viscosity of the used oil were evaluated. The effect of frying time was also examined. The results show that the pre-drying treatment had a significant effect on the physicochemical properties and colour parameters of the potato slices and the frying oil. Pre-drying considerably decreased oil absorption; the lowest oil absorption was found for the treatment pre-dried for 120 min and fried for 5 min. The FFA levels increased steadily for each pre-treatment throughout the frying period, and all pre-drying treatments reached their maximum FFA levels by the end of the frying procedure. The PV of the control and of the 60 min pre-dried sample decreased after the third frying, whereas the PV of the other samples increased steadily throughout the frying periods. Lastly, pre-drying did not considerably affect the fatty acid composition of the frying oil compared with unused oil.

Keywords: air-drying, deep-fat frying, moisture content, oil uptake, quality

Procedia PDF Downloads 308
1733 Combating Malaria: A Drug Discovery Approach Using Thiazole Derivatives Against Prolific Parasite Enzyme PfPKG

Authors: Hari Bezwada, Michelle Cheon, Ryan Divan, Hannah Escritor, Michelle Kagramian, Isha Korgaonkar, Maya MacAdams, Udgita Pamidigantam, Richard Pilny, Eleanor Race, Angadh Singh, Nathan Zhang, LeeAnn Nguyen, Fina Liotta

Abstract:

Malaria is a deadly disease caused by the Plasmodium parasite, which continues to develop resistance to current antimalarial drugs. In this research project, the effectiveness of several thiazole derivatives in inhibiting PfPKG, a protein kinase crucial to the Plasmodium life cycle, was explored. The study involved the synthesis of six thiazole-derived amides intended to inhibit the PfPKG pathway. Nuclear magnetic resonance (NMR) spectroscopy and infrared (IR) spectroscopy were used to characterize these compounds, and the AutoDock docking software was used to predict their binding affinities to PfPKG in silico. Compound 6 exhibited the highest predicted binding affinity to PfPKG, while compound 5 had the lowest; compounds 1-4 displayed intermediate predicted binding affinities. In vitro, compound 4 showed the best percent inhibition and compound 5 the worst. Overall, all six compounds were weak inhibitors (approximately 30-39% inhibition at 10 μM), but these results provide a foundation for future drug discovery experiments.

Keywords: Medicinal Chemistry, Malaria, drug discovery, PfPKG, Thiazole, Plasmodium

Procedia PDF Downloads 98
1732 Mechanical Characterization of Porcine Skin with the Finite Element Method Based Inverse Optimization Approach

Authors: Djamel Remache, Serge Dos Santos, Michael Cliez, Michel Gratton, Patrick Chabrand, Jean-Marie Rossi, Jean-Louis Milan

Abstract:

Skin tissue is an inhomogeneous and anisotropic material. Uniaxial tensile testing is one of the primary techniques for the mechanical characterization of skin at large scales. To predict the mechanical behavior of materials, direct or inverse analytical approaches are often used; however, for an inhomogeneous and anisotropic material such as skin tissue, analytical approaches cannot provide solutions, and numerical simulation is thus necessary. In this work, the uniaxial tensile test and an inverse method based on the FEM (finite element method) were used to identify the anisotropic mechanical properties of porcine skin tissue. The uniaxial tensile experiments were performed using an Instron® 8800 tensile machine. The uniaxial tensile test was simulated with the FEM, and the inverse optimization approach (inverse calibration) was then used to identify the mechanical properties of the samples. Experimental results were compared to the finite element solutions. The results showed that the finite element model predictions of the mechanical behavior of the tested skin samples correlated well with the experimental results.

Keywords: mechanical skin tissue behavior, uniaxial tensile test, finite element analysis, inverse optimization approach

Procedia PDF Downloads 408
1731 The Coalescence Process of Droplet Pairs in Different Junctions

Authors: Xiang Wang, Yan Pang, Zhaomiao Liu

Abstract:

Droplet-based microfluidics has been studied extensively alongside the development of micro-electro-mechanical systems (MEMS), and offers the advantages of high throughput, high efficiency, low cost, and low polydispersity. Droplets, acting as versatile carriers, provide isolated chambers in which the internal dispersed phase is protected from the outside continuous phase. Droplets are used to add reagents to start or stop bio-chemical reactions, to generate concentration gradients, and to realize hydrate crystallization or protein analyses, with droplet coalescence serving as an important control technology. In this paper, deionized water is used as the dispersed phase, and several kinds of oil are used as the continuous phase to investigate the influence of the viscosity ratio of the two phases on the coalescence process. The microchannels are fabricated by bonding a polydimethylsiloxane (PDMS) layer onto another PDMS flat plate after corona treatment. All newly made microchannels are rinsed with the continuous oil phase for hours before the experiments to ensure that swelling is fully developed. A high-speed microscope system is used to record serial videos at up to 2000 frames per second. The critical capillary numbers (Ca*) of droplet pairs in various junctions are studied and compared. Ca* varies with the junction and the liquids used, within the range of 0.002 to 0.01. However, droplets without extra control suffer from a lack of synchronism, which reduces the coalescence efficiency.
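The capillary number here is the standard dimensionless group Ca = μu/σ (continuous-phase viscosity times characteristic velocity, divided by interfacial tension). A minimal sketch; the property values below are hypothetical, chosen only to land inside the reported critical range:

```python
def capillary_number(mu_pa_s, u_m_s, sigma_n_m):
    # Ca = mu * u / sigma: ratio of viscous to interfacial forces
    return mu_pa_s * u_m_s / sigma_n_m

# Hypothetical oil/water values: mu = 0.05 Pa·s, u = 2 mm/s, sigma = 25 mN/m
ca = capillary_number(mu_pa_s=0.05, u_m_s=0.002, sigma_n_m=0.025)
print(ca)  # about 0.004, within the reported critical range 0.002-0.01
```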

Keywords: coalescence, concentration, critical capillary number, droplet pair, split

Procedia PDF Downloads 251
1730 Best Practices in Designing a Mentoring Programme for Soft Skills Development

Authors: D. Kokt, T. F. Dreyer

Abstract:

The main objective of the study was to design a mentoring programme aimed at developing the soft skills of mentors, all of whom are employed by a multinational corporation. The company had a mentoring plan in place that did not yield the required results, especially with regard to the development of soft skills. This prompted the researchers to conduct an extensive literature review, followed by a mixed-methods study, to ascertain best practices in developing the soft skills of mentors. The outcomes of the study led to the development of a structured mentoring programme of 25 modules to be completed by mentors. The design incorporated a blended modular approach using both face-to-face teaching and teaching supported by information and communication technology (ICT). Blended learning was ideal, as the ICT component helped to minimise physical instructor-mentor contact as part of the health measures during the Covid-19 pandemic. The blended learning approach also gave instructors and mentors an online or offline mode, so that mentors could have more time for creative and cooperative exercises. A range of delivery methodologies was spread across the modules to ensure mentor engagement and accelerate mentor development, including concept development through in-person and virtual instructor-led training sessions, simulations, case studies, e-learning, role plays, interactive learning using mentoring toolkits, and experiential learning through application. The mentor development journey included formal modular competency assessments: each module contained a post-competency assessment of 10 questions (a combination of explanatory and multiple-choice questions) to ensure understanding and address identified competency gaps. The minimum pass mark for all modular competency assessments was 80%, and mentors who scored less than 80% were allowed to retake the assessment until they demonstrated understanding at the required level.

Keywords: mentor, mentee, soft skills, mentor development, blended learning, modular approach

Procedia PDF Downloads 28
1729 Surface Pressure Distributions for a Forebody Using Pressure Sensitive Paint

Authors: Yi-Xuan Huang, Kung-Ming Chung, Ping-Han Chung

Abstract:

Pressure-sensitive paint (PSP), which relies on the oxygen quenching of a luminescent molecule, is an optical technique used on wind-tunnel models. A full-field pressure pattern with low aerodynamic interference can be obtained, and the technique is becoming an alternative to pressure measurements using pressure taps. In this study, a polymer-ceramic PSP was used, with toluene as the solvent. The porous particle and polymer were silica gel (SiO₂) and RTV-118 (3 g:7 g), respectively, and the compound was sprayed onto the model surface with a spray gun. The absorption and emission spectra of the Ru(dpp) luminophore were 441-467 nm and 597 nm, respectively. A Revox SLG-55 light source with a short-pass filter (550 nm) and a 14-bit CCD camera with a long-pass filter (600 nm) were used to illuminate the PSP and to capture images. This study determines surface pressure patterns for the forebody of an AGARD B model in compressible flow. Since no experimental surface pressure data are available for comparison, a numerical simulation was conducted using ANSYS Fluent, and the computed lift and drag coefficients are compared with data in the open literature. The experiments were conducted in a transonic wind tunnel at the Aerospace Science and Research Center, National Cheng Kung University. The freestream Mach number was 0.83, and the angle of attack ranged from -4 to 8 degrees. The deviation between PSP and the numerical simulation is within 5%; however, the effect of the light-source setup should be taken into account to address the relative error.

Keywords: pressure sensitive paint, forebody, surface pressure, compressible flow

Procedia PDF Downloads 127
1728 Development of a Combustible Gas Detector with Two Sensor Modules to Enable Measuring Range of Low Concentration

Authors: Young Gyu Kim, Sangguk Ahn, Gyoutae Park, Hiesik Kim

Abstract:

In industrial gas fields, there are many problems in detecting extremely small amounts of combustible gas (CH₄) with a conventional semiconductor sensor: measurement is difficult at low concentrations, the stabilization time is long, and the initial response time is slow. In this study, we propose a method that solves these issues by using two specific sensors and compensating for the effects of temperature and humidity. The idea is to combine a catalytic-type and a semiconductor-type sensor and to exploit the advantages of each sensor's characteristics. To achieve this, we reduced the fluctuations of the gas sensors with temperature and humidity by adding circuits that sense temperature and humidity, and we derived the best calibration line for the gas sensors by adjusting a weighting value according to the changing patterns of temperature and humidity, whose data are acquired and stored in advance. The proposed gas leak detector uses two sensor modules: the semiconductor sensor operates first to measure small gas quantities, and the catalytic-type sensor takes over when the measuring range of the first sensor is exceeded. Experiments conclusively verified sharper sensitivity and faster response times than a conventional gas sensor, even at low gas concentrations. We believe the proposed idea will be very useful if other leak detectors are developed to measure extremely small quantities of toxic and flammable gases.

Keywords: gas sensor, leak detector, low concentration, calibration

Procedia PDF Downloads 240
1727 Bioremoval of Malachite Green Dye from Aqueous Solution Using Marine Algae: Isotherm, Kinetic and Mechanistic Study

Authors: M. Jerold, V. Sivasubramanian

Abstract:

This study reports the removal of malachite green (MG) from simulated wastewater using the marine macroalga Ulva lactuca. Batch biosorption experiments were carried out to determine the biosorption capacity, which was found to be maximum at pH 10. The effects of other operating parameters, such as biosorbent dosage, initial dye concentration, contact time, and agitation, were also investigated. Equilibrium was attained at 120 min with 0.1 g/L of biosorbent. The isotherm experimental data fitted the Langmuir model well, with an R² value of 0.994, and the maximum Langmuir biosorption capacity was found to be 76.92 mg/g. Furthermore, the Langmuir separation factor RL was found to be 0.004; since 0 < RL < 1, the adsorption is favorable. The biosorption kinetics of MG followed a pseudo-second-order kinetic model. The mechanistic study revealed that the biosorption of malachite green onto Ulva lactuca was controlled by film diffusion; the solute transfer in a solid-liquid adsorption process is characterized by film diffusion and/or particle diffusion. The thermodynamic study shows that ΔG° is negative, indicating the feasibility and spontaneous nature of the biosorption of malachite green. The biosorbent was characterized using scanning electron microscopy, Fourier-transform infrared spectroscopy, and elemental (CHNS: carbon, hydrogen, nitrogen, sulphur) analysis. This study showed that Ulva lactuca can be used as a promising biosorbent for the removal of MG from wastewater.
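The Langmuir quantities cited above follow q_e = q_max·K_L·C_e/(1 + K_L·C_e) and R_L = 1/(1 + K_L·C_0). In the sketch below, q_max = 76.92 mg/g is taken from the abstract, while the K_L and concentration values are hypothetical illustrations, not the study's fitted parameters:

```python
def langmuir_qe(ce, q_max, k_l):
    # Langmuir isotherm: equilibrium uptake q_e (mg/g) at equilibrium concentration c_e (mg/L)
    return q_max * k_l * ce / (1.0 + k_l * ce)

def separation_factor(c0, k_l):
    # R_L = 1 / (1 + K_L * C0); 0 < R_L < 1 indicates favorable adsorption
    return 1.0 / (1.0 + k_l * c0)

q_max = 76.92          # mg/g, reported in the abstract
k_l = 2.0              # L/mg, hypothetical
print(langmuir_qe(ce=5.0, q_max=q_max, k_l=k_l))
rl = separation_factor(c0=100.0, k_l=k_l)
print(0.0 < rl < 1.0)  # favorable adsorption
```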

Keywords: biosorption, Ulva lactuca, wastewater, malachite green, isotherm, kinetics

Procedia PDF Downloads 157
1726 Ultra-Fast pH-Gradient Ion Exchange Chromatography for the Separation of Monoclonal Antibody Charge Variants

Authors: Robert van Ling, Alexander Schwahn, Shanhua Lin, Ken Cook, Frank Steiner, Rowan Moore, Mauro de Pra

Abstract:

Purpose: Demonstration of fast, high-resolution charge variant analysis for monoclonal antibody (mAb) therapeutics within 5 minutes. Methods: Four commercially available mAbs (Bevacizumab, Cetuximab, Infliximab, and Trastuzumab) were used for all experiments. The charge variants of these therapeutic mAbs are analyzed on a strong cation exchange column with a linear pH gradient separation method. The linear gradient from pH 5.6 to pH 10.2 is generated over time by running a linear pump gradient from 100% Thermo Scientific™ CX-1 pH Gradient Buffer A (pH 5.6) to 100% CX-1 pH Gradient Buffer B (pH 10.2), using the Thermo Scientific™ Vanquish™ UHPLC system. Results: The pH gradient method is generally applicable to monoclonal antibody charge variant analysis. In conjunction with state-of-the-art column and UHPLC technology, ultra-fast, high-resolution separations are consistently achieved in under 5 minutes for all mAbs analyzed. Conclusion: The linear pH gradient method is a platform method for mAb charge variant analysis and can be easily optimized to improve separations and shorten cycle times. Ultra-fast charge variant separation is facilitated by UHPLC, which complements, and in some instances outperforms, CE approaches in terms of both resolution and throughput.
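The CX-1 buffer system is designed so that the delivered pH tracks the pumped fraction of buffer B approximately linearly. Under that assumption, the pH at any point in the gradient can be sketched as follows (the 5-minute gradient duration is a hypothetical illustration):

```python
def ph_at_time(t_min, t_gradient_min, ph_a=5.6, ph_b=10.2):
    # Linear pump gradient: the fraction of buffer B rises from 0 to 1 over
    # t_gradient_min, and pH is assumed to vary linearly with %B.
    frac_b = min(max(t_min / t_gradient_min, 0.0), 1.0)
    return ph_a + (ph_b - ph_a) * frac_b

# Hypothetical 5-minute gradient
print(ph_at_time(0.0, 5.0))   # 5.6 (100% buffer A)
print(ph_at_time(2.5, 5.0))   # about 7.9 at the midpoint
print(ph_at_time(5.0, 5.0))   # 10.2 (100% buffer B)
```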

Keywords: charge variants, ion exchange chromatography, monoclonal antibody, UHPLC

Procedia PDF Downloads 440
1725 Effective Nutrition Label Use on Smartphones

Authors: Vladimir Kulyukin, Tanwir Zaman, Sarat Kiran Andhavarapu

Abstract:

Research on nutrition label use identifies four factors that impede comprehension and retention of nutrition information by consumers: the label's location on the package, the presentation of information within the label, the label's surface size, and surrounding visual clutter. In this paper, a system is presented that makes nutrition label use more effective for nutrition information comprehension and retention. The system's front end is a smartphone application; its back end is a four-node Linux cluster for image recognition and data storage. Image frames captured on the smartphone are sent to the back end for skewed or aligned barcode recognition. When barcodes are recognized, corresponding nutrition labels are retrieved from a cloud database and presented to the user on the smartphone's touchscreen. Each displayed nutrition label is positioned centrally on the touchscreen with no surrounding visual clutter. Wikipedia links to important nutrition terms are embedded to improve comprehension and retention of nutrition information. Standard touch gestures (e.g., zoom in/out) available on mainstream smartphones are used to manipulate the label's surface size. The nutrition label database currently includes 200,000 nutrition labels compiled from public websites by a custom crawler. Stress test experiments with the node cluster are presented. Implications for proactive nutrition management and food policy are discussed.

Keywords: mobile computing, cloud computing, nutrition label use, nutrition management, barcode scanning

Procedia PDF Downloads 373
1724 Comprehensive Evaluation of COVID-19 Through Chest Images

Authors: Parisa Mansour

Abstract:

Coronavirus disease 2019 (COVID-19) was discovered at the end of 2019 and has since spread rapidly to countries around the world. Computed tomography (CT) images have been used as an important alternative to the time-consuming RT-PCR test. However, manual segmentation of CT images is a major challenge as the number of suspected cases increases; accurate and automatic segmentation of COVID-19 infections is urgently needed. Because the imaging features of a COVID-19 infection are diverse and often similar to the background, existing medical image segmentation methods cannot achieve satisfactory performance. In this work, we build a deep convolutional neural network (CNN) adapted for the segmentation of chest CT images with COVID-19 infections. First, we maintain a large and novel chest CT image database containing 165,667 annotated chest CT images from 861 patients with confirmed COVID-19. Inspired by the observation that the boundary of an infected lung region can be improved by global intensity adjustment, we introduce a feature variation (FV) block into the proposed deep CNN, which adaptively adjusts global image features for segmenting the COVID-19 infection; the proposed FV block can effectively and adaptively improve performance in different cases. We combine features at different scales by proposing a progressive atrous spatial pyramid fusion scheme to deal with advanced infection regions of various appearances and shapes. We conducted experiments on data collected in China and Germany and showed that the proposed deep CNN can achieve impressive performance.

Keywords: chest, COVID-19, chest Image, coronavirus, CT image, chest CT

Procedia PDF Downloads 57
1723 New Variational Approach for Contrast Enhancement of Color Image

Authors: Wanhyun Cho, Seongchae Seo, Soonja Kang

Abstract:

In this work, we propose a variational technique for image contrast enhancement that utilizes global and local information around each pixel. The energy functional is defined as a weighted linear combination of three terms: a local contrast term, a global contrast term, and a dispersion term. The local contrast term improves the contrast of an input image by increasing the grey-level differences between each pixel and its neighbors, thereby utilizing contextual information around each pixel. The global contrast term enhances contrast by minimizing the difference between the image's empirical distribution function and a target cumulative distribution function, making the probability distribution of pixel values symmetric about the median. The dispersion term controls the departure of the new pixel values from those of the original image, preserving the original image characteristics as far as possible. We then derive the Euler-Lagrange equation whose solution achieves the minimum of the proposed functional, using the fundamental lemma of the calculus of variations, and solve this equation with a gradient descent method, one of the dynamic approximation techniques. Finally, through various experiments, we demonstrate that the proposed method enhances the contrast of colour images better than existing techniques.
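The minimization scheme described (driving the image toward the Euler-Lagrange condition by gradient descent) can be illustrated on a toy convex surrogate of such a functional. The energy below keeps only a dispersion term and a simplified local-contrast term on one scan line; all weights and pixel values are hypothetical, and this is not the authors' exact functional:

```python
def descend(u0, grad, step=0.1, iters=500):
    # Dynamic-approximation scheme: u_{k+1} = u_k - step * dE/du
    u = list(u0)
    for _ in range(iters):
        u = [ui - step * g for ui, g in zip(u, grad(u))]
    return u

# Toy scan line f with (fixed) mean m. Per-pixel energy:
#   E_i(u) = gamma*(u_i - f_i)^2 - alpha*(u_i - m)^2
# The -alpha term stretches pixels away from the mean (contrast),
# the gamma term limits departure from the original (dispersion).
# gamma > alpha keeps E convex, so gradient descent converges.
f = [0.40, 0.45, 0.50, 0.55, 0.60]
m = sum(f) / len(f)
alpha, gamma = 0.3, 1.0

def grad(u):
    return [2*gamma*(ui - fi) - 2*alpha*(ui - m) for ui, fi in zip(u, f)]

u = descend(f, grad)
print(u[-1] - u[0] > f[-1] - f[0])  # True: the grey-level range is stretched
```

The closed-form minimizer here is u_i = (γ·f_i − α·m)/(γ − α), so the descent result can be checked directly against it.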

Keywords: color image, contrast enhancement technique, variational approach, Euler-Lagrange equation, dynamic approximation method, EME measure

Procedia PDF Downloads 449
1722 Complete Ensemble Empirical Mode Decomposition with Adaptive Noise Temporal Convolutional Network for Remaining Useful Life Prediction of Lithium Ion Batteries

Authors: Jing Zhao, Dayong Liu, Shihao Wang, Xinghua Zhu, Delong Li

Abstract:

Unmanned underwater vehicles generally operate in the deep sea, which has its own unique working conditions. Lithium-ion power batteries must offer the stability and endurance necessary for use as an underwater vehicle’s power source. It is therefore essential to accurately forecast how long lithium-ion batteries will last in order to maintain the system’s reliability and safety. To model and forecast lithium battery Remaining Useful Life (RUL), this research proposes a model based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise combined with a Temporal Convolutional Network (CEEMDAN-TCN). Two datasets, NASA and CALCE, which differ markedly in their capacity-fade fluctuations, are used to verify the model, and the experimental results are examined to demonstrate the generalizability of the approach. The experiments demonstrate the network structure’s strong universality and its ability to achieve good fitting results on the test set for various battery dataset types. The evaluation metrics reveal that the prediction performance of CEEMDAN-TCN is 25% to 35% better than that of a single neural network, showing that feature expansion and modal decomposition both enhance the model’s generalizability and can be highly useful in industrial settings.
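The building block of a TCN is the causal dilated convolution: the output at time t depends only on inputs at t and earlier, with the dilation controlling how far back each layer looks. A minimal NumPy sketch (shared weights and a plain residual add are simplifying assumptions, not the paper's exact network):

```python
import numpy as np

def causal_dilated_conv(x, weights, dilation):
    """Causal 1D convolution: output at t sees only x[t], x[t-d], x[t-2d], ..."""
    k = len(weights)
    pad = (k - 1) * dilation
    # Left-pad with zeros so output length equals input length, never peeking ahead.
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([
        sum(weights[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

def tcn_stack(x, weights, dilations=(1, 2, 4)):
    """Stack causal layers with doubling dilation and residual connections."""
    out = np.asarray(x, float)
    for d in dilations:
        out = out + causal_dilated_conv(out, weights, d)  # residual add
    return out
```

Doubling dilations give an exponentially growing receptive field, which is what lets a TCN capture the long-range capacity-fade trends in battery data.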

Keywords: lithium-ion battery, remaining useful life, complete EEMD with adaptive noise, temporal convolutional net

Procedia PDF Downloads 154
1721 The Theology of a Muslim Artist: Tawfiq al-Hakim

Authors: Abdul Rahman Chamseddine

Abstract:

Tawfiq al-Hakim remains one of the most prominent playwrights in his native Egypt and in the broader Arab world. His works, at the time of their release, drew international attention and acclaim. His 1933 masterpiece Ahl al-Kahf (The People of the Cave), especially, garnered fame and recognition in both Europe and the Arab world. Borrowing its title from the Qur’anic sura, al-Hakim’s play relays the untold story of the lives of those 'three saints' after they wake from their prolonged sleep. The playwright’s selection of topics upon which to base his works displays a deep appreciation of Arabic and Islamic heritage. Al-Hakim was clearly influenced by Islam, to such a degree that he wrote a biography of the Prophet Muhammad in 1936, very early in his career. In doing so, he was preceded by many poets and creative writers, notably Al-Barudi, Ahmad Shawqi, Haykal, Al-‘Aqqad, and Taha Husayn, each of whom had his own way of expressing his view of the Prophet Muhammad. Understanding the concern of all these renaissance men, among others, with the person of the Prophet is indispensable to this study. This project will examine the reasons behind al-Hakim’s choice to draw upon these particular texts, embedded as they are in the context of Arabic and Islamic heritage, and how his use of traditional texts serves his contemporary goals. The project will also analyze the image of Islam in al-Hakim’s imagination. Elsewhere, he envisions letters or conversations between God and himself, which offers a window into the powerful impact of the Divine on Tawfiq al-Hakim, one that informs his literature and merits further scholarly attention. His works, which occupy a major rank in Arabic literature, reveal not only Al-Hakim himself but also the unquestioned assumptions operative in the life of his community, its mental make-up, and its attitudes. Furthermore, studying the reception of works that touch on sensitive issues, such as writing a letter to God, in al-Hakim’s historical context is of great significance for comprehending the mentality of the Muslim community at that time.

Keywords: Arabic language, Arabic literature, Arabic theology, modern Arabic literature

Procedia PDF Downloads 366
1720 Destination Port Detection For Vessels: An Analytic Tool For Optimizing Port Authorities Resources

Authors: Lubna Eljabu, Mohammad Etemad, Stan Matwin

Abstract:

Port authorities in congested ports face many challenges in allocating their resources to provide safe and secure loading/unloading procedures for cargo vessels. Selecting a destination port is the decision of a vessel’s master, based on many factors such as weather, wave conditions, and changes of priorities. Access to a tool which leverages AIS messages to monitor vessels’ movements and accurately predict their next destination port enables an effective resource allocation process for port authorities. In this research, we propose a method, namely Reference Route of Trajectory (RRoT), to assist port authorities in predicting inflow and outflow traffic in their local environment by monitoring Automatic Identification System (AIS) messages. Our RRoT method creates a reference route based on historical AIS messages and applies trajectory similarity measures to identify the destination of a vessel from its recent movement. We evaluated five similarity measures: Discrete Fréchet Distance (DFD), Dynamic Time Warping (DTW), Partial Curve Mapping (PCM), Area between two curves (Area), and Curve Length (CL). Our experiments show that our method identifies the destination port with an accuracy of 98.97% and an F-measure of 99.08% using the Dynamic Time Warping (DTW) similarity measure.
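Of the five measures evaluated, DTW is the one that performed best, and it is simple to state: a dynamic program that allows elastic alignment between two sequences. The sketch below works on 1D sequences for brevity (real AIS trajectories are 2D lat/lon tracks), and the route dictionary is an illustrative stand-in for the paper's reference routes.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: advance a, advance b, or advance both.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def nearest_reference_route(query, routes):
    """Predict the destination whose reference route is DTW-closest to the query."""
    return min(routes, key=lambda name: dtw_distance(query, routes[name]))
```

Because DTW tolerates differences in sampling rate and speed, two tracks toward the same port match well even if the vessels moved at different paces.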

Keywords: spatial temporal data mining, trajectory mining, trajectory similarity, resource optimization

Procedia PDF Downloads 121
1719 An Exploratory Study to Understand the Economic Opportunities from Climate Change

Authors: Sharvari Parikh

Abstract:

Climate change has always been looked upon as a threat. Increased use of fossil fuels, depletion of biodiversity, certain human activities, and rising levels of greenhouse gas (GHG) emissions are the factors that have caused climate change, which is creating new risks and aggravating existing ones. This paper focuses on breaking the stereotypical perception of climate change and draws attention towards its constructive side. Studies around the world have concluded that climate change has presented us with many untapped opportunities. The next 15 years will be crucial: it is in our hands whether we grab these opportunities or let the situation get worse. The world stands at a stage where we cannot choose between averting climate change and promoting growth and development; in fact, the solution to climate change itself carries economic opportunities. The evidence presented in the paper shows how we can improve the lives of the world’s population at large through structural change that promotes environment-friendly investment. Rising investment in green energy and increased demand for climate-friendly products bring ample employment opportunities. The old technologies and machinery employed today lack efficiency and demand heavy maintenance, leading to high production costs; these can be brought down drastically by adopting green technologies, which are increasingly accessible and affordable. The world’s overall GDP has been heavily affected by the problems arising from worsening weather. Shifting to a green economy can not only eliminate these costs but also build a sound economy. Steering the economy towards a low-carbon future can lessen burdens such as fossil-fuel subsidies, public debt, unemployment, and poverty, and reduce healthcare expenses. It is clear that the world will be dragged into a 'darker phase' if current trends in fossil fuel and carbon consumption continue. Switching to a green economy is the only way to lift the world out of this phase. Climate change has opened the gates for a 'green and clean economy', and it will also bring the countries of the world together in pursuing that common goal.

Keywords: climate change, economic opportunities, green economy, green technology

Procedia PDF Downloads 243
1718 Mechanical, Physical and Durability Properties of Cement Mortars Added with Recycled PP/PE-Based Food Packaging Waste Material

Authors: Livia Guerini, Christian Paglia

Abstract:

In Switzerland, only a fraction of plastic waste from food packaging is collected and recycled for further use in the food industry. Reusing these waste plastics for building applications can therefore be an attractive alternative to disposal, both to ease the problem of waste management and to compensate for the depletion of raw materials needed for construction. In this study, experiments were conducted on the mechanical properties (compressive and flexural strength, elastic modulus), physical properties (density, workability, porosity, and water permeability), and durability (freeze/thaw resistance) of cementitious mortars with additions of recycled low-/high-density polyethylene (LDPE/HDPE) and polypropylene (PP) regrind (added at 5% and 10% by weight) and of LDPE sheets (added at 0.5% and 1.5% by weight), all derived from food packaging. The results show that as the addition of plastic material increases, the density and mechanical properties of the mortars decrease compared to conventional ones. Porosity is similar in all the mixtures made, while the workability and the permeability are affected not only by the amount added but also by the shape of the plastic aggregate. Freeze/thaw resistance, on the other hand, is significantly higher in mortars with plastic aggregates than in traditional mortar. This feature may be of interest for outdoor mortars in cold environments.

Keywords: food packaging waste, durability properties, mechanical properties, mortar, recycled PE, recycled PP

Procedia PDF Downloads 145
1717 Improving Topic Quality of Scripts by Using Scene Similarity Based Word Co-Occurrence

Authors: Yunseok Noh, Chang-Uk Kwak, Sun-Joong Kim, Seong-Bae Park

Abstract:

Scripts are one of the basic text resources for understanding broadcast content. Since broadcast media wields great influence over the public, tools for understanding broadcast content are increasingly required. Topic modeling is a method for obtaining a summary of broadcast content from its scripts. Scripts generally describe content through stage directions and speeches, and they also provide scene segments that can be seen as semantic units; a script can therefore be topic-modeled by treating each scene segment as a document. Because scripts consist mainly of speeches, however, relatively few word co-occurrences are observed within scene segments, which inevitably degrades the quality of topics learned by statistical methods. To tackle this problem, we propose a method of learning with additional word co-occurrence information obtained using scene similarities. The main idea for improving topic quality is that knowing that two or more texts are topically related is useful for learning high-quality topics; in turn, high-quality topics give more accurate information about whether two texts are related. In this paper, we regard two scene segments as related if their topical similarity is high enough, and we consider words as co-occurring if they appear together in topically related scene segments. In the experiments, we show that the proposed method generates higher-quality topics from Korean drama scripts than the baselines.
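The co-occurrence expansion can be sketched concretely: count word pairs within each scene, but first enlarge each scene's word pool with the words of any scene whose similarity exceeds a threshold. The sketch below uses plain bag-of-words cosine similarity as the relatedness signal; the paper uses topical similarity, so this is a simplified stand-in.

```python
import numpy as np
from itertools import combinations
from collections import Counter

def scene_vectors(scenes, vocab):
    """Bag-of-words count vector for each scene segment."""
    idx = {w: i for i, w in enumerate(vocab)}
    V = np.zeros((len(scenes), len(vocab)))
    for s, words in enumerate(scenes):
        for w in words:
            V[s, idx[w]] += 1
    return V

def cooccurrence_with_similar_scenes(scenes, vocab, threshold=0.5):
    """Pair counts within each scene, expanded with words from scenes
    whose cosine similarity exceeds the threshold."""
    V = scene_vectors(scenes, vocab)
    norms = np.linalg.norm(V, axis=1)
    sim = V @ V.T / np.outer(norms, norms)
    counts = Counter()
    for s, words in enumerate(scenes):
        pool = set(words)
        for t in range(len(scenes)):
            if t != s and sim[s, t] > threshold:
                pool |= set(scenes[t])  # expand with related-scene words
        for a, b in combinations(sorted(pool), 2):
            counts[(a, b)] += 1
    return counts
```

In the example below, "castle" and "crown" never share a scene, yet they co-occur via the two related "king" scenes, which is precisely the extra signal the method feeds into topic learning.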

Keywords: broadcasting contents, scripts, text similarity, topic model

Procedia PDF Downloads 318
1716 Influence of Readability of Paper-Based Braille on Vertical and Horizontal Dot Spacing in Braille Beginners

Authors: K. Doi, T. Nishimura, H. Fujimoto

Abstract:

The number of people who become visually impaired without sufficient tactile experience has increased due to various diseases. In particular, many people with acquired visual impairments due to accidents, disorders, or aging cannot adequately read Braille. It is known that learning Braille requires a great deal of time and the acquisition of various skills. In our previous studies, we reported one of the problems in learning Braille: the standard Braille size is too small for Braille beginners. Objective data regarding easily readable Braille sizes are also lacking. It is therefore necessary to conduct experiments to evaluate the Braille sizes that would make learning easier for beginners. In this study, to investigate easy-to-read vertical and horizontal dot spacings for beginners, we conducted a Braille reading experiment. We prepared test pieces using our original Braille printer, which can control Braille size. We specifically targeted Braille beginners with acquired visual impairments who were unfamiliar with Braille; accordingly, ten sighted subjects with no experience of reading Braille participated in the experiment. The vertical and horizontal dot spacings tested were 2.0, 2.3, 2.5, 2.7, 2.9, and 3.1 mm. The subjects were asked to read one Braille character at each controlled Braille size. The results of this experiment reveal that Braille beginners can read Braille accurately and quickly when both the vertical and horizontal dot spacing are 3.1 mm or more. This knowledge will be helpful in choosing Braille sizes for people with acquired visual impairments.

Keywords: paper-based Braille, vertical and horizontal dot spacing, readability, acquired visual impairment, Braille beginner

Procedia PDF Downloads 178
1715 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research in continuous optimization over the past decade is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and which benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that RMSprop and Adam combined with variance-reduced gradient estimators achieve even faster convergence.
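The combination can be sketched on a tiny least-squares finite sum: an L-SVRG estimator (single-sample gradient corrected by a reference full gradient, with a random loopless reference refresh) fed into AdaGrad's coordinate-wise step sizes. The step size, refresh probability, and objective below are illustrative choices, not the paper's experimental setup or exact algorithm.

```python
import numpy as np

def adalvr_sketch(A, b, iters=500, eta=1.0, p=0.1, eps=1e-8, seed=0):
    """AdaGrad steps on an L-SVRG estimator for
    F(w) = (1/n) * sum_i (a_i . w - b_i)^2 / 2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]  # single-sample gradient
    w = np.zeros(d)
    ref = w.copy()
    full = (A.T @ (A @ ref - b)) / n        # full gradient at the reference
    acc = np.zeros(d)                        # AdaGrad accumulator
    for _ in range(iters):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(ref, i) + full  # variance-reduced estimator
        acc += g * g
        w -= eta * g / (np.sqrt(acc) + eps)       # coordinate-wise AdaGrad step
        if rng.random() < p:                      # loopless reference refresh
            ref = w.copy()
            full = (A.T @ (A @ ref - b)) / n
    return w
```

The estimator is unbiased, and its variance shrinks as w and the reference approach the optimum, which is what allows the adaptive steps to stay large longer than plain SGD would permit.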

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 71
1714 The Noun-Phrase Elements on the Usage of the Zero Article

Authors: Wen Zhen

Abstract:

Compared to content words, function words, and especially articles, have been relatively overlooked by English learners. The article system, to a certain extent, becomes an obstacle to mastering English, and three principal factors concerning the nature of articles can be identified when discussing the difficulty of the English article system. The second-language acquisition process makes the article system still more complex, for [-ART] learners have to create a new grammatical category, causing even highly proficient non-native speakers to make errors. The reported acquisition sequence of the English articles shows that the zero article is acquired first and with low accuracy: it is often overused in the early stages of L2 acquisition. Although learners at the intermediate level move towards underusing the zero article once they realize that it does not cover every case, overproduction of the zero article occurs even among advanced L2 learners. The aim of this study is to investigate the noun-phrase factors which give rise to incorrect use or overuse of the zero article, thus providing suggestions for L2 English acquisition and enabling teachers to carry out effective instruction that activates conscious learning in students. The research question is answered through a corpus-based, data-driven analysis of noun-phrase elements, considering semantic context and the countability of noun phrases. Based on analysis of the International Thurber Thesis corpus, the results show that: (1) although [-definite, -specific] contexts favored the zero article, both [-definite, +specific] and [+definite, -specific] contexts showed less influence; reflecting on the frequency order of the zero article, prototypicality plays a vital role. (2) The EFL learners in this study have trouble classifying abstract nouns as countable. Overuse of the zero article arises when learners cannot make clear judgements on countability as a noun shifts from [+definite] to [-definite]; once a noun is perceived as uncountable, the choice falls back on the zero article. These findings suggest that learners should be engaged in recognizing the countability of new vocabulary, for example by explaining nouns in lexical phrases, and should explore more complex, discourse-dependent aspects of article use.

Keywords: noun phrase, zero article, corpus, second language acquisition

Procedia PDF Downloads 253
1713 A Combined Approach Based on Artificial Intelligence and Computer Vision for Qualitative Grading of Rice Grains

Authors: Hemad Zareiforoush, Saeed Minaei, Ahmad Banakar, Mohammad Reza Alizadeh

Abstract:

Quality inspection of rice (Oryza sativa L.) during its various processing stages is very important. In this research, an artificial-intelligence-based model coupled with computer vision techniques was developed as a decision support system for the qualitative grading of rice grains. To conduct the experiments, 25 samples of rice grains with different percentages of broken kernels (PBK) and degrees of milling (DOM) were first prepared, and their qualitative grade was assessed by experienced experts. The quality parameters of the same samples were then determined using a machine vision system. A grading model was developed based on fuzzy logic theory in MATLAB to relate the qualitative characteristics of the product to its quality. In total, 25 rules based on the AND operator and the Mamdani inference system were used for qualitative grading. The fuzzy inference system consisted of two input linguistic variables, DOM and PBK, obtained from the machine vision system, and one output variable, the quality of the product. The model output was finally defuzzified using the Center of Maximum (COM) method. To evaluate the developed model, the output of the fuzzy system was compared with the experts’ assessments, revealing that the model can estimate the qualitative grade of the product with an accuracy of 95.74%.
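A two-rule Mamdani system with min as the AND operator and a center-of-maximum style defuzzification can be sketched in a few lines. The triangular membership breakpoints and the two grade values below are illustrative assumptions; the paper's 25-rule MATLAB system is far richer.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b (0 outside [a, c])."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def grade_rice(dom, pbk):
    """Mamdani-style sketch: AND = min over rule antecedents,
    then output the grade of the strongest-firing rule (center of maximum)."""
    dom_high = tri(dom, 50, 100, 150)   # well milled (DOM in %)
    dom_low = tri(dom, -50, 0, 100)     # poorly milled
    pbk_low = tri(pbk, -50, 0, 50)      # few broken kernels (PBK in %)
    pbk_high = tri(pbk, 0, 50, 100)     # many broken kernels
    rules = {
        90.0: min(dom_high, pbk_low),   # good milling, few broken -> grade 90
        40.0: min(dom_low, pbk_high),   # poor milling, many broken -> grade 40
    }
    return max(rules, key=rules.get)
```

In a full COM defuzzifier the output would be the value at which the aggregated output membership peaks; with singleton rule consequents, picking the strongest-firing rule as above is the degenerate case.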

Keywords: machine vision, fuzzy logic, rice, quality

Procedia PDF Downloads 419
1712 NOx Prediction by Quasi-Dimensional Combustion Model of Hydrogen Enriched Compressed Natural Gas Engine

Authors: Anas Rao, Hao Duan, Fanhua Ma

Abstract:

Dependency on fossil fuels can be reduced by using hydrogen-enriched compressed natural gas (HCNG) in transportation vehicles. However, the NOx emissions of HCNG engines are significantly higher, and this has turned out to be their major drawback; the study of NOx emissions from HCNG engines is therefore a very important area of research. In this context, experiments were performed at different hydrogen percentages, ignition timings, air-fuel ratios, manifold absolute pressures, loads, and engine speeds. Simulation was then carried out with a quasi-dimensional combustion model of the HCNG engine, to which an NO mechanism was coupled in order to investigate NOx emission. Three NOx formation mechanisms were used to predict NOx emission: thermal NOx, prompt NOx, and the N2O mechanism. For validation, the NO curve was transformed into NO packets based on a temperature difference of 100 K for lean-burn conditions and 60 K for stoichiometric conditions, with the width of each packet taken as the ratio of its crank-angle duration to the total burn duration. The combustion chamber of the engine was divided into three zones, with each zone given by the product of the summation of NO packets and space. To check the accuracy of the model, the percentage error of NOx emission was evaluated; it lies in the range of ±6% for lean-burn and ±10% for stoichiometric conditions. Finally, the percentage contribution of each NO formation mechanism was evaluated.
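The dominant thermal (Zeldovich) pathway and the packet idea can be sketched as follows. The rate constant k1 = 1.8e14 · exp(-38370/T) for N2 + O → NO + N is a standard textbook value (Heywood), not taken from the paper, and the concentrations, packet temperatures, and durations below are purely illustrative.

```python
import numpy as np

def thermal_no_rate(T, o_conc, n2_conc):
    """Simplified thermal-NO formation rate, d[NO]/dt ~ 2 * k1 * [O] * [N2].
    T in K; concentrations in mol/cm^3; k1 in cm^3/(mol*s)."""
    k1 = 1.8e14 * np.exp(-38370.0 / T)  # Zeldovich N2 + O -> NO + N
    return 2.0 * k1 * o_conc * n2_conc

def packet_no(packet_temps, packet_durations, o_conc, n2_conc):
    """Total NO from burnt-gas packets, each held at its own temperature
    for its crank-angle duration (the packet approach in the abstract)."""
    rates = thermal_no_rate(np.asarray(packet_temps), o_conc, n2_conc)
    return float(np.sum(rates * np.asarray(packet_durations)))
```

The exponential temperature dependence is why NO formation is concentrated in the hottest packets, and why the packet discretization by temperature difference (100 K lean, 60 K stoichiometric) is a natural choice.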

Keywords: quasi-dimensional combustion, thermal NO, prompt NO, NO packet

Procedia PDF Downloads 251
1711 Received Signal Strength Indicator Based Localization of Bluetooth Devices Using Trilateration: An Improved Method for the Visually Impaired People

Authors: Muhammad Irfan Aziz, Thomas Owens, Uzair Khaleeq uz Zaman

Abstract:

Instantaneous spatial localization of visually impaired people in dynamically changing environments with unexpected hazards and obstacles is the most demanding and challenging issue faced by navigation systems today. Since Bluetooth cannot utilize techniques like Time Difference of Arrival (TDOA) and Time of Arrival (TOA), it uses the Received Signal Strength Indicator (RSSI) to measure Received Signal Strength (RSS). Measurements using RSSI can be improved significantly by improving the existing RSSI-based methodologies. This paper therefore proposes an improved trilateration-based method for localizing Bluetooth devices for visually impaired people. To validate the method, class 2 Bluetooth devices were used and supporting software was developed. Experiments were then conducted to obtain surface plots that showed signal interference and other environmental effects. The results show the surface plots for all the Bluetooth modules used, with strong and weak points depicted by color codes in red, yellow, and blue. It was concluded that the suggested improved method of measuring RSS using trilateration not only measures signal strength effectively but also highlights how signal strength can be influenced by atmospheric conditions such as noise, reflections, etc.
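The RSSI-to-position pipeline can be sketched in two steps: convert RSSI to range with a log-distance path-loss model, then solve a linearized least-squares trilateration over three or more anchors. The reference RSSI at 1 m and the path-loss exponent below are typical assumed values, not measurements from the paper.

```python
import numpy as np

def rssi_to_distance(rssi, rssi_ref=-59.0, n=2.0):
    """Log-distance path-loss model: range in metres from RSSI in dBm.
    rssi_ref is the RSSI at 1 m; n is the path-loss exponent (assumed)."""
    return 10 ** ((rssi_ref - rssi) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares 2D position from >= 3 anchors and estimated ranges.
    Subtracting the first circle equation from the others linearizes the system."""
    anchors = np.asarray(anchors, float)
    d = np.asarray(distances, float)
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], d[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d[0] ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol
```

With more than three anchors the least-squares solve averages out some of the RSSI noise, which is part of what makes the trilateration-based measurement more robust than a single range estimate.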

Keywords: Bluetooth, indoor/outdoor localization, received signal strength indicator, visually impaired

Procedia PDF Downloads 134