Search results for: mouse model
3347 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming
Authors: Vildan Kistik, Tuncay Can
Abstract:
From a business perspective, cost and profit are two key factors. Most businesses intend to minimize cost and maximize (or at least balance) profit, so as to gain the greatest benefit. However, the physical system is very complicated because of technological developments, the rapid intensification of competition, and similar factors; in such a system it is not easy to maximize profit or minimize cost. In selecting personnel, businesses must judge the competence and suitability of candidates against many criteria. Factors such as level of education, experience, psychological and sociological standing, and human relations in the field are just some of the important considerations when selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is unfortunately rarely encountered in practice. In this study, unlike other methods, an exponential programming model was established based on the probability that selected personnel fail once they start work. With the necessary transformations, the problem is converted into an unconstrained geometric programming problem, and the personnel selection problem is approached with the geometric programming technique. Personnel selection scenarios for a classroom were constructed with the help of the normal distribution, and optimum solutions were obtained; in these solutions, the personnel selection process for the classroom was achieved at minimum cost.
Keywords: geometric programming, personnel selection, non-linear programming, operations research
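As a toy illustration of the unconstrained geometric programming technique described in this abstract (the paper's actual model and coefficients are not reproduced here, so the two-term posynomial below is purely hypothetical), a one-variable posynomial can be minimized by bisection on its derivative in log-space, where the problem is convex:

```python
import math

def minimize_posynomial(c1, a1, c2, a2, lo=1e-6, hi=1e6, tol=1e-12):
    """Minimize f(x) = c1*x**a1 + c2*x**a2 (a1 > 0 > a2) over x > 0.
    Under the substitution t = ln x the objective is convex, so bisecting
    on the sign of f'(x) = a1*c1*x**(a1-1) + a2*c2*x**(a2-1) converges
    to the global optimum."""
    def dfdx(x):
        return a1 * c1 * x ** (a1 - 1) + a2 * c2 * x ** (a2 - 1)
    while hi - lo > tol * max(1.0, lo):
        mid = math.sqrt(lo * hi)  # geometric midpoint suits log-space search
        if dfdx(mid) > 0:
            hi = mid
        else:
            lo = mid
    x = math.sqrt(lo * hi)
    return x, c1 * x ** a1 + c2 * x ** a2
```

For example, f(x) = 2x + 8/x has its minimum at x = 2 with f(2) = 8, which the routine recovers numerically.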
Procedia PDF Downloads 267
3346 Screening the Growth Inhibition Mechanism of Sulfate-Reducing Bacteria by Chitosan/Lignosulfonate Nanocomposite in Seawater Media
Authors: K. Rasool
Abstract:
Sulfate-reducing bacteria (SRB)-induced biofilm formation is a global industrial concern due to its role in the development of microbially induced corrosion (MIC). Herein, we have developed a biodegradable chitosan/lignosulfonate nanocomposite (CS@LS) as an efficient green biocide for the inhibition of SRB biofilms, and we investigated in detail the inhibition mechanism of SRBs by CS@LS in seawater media. Stable CS@LS-1:1 with a 150–200 nm average size and a zeta potential of +34.25 mV was synthesized. The biocidal performance of CS@LS was evaluated by sulfate reduction profiles coupled with analysis of extracted extracellular polymeric substances (EPS) and lactate dehydrogenase (LDH) release assays. As the nanocomposite concentration was increased from 50 to 500 µg/mL, the specific sulfate reduction rate (SSRR) decreased from 0.278 to 0.036 g-sulfate/g-VSS*day, a relative sulfate reduction inhibition of 86.64% compared to the control. Similarly, the specific organic uptake rate (SOUR) decreased from 0.082 to 0.039 g-TOC/g-VSS*day, a relative co-substrate oxidation inhibition of 52.19% compared to the control. SRBs spiked with 500 µg/mL CS@LS showed a reduction in cell viability to 1.5 × 10⁶ MPN/mL. To assess the biosafety of the nanocomposite for marine biota, 72-hour acute toxicity assays using the zebrafish embryo model revealed an LC50 for CS@LS of 103.3 µg/mL; CS@LS can thus be classified as environmentally friendly. The nanocomposite showed long-term stability and excellent antibacterial properties against SRB growth and is thus potentially useful for combating biofilm growth in harsh marine and aquatic environments.
Keywords: green biocides, chitosan/lignosulfonate nanocomposite, SRBs, toxicity
Procedia PDF Downloads 119
3345 Fixed Point Iteration of a Damped and Unforced Duffing's Equation
Authors: Paschal A. Ochang, Emmanuel C. Oji
Abstract:
The Duffing equation is a second-order system that is important because such systems are fundamental to the behaviour of higher-order systems and have applications in almost all fields of science and engineering. In biology, it is useful in modelling plant-stem dependence and natural frequency and in Brain Crash Analysis (BCA); in engineering, it is useful in the study of damping in indoor construction and traffic lights; and to the meteorologist it is useful in the prediction of weather conditions. However, most problems that occur in real life are non-linear in nature and may not have analytical solutions except approximations or simulations, so finding an exact explicit solution may in general be complicated and sometimes impossible. We therefore aim to find out whether it is possible to obtain an analytical fixed point of this non-linear ordinary differential equation using a fixed-point analytical method. We started by exposing the scope of the Duffing equation and other related work on it. With a major focus on fixed points and fixed-point iterative schemes, we tried different iterative schemes on the Duffing equation. We identified that fixed points can be found only for a damped Duffing equation and not for an undamped one, because the cubic nonlinearity term is the determining factor of the equation. We finally arrived at results identifying the stability of an equation that is damped, forced and second order in nature. Generally, in this research, we approximate the solution of the Duffing equation by converting it to a system of first-order ordinary differential equations and using a fixed-point iterative approach.
This approach shows that for different versions of the (damped) Duffing equation we find fixed points; therefore, the order of computation and the running time of applied software in all fields using the Duffing equation will be reduced.
Keywords: damping, Duffing's equation, fixed point analysis, second order differential, stability analysis
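The conversion described in this abstract, from the second-order Duffing equation to a first-order system whose fixed point is then approached iteratively, can be sketched as follows (the parameter values and the explicit-Euler iteration are illustrative assumptions, not the authors' actual scheme):

```python
def duffing_fixed_point(alpha=1.0, beta=1.0, delta=0.3,
                        x0=1.0, v0=0.0, dt=0.01, steps=20000):
    """Iterate the damped, unforced Duffing equation
        x'' + delta*x' + alpha*x + beta*x**3 = 0
    rewritten as the first-order system x' = v, v' = -delta*v - alpha*x - beta*x**3.
    For positive damping (delta > 0) the iterates converge to the
    fixed point (x, v) = (0, 0)."""
    x, v = x0, v0
    for _ in range(steps):
        ax = -delta * v - alpha * x - beta * x ** 3  # acceleration
        x, v = x + dt * v, v + dt * ax               # explicit Euler step
    return x, v
```

Running the iteration from (1, 0) drives the state to within numerical noise of the origin, consistent with the abstract's observation that the damped equation possesses a findable fixed point.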
Procedia PDF Downloads 290
3344 Assessment of the Impact of Trawling Activities on Marine Bottoms of Moroccan Atlantic
Authors: Rachida Houssa, Hassan Rhinane, Fadoumo Ali Malouw, Amina Oulmaalem
Abstract:
Since the early 1970s, the Moroccan Atlantic has been subjected to the pressure of bottom trawling, one of the most destructive seabed fishing techniques: it is non-selective, wreaks havoc on the catch, and is responsible for more than half of all fish discards around the world. The present paper aims to map and assess the impact of bottom trawling along the Moroccan Atlantic coast. For this purpose, a thirty-year dataset (1962 to 1999) from foreign fishing vessels using bottom trawls was used and integrated into a GIS. To estimate the extent and geographical distribution of the trawling effort, the Moroccan Atlantic area was divided into a grid of 25 km² cells (5x5 km). This grid was joined to the trawling-effort data, creating a new entity whose table contains the spatial overlay of the grid with the polygons of swept surfaces. This mapping model allowed the fishing effort to be quantified over time and generated a trace indicative of trawling effort on the seabed. Indeed, for a given year, a grid cell may have a swept area equal to 0 (never touched by the trawl), 25 km² (the trawled area equals the cell size), or even 100 km², indicating that for that year the swept surface was four times the cell area. The results show that the total cumulative trawled area is approximately 28,738,326 km², scattered along the Atlantic coast. 95% of the overall trawling effort is located in the southern zone, between 29°N and 20°30'N. Nearly 5% is located in the northern coastal region, north of 33°N. The central area, between 33°N and 29°N, is the least swept by Russian commercial vessels because the majority of this region is rocky and non-trawlable.
Keywords: GIS, Moroccan Atlantic Ocean, seabed, trawling
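The grid-overlay accumulation this abstract describes, in which each 5x5 km cell sums the swept areas that fall in it and can therefore exceed its own 25 km² size, might be sketched like this (representing each tow as a point sample with a swept area is a simplification of the study's polygon overlay):

```python
from collections import defaultdict

CELL_KM = 5  # grid resolution used in the study: 5 x 5 km cells (25 km^2)

def accumulate_swept_area(tows, cell_km=CELL_KM):
    """Sum swept area (km^2) per grid cell from (x_km, y_km, swept_km2)
    tow records. A cell's total can exceed the 25 km^2 cell size when the
    same ground is trawled repeatedly across years."""
    grid = defaultdict(float)
    for x_km, y_km, swept_km2 in tows:
        cell = (int(x_km // cell_km), int(y_km // cell_km))
        grid[cell] += swept_km2
    return dict(grid)
```

A cell whose accumulated value is 100 km² has, as in the abstract's example, been swept four times over.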
Procedia PDF Downloads 327
3343 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep-learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolutional neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset; the ViT model achieved a top accuracy of 83.3%. For classifying plants at the species level, ViT models again perform better than ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals and the general public alike to identify plants more quickly and with improved accuracy.
Keywords: plant identification, CNN, image processing, vision transformer, classification
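The genus- and species-level figures quoted in this abstract are top-1 accuracies. As a minimal sketch of that metric, independent of the PyTorch models themselves (the score matrix below is invented for illustration), it can be computed directly from per-class model scores:

```python
def top1_accuracy(scores, labels):
    """Top-1 accuracy: the fraction of samples whose highest-scoring
    class index matches the true label. `scores` is a list of per-class
    score rows (one row per sample); `labels` holds the true class indices."""
    correct = sum(
        max(range(len(row)), key=row.__getitem__) == y
        for row, y in zip(scores, labels)
    )
    return correct / len(labels)
```

The same metric applies whether the scores come from a CNN or a ViT head, which is what makes the genus-level (83.3%) and species-level (92.5%) results directly comparable across architectures.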
Procedia PDF Downloads 102
3342 Reinforcement Learning for Robust Missile Autopilot Design: TRPO Enhanced by Schedule Experience Replay
Authors: Bernardo Cortez, Florian Peter, Thomas Lausenhammer, Paulo Oliveira
Abstract:
Designing a missile's autopilot controller has been a complex task, given the extensive flight envelope and the nonlinear flight dynamics. A solution that excels both in nominal performance and in robustness to uncertainties is still to be found. While control theory often resorts to parameter-scheduling procedures, reinforcement learning has presented interesting results on ever more complex tasks, from video games to robotic tasks with continuous action domains; however, it still lacks clear insight into how to find adequate reward functions and exploration strategies. To the best of our knowledge, this work is a pioneer in proposing reinforcement learning as a framework for flight control. It aims at training a model-free agent that can control the longitudinal nonlinear flight dynamics of a missile, achieving the target performance and robustness to uncertainties. To that end, under TRPO's methodology, the collected experience is augmented according to HER, stored in a replay buffer, and sampled according to its significance. Not only does this work extend prioritized experience replay into BPER, it also reformulates HER, activating both only when the training progress converges to suboptimal policies, in what is proposed as the SER methodology. The results show that it is possible both to achieve the target performance and to improve the agent's robustness to uncertainties (with little damage to nominal performance) by further training it in non-nominal environments, thereby validating the proposed approach and encouraging future research in this field.
Keywords: Reinforcement Learning, flight control, HER, missile autopilot, TRPO
Procedia PDF Downloads 262
3341 Theorizing about the Determinants of Sustainable Entrepreneurship Intention and Behavior
Authors: Mariella Pinna
Abstract:
Sustainable entrepreneurship is an innovative corporate approach to creating value by combining economic, social and environmental goals over time. In the last two decades, interest in sustainable entrepreneurship has flourished thanks to its potential to answer the current challenges of sustainable development. As a result, scholars are increasingly interested in understanding the determinants of the intention to become a sustainable entrepreneur and of the consistent behavior. To date, prior studies have provided empirical evidence for the influence of attitudes, perceived feasibility and desirability, values, and personality traits on the decision-making process of becoming a sustainable entrepreneur. Conversely, scant effort has been devoted to understanding which factors inhibit sustainable entrepreneurial intentions and behaviors; a global understanding of the sustainable entrepreneurship decision-making process is therefore missing. This paper contributes to the debate on sustainable entrepreneurship by proposing a conceptual model that combines the factors predicted to facilitate and to hinder the proclivity of individuals to become sustainable entrepreneurs. In particular, the proposed framework theorizes about the role of the characteristics of the prospective sustainable entrepreneur (e.g., socio-demographic, psychological, cultural), the positive antecedents (e.g., attitude, social feasibility and desirability, among others) and the negative precursors (e.g., neutralization) in influencing sustainable entrepreneurship intentions and subsequent behavior.
The proposed framework is expected to shed further light on the decision-making process of becoming a sustainable entrepreneur, which, in turn, is of practical relevance for public policy institutions and society as a whole in enhancing the favorable conditions for creating new sustainable ventures.
Keywords: sustainable entrepreneurship, entrepreneurial intentions, entrepreneurial decision-making, antecedents of entrepreneurial intention and behavior
Procedia PDF Downloads 211
3340 Comparison of Elastic and Viscoelastic Modeling for Asphalt Concrete Surface Layer
Authors: Fouzieh Rouzmehr, Mehdi Mousavi
Abstract:
Hot mix asphalt concrete (HMAC) is a mixture of aggregates and bitumen. The primary ingredient that determines the mechanical properties of HMAC is the bitumen, which displays viscoelastic behavior under normal service conditions. For simplicity, asphalt concrete is often treated as an elastic material, but this is far from reality at high service temperatures and longer loading times. Viscoelasticity means that the material's stress-strain relationship depends on the strain rate and loading duration. The goal of this paper is to simulate the mechanical response of flexible pavements using linear elastic and viscoelastic modeling of the asphalt concrete and to predict pavement performance. A Falling Weight Deflectometer (FWD) load is simulated, and the results of the elastic and viscoelastic modeling are evaluated. The viscoelastic behavior is represented by a Prony series and modeled using ANSYS software. In flexible pavement design, the tensile strain at the bottom of the surface layer and the compressive strain at the top of the last layer play an important role in the structural response of the pavement: they determine the allowable number of loads for fatigue (Nf) and rutting (Nd), respectively. The differences between the two models are investigated for fatigue cracking and rutting, the two main design criteria in flexible pavement design. Although the differences between the models were negligible for rutting, for fatigue cracking the viscoelastic model results were more accurate. Overall, the results indicate that modeling the flexible pavement with elastic material is efficient enough and gives acceptable results.
Keywords: flexible pavement, asphalt, FEM, viscoelastic, elastic, ANSYS, modeling
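The Prony series mentioned in this abstract expresses the relaxation modulus of the bitumen as a decaying sum of exponentials over relaxation times. A minimal sketch of evaluating it (with invented moduli and relaxation times, not the paper's calibrated values) is:

```python
import math

def prony_relaxation_modulus(t, e_inf, terms):
    """Prony-series relaxation modulus of a viscoelastic material:
        E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
    `terms` is a list of (E_i, tau_i) pairs; E_inf is the long-time
    (fully relaxed) modulus. Values here are illustrative only."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in terms)
```

At t = 0 the modulus equals the instantaneous value E_inf + ΣE_i, and for long loading times it relaxes toward E_inf, which is exactly the rate- and duration-dependence the abstract contrasts with the purely elastic idealization.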
Procedia PDF Downloads 129
3339 Embedded System of Signal Processing on FPGA: Underwater Application Architecture
Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad
Abstract:
The purpose of this paper is to study the phenomenon of acoustic scattering using a new method. Signal processing (the fast Fourier transform (FFT), inverse fast Fourier transform (iFFT), and Bessel functions) is widely applied to obtain information with high accuracy, and is usually implemented on general-purpose processors. Our interest was focused on the use of FPGAs (field-programmable gate arrays) in order to minimize the computational complexity of the single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements, since general-purpose processors are not efficient for signal processing. We implemented the acoustic backscattered-signal processing model on the Altera DE-SoC board and compared it to the Odroid XU4. By comparison, the computing latencies of the Odroid XU4 and the FPGA are 60 seconds and 3 seconds, respectively. The detailed SoC FPGA-based system has shown that acoustic spectra are computed up to 20 times faster than in the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms is realized with an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged cells. We have thus achieved good experimental results in terms of real-time operation and energy efficiency.
Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing
Procedia PDF Downloads 75
3338 Hydrothermal Alteration Study of Tangkuban Perahu Craters and Its Implication for the Geothermal Conceptual Model
Authors: Afy Syahidan Achmad
Abstract:
Tangkuban Perahu is located in West Java, Indonesia. It is an active stratovolcano and still shows hydrothermal activity. The main purpose of this study is to find the correlation between subsurface structure and hydrothermal activity at the surface. Using a topographic map, SRTM images, and field observation, the geological conditions and alteration areas were mapped. Alteration samples were analysed through petrographic analysis and X-ray diffraction (XRD). Altered rock in the study area shows a white to yellowish-white colour and texture changes ranging from softening to hardening, caused by alteration by silica and sulphur. The alteration minerals observed in the petrographic and XRD analyses consist of cristobalite, anatase, alunite, and pyrite. This mineral assemblage indicates an advanced argillic alteration type, with a west-east orientation of the alteration area. The alteration area correlates with the occurrence of manifestations such as steam vents, solfataras, and warm-to-hot pools. Most manifestations occur in the main craters, such as Ratu Crater and Upas Crater, and in parasitic craters such as Domas Crater and Jarian Crater. These manifestations indicate subsurface permeability, which can be created through structural processes with the same orientation. For further study, geophysical methods such as magnetotellurics (MT) and resistivity surveys are required to find the permeability-zone pattern in the Tangkuban Perahu subsurface.
Keywords: alteration, advanced argillic, Tangkuban Perahu, XRD, cristobalite, anatase, alunite, pyrite
Procedia PDF Downloads 417
3337 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins
Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi
Abstract:
A numerical study of natural convection heat transfer in a water-filled cavity has been carried out in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source represents a computer CPU with dimensions of 37.5×37.5 mm mounted on a substrate. A cold plate used as the heat sink is installed on the opposite vertical wall of the enclosure. The air flow inside the computer case is created by an exhaust fan; a turbulent air flow is assumed and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied power ranges between 15 and 40 W. In order to determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins are investigated. The results illustrate that as the fin number increases, the maximum heat-source temperature decreases; however, when the fin number increases beyond a critical value, the temperature starts to increase because the fins are too closely spaced, which obstructs the water flow. The introduction of parallel plate fins reduces the maximum heat-source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68℃ at a heat input of 40 W, which is much lower than the recommended limit for computer chips of no more than 85℃, and hence the performance of the CPU is enhanced.
Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid
Procedia PDF Downloads 261
3336 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled
Authors: Rishabh Ambavanekar
Abstract:
Expressive aphasia (EA) is an oral disability, common among stroke victims, in which the Broca's area of the brain is damaged, interfering with verbal communication. EA currently has no dedicated technological solution: the only viable options today are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable, low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user's inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy above 90%, and the final solution test an average accuracy above 75%. These two successful tests demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device proved not only to enable users to communicate verbally but also has the potential to assist in accelerated recovery from the disorder.
Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis
Procedia PDF Downloads 118
3335 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad Daba, Jean-Pierre Dubois
Abstract:
Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable, which poses a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. Many analytical investigations of multiplicative noise invoke exponential or Gamma statistics; more recent advances by the author of this paper have utilized Poisson-modulated and weighted generalized Laguerre polynomials with controlling parameters under uncorrelated-noise assumptions. In this paper, we investigate the statistics of a multi-diversity, stochastically local-area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent specular Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process
Procedia PDF Downloads 446
3334 Triose Phosphate Utilisation at the (Sub)Foliar Scale Is Modulated by Whole-Plant Source-Sink Ratios and Nitrogen Budgets in Rice
Authors: Zhenxiang Zhou
Abstract:
The triose phosphate utilisation (TPU) limitation to leaf photosynthesis is a biochemical process concerning the sub-foliar carbon sink-source (im)balance, in which photorespiration-associated amino acid exports provide an additional outlet for carbon and increase the leaf photosynthetic rate. However, whether this process is regulated by whole-plant sink-source relations and nitrogen budgets remains unclear. We address this question through model analyses of gas-exchange data measured on leaves at three growth stages of rice plants grown at two nitrogen levels, where three means (leaf-colour modification, adaxial vs. abaxial measurements, and panicle pruning) were explored to alter source-sink ratios. Higher specific leaf nitrogen (SLN) resulted in higher rates of TPU and also led to the TPU limitation occurring at a lower intercellular CO2 concentration. Photorespiratory nitrogen assimilation was greater in higher-nitrogen leaves but became smaller in cases associated with yellower-leaf modification, abaxial measurement, or panicle pruning. The feedback inhibition of panicle pruning on rates of TPU was not always observed, because panicle pruning blocked nitrogen remobilisation from leaves to grains and the increased SLN masked the feedback inhibition. The (sub)foliar TPU limitation can thus be modulated by whole-plant source-sink ratios and nitrogen budgets during rice grain filling, suggesting a close link between sub-foliar and whole-plant sink limitations.
Keywords: triose phosphate utilization, sink limitation, panicle pruning, oryza sativa
Procedia PDF Downloads 89
3333 The Association between Masculinity and Anxiety in Canadian Men
Authors: Nikk Leavitt, Peter Kellett, Cheryl Currie, Richard Larouche
Abstract:
Background: Masculinity has been associated with poor mental-health outcomes in adult men and is colloquially referred to as toxic. Masculinity is traditionally measured using the Male Role Norms Inventory, which examines behaviors that may be common in men but that are themselves associated with poor mental health regardless of gender (e.g., aggressiveness). The purpose of this study was to examine whether masculinity is associated with generalized anxiety among men using this inventory vs. a man's personal definition of it. Method: An online survey collected data from 1,200 men aged 18-65 across Canada in July 2022. Masculinity was measured using: 1) the Male Role Norms Inventory Short Form, and 2) asking men to define for themselves what being masculine means and then to rate, on a scale of 1 to 10, the extent to which they perceived themselves as masculine based on that definition. Generalized anxiety disorder was measured using the GAD-7. Multiple linear regression was used to examine associations between each masculinity score and the anxiety score, adjusting for confounders. Results: The inventory-based masculinity score was positively associated with anxiety scores (β = 0.02, p < 0.01). The masculinity subscales most strongly correlated with higher anxiety were restrictive emotionality (β = 0.29, p < 0.01) and dominance (β = 0.30, p < 0.01). When traditional masculinity was replaced by a man's self-rated masculinity score in the model, the reverse association was found, with increasing masculinity resulting in a significantly lower anxiety score (β = -0.13, p = 0.04). Discussion: These findings highlight the need to revisit the ways masculinity is defined and operationalized in research, to better understand its impact on men's mental health.
The findings also highlight the importance of allowing participants to self-define gender-based constructs, given that these are fluid and socially constructed.
Keywords: masculinity, generalized anxiety disorder, race, intersectionality
Procedia PDF Downloads 69
3332 Classifier for Liver Ultrasound Images
Authors: Soumya Sajjan
Abstract:
Liver cancer is one of the most common cancers worldwide in men and women and is one of the few cancers still on the rise; liver disease is the fourth leading cause of death. According to new NHS (National Health Service) figures, deaths from liver disease have reached record levels, rising by 25% in less than a decade; heavy drinking, obesity, and hepatitis are believed to be behind the rise. In this study, we focus on the development of a diagnostic classifier for ultrasound liver lesions. Ultrasound (US) sonography is an easy-to-use and widely popular imaging modality because of its ability to visualize many human soft tissues and organs without any harmful effect. This paper provides an overview of the underlying concepts, along with algorithms for processing liver ultrasound images. Naturally, ultrasound liver lesion images contain considerable speckle noise, which makes developing a classifier for them a challenging task; we approach it with a fully automatic machine-learning system. First, we segment the liver image and calculate textural features from the co-occurrence matrix and the run-length method. For classification, a Support Vector Machine (SVM) is used, based on the risk bounds of statistical learning theory. The textural features from the different feature methods are given as input to the SVM individually, and performance analyses on the training and test datasets are carried out separately with the SVM model. Whenever an ultrasonic liver lesion image is given to the SVM classifier system, the features are calculated and the image is classified as a normal or diseased liver lesion. We hope the result will help physicians identify liver cancer non-invasively.
Keywords: segmentation, Support Vector Machine, ultrasound liver lesion, co-occurrence matrix
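A minimal sketch of the co-occurrence-matrix texture features that feed the SVM in this abstract (a single pixel offset and the Haralick contrast feature only; the study's full feature set, including run-length features, is larger):

```python
def cooccurrence_matrix(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix (GLCM) for one pixel offset
    (dx, dy). `img` is a 2D list of integer grey levels in [0, levels)."""
    glcm = [[0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[img[r][c]][img[r2][c2]] += 1
    return glcm

def contrast(glcm):
    """Haralick contrast: sum_ij (i - j)^2 * p(i, j), where p is the
    normalized co-occurrence frequency. High values indicate coarse,
    high-variation texture."""
    total = sum(map(sum, glcm))
    return sum((i - j) ** 2 * v / total
               for i, row in enumerate(glcm) for j, v in enumerate(row))
```

Scalar features like this contrast value, computed per segmented region, are the kind of inputs handed to the SVM for the normal-vs-diseased decision.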
Procedia PDF Downloads 408
3331 Kinetic Study on Extracting Lignin from Black Liquor Using Deep Eutectic Solvents
Authors: Fatemeh Saadat Ghareh Bagh, Srimanta Ray, Jerald Lalman
Abstract:
Lignin, the largest inventory of organic carbon with a high caloric energy value, is a major component of woody and non-woody biomass. In pulping mills, a large amount of the lignin is burned for energy; at the same time, the phenolic structure of lignin enables it to be converted to value-added compounds. This study focuses on extracting lignin from black liquor using deep eutectic solvents (DESs). Three choline chloride (ChCl) DESs, paired with lactic acid (LA) (1:11), oxalic acid dihydrate (OX) (1:4), and malic acid (MA) (1:3), were synthesized at 90°C and atmospheric pressure. The kinetics of lignin recovery from black liquor using the DESs was investigated at three moderate temperatures (338, 353, and 368 K) over time intervals from 30 to 210 min. The extracted lignin (acid-soluble lignin plus Klason lignin) was characterized by Fourier-transform infrared spectroscopy (FTIR), with the extracted lignin compared against a model Kraft lignin. The acid-soluble lignin (ASL) fraction was determined spectrophotometrically [TAPPI UM 250], and the Klason lignin was determined gravimetrically using TAPPI T 222 om-02. The lignin extraction reaction using DESs was modeled by first-order reaction kinetics, and the activation energy of the process was determined. Lignin recovery with the ChCl:LA DES was 79.7±2.1% at 368 K and a DES:BL ratio of 4:1 (v/v); the quantity of lignin extracted with the control solvent, [emim][OAc], was 77.5±2.2%. The activation energy measured for the LA-DES system was 22.7 kJ·mol⁻¹, while the activation energies for the OX-DES and MA-DES systems were 7.16 kJ·mol⁻¹ and 8.66 kJ·mol⁻¹, with total lignin recoveries of 75.4±0.9% and 62.4±1.4%, respectively.
Keywords: black liquor, deep eutectic solvents, kinetics, lignin
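The first-order kinetic model and the temperature-dependent activation energy described in this abstract can be sketched as follows; the rate constants in the example are invented, and only the formulas (first-order integrated rate law and the two-point Arrhenius estimate) mirror the type of analysis reported:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def first_order_k(c0, c_t, t_min):
    """First-order rate constant from the integrated rate law
    ln(c0 / c_t) = k * t, with t in minutes (k in min^-1)."""
    return math.log(c0 / c_t) / t_min

def activation_energy(k1, t1_k, k2, t2_k):
    """Two-point Arrhenius estimate from rate constants k1, k2 measured
    at absolute temperatures t1_k < t2_k:
        Ea = R * ln(k2 / k1) / (1/T1 - 1/T2), in J/mol."""
    return R * math.log(k2 / k1) / (1.0 / t1_k - 1.0 / t2_k)
```

With hypothetical rate constants of 0.01 min⁻¹ at 338 K and 0.02 min⁻¹ at 368 K, the estimate comes out near 24 kJ·mol⁻¹, the same order of magnitude as the 22.7 kJ·mol⁻¹ reported for the LA-DES system.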
Procedia PDF Downloads 144
3330 Prevalence of Near Visual Impairment and Associated Factors among School Teachers in Gondar City, North West Ethiopia, 2022
Authors: Bersufekad Wubie
Abstract:
Introduction: Near visual impairment is a presenting near visual acuity of the eye worse than N6 at a 40 cm distance. Teachers' regular duties, such as reading books, writing on the blackboard, and recognizing students' faces, require good near vision. If a teacher has near visual impairment, the work output is unsatisfactory. Objective: The study aimed to assess the prevalence of near vision impairment and its associated factors among school teachers in Gondar city, Northwest Ethiopia, August 2022. Methods: To select 567 teachers in Gondar city schools, an institution-based cross-sectional study design with a multistage sampling technique was used. The study was conducted in selected schools from May 1 to May 30, 2022. Trained data collectors used well-structured Amharic- and English-language questionnaires and ophthalmic instruments for the examination. The collected data were checked for completeness, entered into Epi Data version 4.6, and then exported to SPSS version 26 for further analysis. Binary and multivariable logistic regression models were fitted to identify factors associated with the outcome variable. Result: The prevalence of near visual impairment was 64.6%, with a confidence interval of 60.3%–68.4%. Near visual impairment was significantly associated with age >= 35 years (AOR: 4.90 at 95% CI: 3.15, 7.65), prolonged years of teaching experience (AOR: 3.29 at 95% CI: 1.70, 4.62), a history of ocular surgery (AOR: 1.96 at 95% CI: 1.10, 4.62), smoking (AOR: 2.21 at 95% CI: 1.22, 4.07), a history of ocular trauma (AOR: 1.80 at 95% CI: 1.11, 3.18), and uncorrected refractive error (AOR: 2.01 at 95% CI: 1.13, 4.03). Conclusion and recommendations: This study showed that the prevalence of near vision impairment among school teachers was high, and it is not a problem of the presbyopic age group alone; it also occurs at a young age. Teachers' ocular health should therefore be well accommodated in school eye health programs.
Keywords: Gondar, near visual impairment, school, teachers
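The headline prevalence and interval reported above can be checked with a standard Wald approximation. A short sketch — the case count of 366 out of 567 is inferred from the reported 64.6%, and the Wald formula may differ slightly from the interval method the authors actually used:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a Wald 95% confidence interval."""
    p = cases / n
    se = math.sqrt(p * (1.0 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(366, 567)
# Close to the reported 64.6% (60.3%-68.4%):
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```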
Procedia PDF Downloads 137
3329 Analysis of the Scattered Fields by Dielectric Sphere Inside Different Dielectric Mediums: The Case of the Source and Observation Point Is Reciprocal
Authors: Emi̇ne Avşar Aydin, Nezahat Günenç Tuncel, A. Hami̇t Serbest
Abstract:
The electromagnetic scattering from a canonical structure is an important issue in electromagnetic theory. In this study, the electromagnetic scattering from a dielectric sphere under oblique incidence is investigated. The incident field is considered to be an H-polarized plane wave. The scattered and transmitted field expressions are written with unknown coefficients, and the unknown coefficients are obtained by using the exact boundary conditions. Then, the sphere is considered as having a frequency-dependent dielectric permittivity, with the frequency dependence described by the Cole-Cole model. The far scattered field expressions are found for different incidence angles in the 1-8 GHz frequency range. The observation point is at an angular distance of pi from the incident wave. While the incident wave arrives at a certain angle, the observation point turns from 0 to 360 degrees. Accordingly, the scattered field amplitude is maximum at the location of the incident wave and minimum directly across from it. The scattered fields are also plotted versus frequency to show the frequency dependence explicitly. Graphics are shown for some incidence angles and compared with Harrington's solution. Thus, the results are obtained faster and more reliably with reciprocal rotation. It is expected that when there is another sphere with different properties inside the outer sphere, the presence and location of that sphere will be detected faster. In addition, this study may lead to biomedical applications in the future.
Keywords: scattering, dielectric sphere, oblique incidence, reciprocal rotation
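The frequency dependence assigned to the sphere's permittivity follows the Cole-Cole relation ε(ω) = ε∞ + Δε / (1 + (jωτ)^(1−α)). A minimal sketch of evaluating it over the study's band — the dispersion parameter values below are placeholders, not the ones used in the study:

```python
import math

def cole_cole(freq_hz, eps_inf, delta_eps, tau, alpha):
    """Cole-Cole model of frequency-dependent complex relative permittivity."""
    omega = 2.0 * math.pi * freq_hz
    return eps_inf + delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

# Placeholder dispersion parameters, swept over the study's 1-8 GHz range:
for f in (1e9, 4e9, 8e9):
    print(f, cole_cole(f, eps_inf=4.0, delta_eps=40.0, tau=8e-12, alpha=0.1))
```

When α = 0 the model reduces to the single-relaxation Debye form.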
Procedia PDF Downloads 297
3328 Web-Based Cognitive Writing Instruction (WeCWI): A Theoretical-and-Pedagogical e-Framework for Language Development
Authors: Boon Yih Mah
Abstract:
Web-based Cognitive Writing Instruction (WeCWI)'s contribution towards language development can be divided into linguistic and non-linguistic perspectives. In the linguistic perspective, WeCWI focuses on literacy and language discoveries, while cognitive and psychological discoveries are the hubs of the non-linguistic perspective. In the linguistic perspective, WeCWI draws attention to free reading and enterprises, which are supported by the language acquisition theories. Besides, the adoption of the process genre approach as a hybrid guided writing approach fosters literacy development. Literacy and language development are interconnected in the communication process; hence, WeCWI encourages meaningful discussion based on the interactionist theory that involves input, negotiation, output, and interactional feedback. Rooted in the e-learning interaction-based model, WeCWI promotes online discussion via synchronous and asynchronous communication, which allows interactions to happen among the learners, the instructor, and the digital content. In the non-linguistic perspective, WeCWI highlights the contribution of reading, discussion, and writing towards cognitive development. Based on the inquiry models, learners' critical thinking is fostered during the information exploration process through interaction and questioning. Lastly, to lower writing anxiety, WeCWI develops its instructional tool with supportive features to facilitate the writing process. To bring a positive user experience to the learner, WeCWI aims to create the instructional tool with different interface designs based on two different types of perceptual learning style.
Keywords: WeCWI, literacy discovery, language discovery, cognitive discovery, psychological discovery
Procedia PDF Downloads 560
3327 An Intergenerational Study of Iranian Migrant Families in Australia: Exploring Language, Identity, and Acculturation
Authors: Alireza Fard Kashani
Abstract:
This study reports on the experiences and attitudes of six Iranian migrant families, from the two groups of asylum seekers and skilled workers, with regard to their language, identity, and acculturation in Australia. The participants included first-generation parents and 1.5-generation adolescents who had lived in Australia for a minimum of three years. For this investigation, Mendoza's (1984, 2016) acculturation model, as well as poststructuralist views of identity, were employed. The semi-structured interview results highlighted that Iranian parents and adolescents face low degrees of intergenerational conflict in most domains of their acculturation. However, the structural and legal patterns in Australia have caused some internal conflicts for the parents, especially fathers (e.g., their power status within the family or their children's freedom). Furthermore, while most participants reported 'cultural eclecticism' as their preferred acculturation orientation, female participants seemed to be more eclectic than their male counterparts, who showed an inclination towards keeping more aspects of their home culture. This finding highlights a meaningful effort on the part of husbands: in order for their married lives to continue well in Australia, they need to reconsider the traditional male-dominated customs they used to have in Iran. As for identity, not only the parents but also the adolescents proudly identified themselves as Persians. In addition, with respect to linguistic behaviour, almost all adolescents showed enthusiasm for retaining the Persian language at home to be able to maintain contact with their relatives and friends in Iran and to enjoy the many other benefits the language may offer them in the future.
Keywords: acculturation, asylum seekers, identity, intergenerational conflicts, language, skilled workers, 1.5 generation
Procedia PDF Downloads 238
3326 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification
Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar
Abstract:
Recognition of emotional information is essential in any form of communication. The growth of HCI (Human-Computer Interaction) in recent times indicates the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. In this research work, textual data are used for emotion recognition. Text, being the least expressive among the multimodal resources, poses various challenges, such as capturing contextual information and the sequential nature of language construction. This work proposes a neural architecture to recognize eight emotions from textual data sources derived from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are anger, disgust, fear, guilt, joy, sadness, shame, and surprise. Textual data from multiple datasets, such as ISEAR, GoEmotions, and the Affect datasets, were used to create the emotions dataset. Overlapping or conflicting data samples were handled with careful preprocessing. Our results show a significant improvement with the proposed architecture, with as much as a 10-point improvement in recognizing some emotions.
Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings
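The one-vs-all decision at the output of such a network can be sketched independently of the BiLSTM/attention encoder. In this toy version the encoder is replaced by a fixed feature vector and the head weights are hypothetical; only the one-binary-head-per-emotion decision rule is illustrated, not the authors' model.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "guilt", "joy", "sadness", "shame", "surprise"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_vs_all_predict(features, weights, biases):
    """One independent binary head per emotion; the final label is the head
    with the highest probability (one-vs-all over the 8 classes)."""
    scores = sigmoid(features @ weights + biases)  # shape: (8,)
    return EMOTIONS[int(np.argmax(scores))], scores

# Hypothetical encoder output and head weights favouring "joy":
feats = np.ones(4)
W = np.zeros((4, len(EMOTIONS)))
W[:, EMOTIONS.index("joy")] = 1.0
label, probs = one_vs_all_predict(feats, W, np.zeros(len(EMOTIONS)))
print(label)  # joy
```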
Procedia PDF Downloads 173
3325 Deposition of Size Segregated Particulate Matter in Human Respiratory Tract and Their Health Effects in Glass City Residents
Authors: Kalpana Rajouriya, Ajay Taneja
Abstract:
Particulates are ubiquitous in the air environment and pose serious threats to human beings, such as lung cancer, COPD, and asthma. Particulates arise mainly from industrial effluent, vehicular emissions, and other anthropogenic activities. In the glass-industrial city of Firozabad, real-time monitoring of size-segregated particulate matter (PM) and black carbon was done with an Aerosol Black Carbon Detector (ABCD) and a GRIMM portable aerosol spectrometer at two different sites, one urban and one rural. The average mass concentrations of size-segregated PM during the study period (March & April 2022) were recorded as PM10 (223.73 µg/m³), PM5.0 (44.955 µg/m³), PM2.5 (59.275 µg/m³), PM1.0 (33.02 µg/m³), PM0.5 (2.05 µg/m³), and PM0.25 (2.99 µg/m³). The highest concentration of BC was found at the urban site due to emissions from diesel engines and wood burning, while NO2 was highest at the rural site. The average PM10 and PM2.5 concentrations exceeded the NAAQS and WHO guidelines by 6.08 and 2.73 times, respectively. Particulate matter deposition and health risk assessment were done with the MPPD and USEPA models to characterize particulate matter toxicity in industrial residents. The health risk assessment results showed that children are the most likely to be affected by exposure to PM10 and PM2.5 and may develop various non-carcinogenic and carcinogenic diseases. The deposition results inferred that the sensitive exposed population, especially 9-year-old children, shows high PM deposition and may be at risk of developing health-related problems from exposure to size-segregated PM. These results will be discussed during the presentation.
Keywords: particulate matter, black carbon, NO2, deposition of PM, health risk
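The USEPA-style inhalation screening step mentioned above reduces to an exposure-concentration and hazard-quotient calculation. A hedged sketch — the function names, exposure factors, and the reference value are illustrative assumptions, and MPPD itself is separate software for modeling deposition in the respiratory tract:

```python
def exposure_concentration(c_air, hours_per_day, days_per_year, years):
    """Time-weighted exposure concentration (same units as c_air):
    EC = (C × ET × EF × ED) / AT, with averaging time AT in hours."""
    at_hours = years * 365.0 * 24.0
    return (c_air * hours_per_day * days_per_year * years) / at_hours

def hazard_quotient(ec, reference_concentration):
    """HQ > 1 flags potential non-carcinogenic risk."""
    return ec / reference_concentration

# Continuous exposure of a child to the measured PM2.5 level (59.275 µg/m³);
# the 5 µg/m³ reference concentration is a placeholder, not a study value:
ec = exposure_concentration(59.275, 24.0, 365.0, 9.0)
print(ec, hazard_quotient(ec, 5.0))
```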
Procedia PDF Downloads 64
3324 Optimization of Reliability Test Plans: Increase Wafer Fabrication Equipments Uptime
Authors: Swajeeth Panchangam, Arun Rajendran, Swarnim Gupta, Ahmed Zeouita
Abstract:
Semiconductor processing chambers tend to operate under controlled but aggressive conditions (chemistry, plasma, high temperature, etc.). Owing to this, the design of this equipment requires developing robust and reliable hardware and software. Any equipment downtime due to reliability issues can have cost implications both for customers, in terms of tool downtime (reduced throughput), and for equipment manufacturers, in terms of high warranty costs and a customer trust deficit. A thorough reliability assessment of critical parts and a plan for preventive maintenance/replacement schedules need to be completed before tool shipment. This helps to save significant warranty costs and tool downtime in the field. However, designing a proper reliability test plan that accurately demonstrates reliability targets with the proper sample size and test duration is quite challenging. This is mainly because components can fail in different failure modes that follow distributions with different Weibull beta values. Without an a priori Weibull beta for the failure mode under consideration, the test always leads to over- or under-utilization of resources, which eventually ends in false-positive or false-negative estimates. This paper proposes a methodology to design a reliability test plan with an optimal sample size, test duration, or both (independent of the a priori Weibull beta). This methodology can be used in demonstration tests and can be extended to accelerated life tests to further decrease the sample size or test duration.
Keywords: reliability, stochastics, preventive maintenance
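The sensitivity to the Weibull beta can be seen in the standard zero-failure (success-run) demonstration formula n = ln(1 − CL) / [(t/t_d)^β · ln R], where testing each unit longer than the demonstration life (t/t_d > 1) trades test duration for sample size. This is the classical formula for context, not the paper's proposed methodology:

```python
import math

def zero_failure_sample_size(reliability, confidence, beta, test_to_demo_ratio=1.0):
    """Units needed, with zero allowed failures, to demonstrate `reliability`
    at `confidence` when each unit is tested for `test_to_demo_ratio` times
    the demonstration life and the failure mode has Weibull shape `beta`."""
    life_factor = test_to_demo_ratio ** beta
    n = math.log(1.0 - confidence) / (life_factor * math.log(reliability))
    return math.ceil(n)

print(zero_failure_sample_size(0.90, 0.90, beta=1.0))  # 22 units at demo life
# Wear-out mode (beta = 2) tested to twice the demo life needs far fewer units:
print(zero_failure_sample_size(0.90, 0.90, beta=2.0, test_to_demo_ratio=2.0))  # 6
```

Guessing beta wrong in either direction is exactly what drives the over/under-utilization the abstract describes.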
Procedia PDF Downloads 12
3323 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment
Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati
Abstract:
This paper presents the development of an event-based Discrete Event Simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the Backward Recovery (BR) mechanism as a fault recovery solution under the existing Time/Utility Function/Utility Accrual (TUF/UA) scheduling domain for a multiprocessor environment. The BR mechanism attempts to take the faulty tasks back to their initial safe state and then proceeds to re-execute the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance system that can nullify the erroneous effect is necessary. Current TUF/UA scheduling algorithms use the abortion recovery mechanism and simply abort the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA scheduling domain for multiprocessor environments has considered transient faults and implemented the BR mechanism as a fault recovery mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives its set of parameters, events, and performance metrics from a detailed analysis of the base model. Simulation results revealed that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utility, making it reliable and efficient for real-time applications in the multiprocessor scheduling environment.
Keywords: real-time system (RTS), time utility function/utility accrual (TUF/UA) scheduling, backward recovery mechanism, multiprocessor, discrete event simulation (DES)
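The "accrued utility" metric the simulator reports can be illustrated with a step time/utility function: a task that completes by its termination time accrues its full utility, and nothing afterwards. A toy comparison of abortion recovery (faulty tasks discarded) versus backward recovery (faulty tasks re-executed); the task numbers are hypothetical, not the simulator's workload:

```python
def accrued_utility(completion_time, termination_time, utility):
    """Step TUF: full utility on or before the termination time, zero after."""
    return utility if completion_time <= termination_time else 0.0

# (completion time under backward recovery, termination time, utility)
# for three hypothetical faulty tasks; under abortion recovery these
# tasks are simply discarded and accrue nothing.
tasks = [(8.0, 10.0, 5.0), (12.0, 10.0, 4.0), (6.0, 9.0, 3.0)]
abort_total = 0.0
br_total = sum(accrued_utility(c, term, u) for c, term, u in tasks)
print(abort_total, br_total)  # backward recovery salvages utility from tasks 1 and 3
```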
Procedia PDF Downloads 304
3322 Sustainable Design for Building Envelope in Hot Climates: A Case Study for the Role of the Dome as a Component of an Envelope in Heat Exchange
Authors: Akeel Noori Almulla Hwaish
Abstract:
Architectural design is influenced by the actual thermal behaviour of building components, and this in turn depends not only on their steady and periodic thermal characteristics, but also on exposure effects, orientation, surface colour, and climatic fluctuations at the given location. Design data and environmental parameters should be produced accurately for specified locations, so that architects and engineers can confidently apply them in design calculations that enable precise evaluation of the influence of the various parameters relating to each component of the envelope, which indicates the overall thermal performance of the building. The present paper assesses the thermal behaviour and characteristics of the opaque and transparent parts of one of the most unique components used as a symbolic, distinguished element of the building envelope: the dome. It examines the dome's thermal behaviour under the impact of solar temperatures and its role in heat exchange for specific U-values of alternative construction materials. The research method considers the specified hot-dry weather and a new mosque in Baghdad, Iraq as a case study. Data will be presented in light of the criteria of indoor thermal comfort in terms of design parameters and thermal assessment for a "model dome". Design alternatives and considerations of energy conservation will be discussed as well, using comparative computer simulations. Findings will be incorporated to outline conclusions clarifying the important role of the dome in the heat exchange of the whole building envelope in approaching an indoor thermal comfort level, and to guide further research in the future.
Keywords: building envelope, sustainable design, dome impact, hot climates, heat exchange
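At its simplest, the dome's role in the envelope's heat exchange comes down to steady-state conduction, Q = U·A·ΔT, where the hemispherical surface area (A = 2πr², twice that of the flat circular opening it covers) scales the gain for a given U-value. A minimal sketch with placeholder values, ignoring the periodic and radiative effects the paper actually analyses:

```python
import math

def dome_heat_gain(u_value, radius, t_out, t_in):
    """Steady-state conductive heat gain (W) through a hemispherical dome:
    Q = U · A · ΔT with A = 2πr²."""
    area = 2.0 * math.pi * radius ** 2
    return u_value * area * (t_out - t_in)

def flat_roof_heat_gain(u_value, radius, t_out, t_in):
    """Same opening covered by a flat circular roof: A = πr²."""
    return u_value * math.pi * radius ** 2 * (t_out - t_in)

# Placeholder U-value (W·m⁻²·K⁻¹), 4 m dome radius, 45 °C outside / 25 °C inside:
print(dome_heat_gain(2.0, 4.0, 45.0, 25.0), flat_roof_heat_gain(2.0, 4.0, 45.0, 25.0))
```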
Procedia PDF Downloads 473
3321 Seroepidemiology of Q Fever among Companion Dogs in Fars Province, South of Iran
Authors: Atefeh Esmailnejad, Mohammad Abbaszadeh Hasiri
Abstract:
Coxiella burnetii is a gram-negative, obligate intracellular bacterium that causes Q fever, a significant zoonotic disease. Sheep, cattle, and goats are the most commonly reported reservoirs of the bacterium, but infected cats and dogs have also been implicated in the transmission of the disease to humans. The aim of the present study was to investigate the presence of antibodies against Coxiella burnetii among companion dogs in Fars province, south of Iran. A total of 181 blood samples were collected from asymptomatic dogs, mostly referred to the Veterinary Hospital of Shiraz University for regular vaccination. IgG antibodies against Coxiella burnetii were detected by an indirect enzyme-linked immunosorbent assay (ELISA) employing phase I and phase II Coxiella burnetii antigens. A logistic regression model was developed to analyze multiple risk factors associated with seropositivity. An overall seropositivity of 7.7% (n=14) was observed. Prevalence was significantly higher in adult dogs above five years (18.18%) compared with dogs between one and five years (7.86%) and less than one year (6.17%) (P=0.043). Prevalence was also higher in male dogs (11.21%) than in females (2.7%) (P=0.035). There were no significant differences in the prevalence of positive cases by breed, type of housing, type of food, or exposure to other farm animals (P>0.05). The results of this study showed the presence of Coxiella burnetii infection among the companion dog population in Fars province. To our knowledge, this is the first study regarding Q fever in dogs carried out in Iran. In areas like Iran, where human cases of Q fever are uncommon or remain unreported, the public health implications of Q fever seroprevalence in dogs are quite significant.
Keywords: Coxiella burnetii, dog, Iran, Q fever
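The male/female seropositivity contrast above is the kind of comparison a two-proportion z-test (or the logistic model the authors fitted) evaluates. A sketch with a hypothetical sex split of the 181 dogs that is consistent with the reported 11.21% and 2.7% but is not stated in the abstract:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for comparing seropositivity rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# Hypothetical split: 12/107 seropositive males vs 2/74 seropositive females.
print(two_proportion_z(12, 107, 2, 74))
```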
Procedia PDF Downloads 310
3320 Calculation of Solar Ultraviolet Irradiant Exposure of the Cornea through Sunglasses
Authors: Mauro Masili, Fernanda O. Duarte, Liliane Ventura
Abstract:
Ultraviolet (UV) radiation comprises electromagnetic waves with wavelengths from 100 to 400 nm. The World Health Organization and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) recommend guidelines on the exposure of the eyes to UV radiation because it is correlated with ophthalmic diseases. The exposure limits for an 8-h period are: 1) UV radiant exposure should not exceed 30 J/m² when irradiance is spectrally weighted using an actinic action spectrum; 2) unweighted radiant exposure in the UV-A spectral region (315-400 nm) should not exceed 10 kJ/m². Sunglasses play an important role in preventing eye injuries related to sun exposure. We have calculated the direct and diffuse solar UV irradiance in a geometry representing an individual wearing sunglasses, in which the solar rays strike a vertical surface. The diffuse rays are those scattered from the atmosphere and from the local environment. The calculations used the open-source SMARTS2 spectral model, in which we assumed a clear-sky condition, along with information about site location, date, time, ozone column, aerosols, and turbidity. In addition, we measured the spectral transmittance of a typical sunglasses lens, and the global solar irradiance was weighted with the spectral transmittance profile of the lens. The radiant exposure incident on the eye's surface was calculated in the UV and UV-A ranges, following the ICNIRP recommendations, for each day of the year. The tested lens failed the UV-A safety limit, and it also failed the UV limit after the aging process. Hence, the ICNIRP safety limits should be considered in the standards to increase the protection of the eye against UV radiation.
Keywords: ICNIRP safe limits, ISO-12312-1, sunglasses, ultraviolet radiation
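The weighted radiant exposure behind the lens is the integral of the solar spectral irradiance times the lens transmittance (and, for the actinic limit, the action spectrum), multiplied by exposure time: H = t · ∫ E(λ)·T(λ)·S(λ) dλ. A sketch with a flat placeholder spectrum; real inputs would come from SMARTS2 and the measured lens transmittance:

```python
import numpy as np

def radiant_exposure(wavelengths_nm, irradiance, transmittance, weighting, hours):
    """Radiant exposure (J/m²) reaching the eye behind a lens: trapezoidal
    integral of E(λ)·T(λ)·S(λ) over wavelength, times exposure in seconds."""
    y = irradiance * transmittance * weighting
    integral = float(np.sum((y[1:] + y[:-1]) * np.diff(wavelengths_nm)) / 2.0)
    return integral * hours * 3600.0

# Flat placeholder UV-A spectrum (W·m⁻²·nm⁻¹), unweighted (S = 1), no lens:
wl = np.arange(315.0, 401.0)                  # 315-400 nm in 1 nm steps
E = np.full(wl.shape, 1e-3)
H = radiant_exposure(wl, E, np.ones_like(wl), np.ones_like(wl), hours=8.0)
print(H, H <= 10_000.0)  # compare against the 10 kJ/m² 8-h UV-A limit
```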
Procedia PDF Downloads 88
3319 Preparation of Magnetothermally Responsive Polymer Multilayer Films for Controlled Release Applications from Surfaces
Authors: Eda Cagli, Irem Erel Goktepe
Abstract:
Externally triggered and effective release of therapeutics from polymer nanoplatforms is one of the key issues in cancer treatment. In this study, we aim to prepare polymer multilayer films which are stable at physiological conditions (little or no drug release) but release drug molecules at acidic pH and upon application of an AC magnetic field. First, novel stimuli-responsive diblock copolymers composed of pH- and temperature-responsive blocks were synthesized. Then, block copolymer micelles with pH-responsive cores and temperature-responsive coronae will be obtained via pH-induced self-assembly of these block copolymers in an aqueous environment. A model anticancer drug, e.g., Doxorubicin, will be loaded into the micellar cores. Second, superparamagnetic nanoparticles will be synthesized. The magnetic nanoparticles and drug-loaded block copolymer micelles will be used as building blocks to construct the multilayers. To mimic the acidic nature of tumor tissues, Doxorubicin release from the micellar cores will be induced at acidic conditions. Moreover, Doxorubicin release from the multilayers will be facilitated via a magnetothermal trigger: application of an AC magnetic field will induce heating of the magnetic nanoparticles, resulting in an increase in the temperature of the polymer platform. This increase in temperature is expected to trigger conformational changes in the temperature-responsive micelle coronae and facilitate the release of Doxorubicin from the surface. Such a polymer platform may find use in biomedical applications.
Keywords: layer-by-layer films, magnetothermal trigger, smart polymers, stimuli responsive
Procedia PDF Downloads 362
3318 Observation of the Orthodontic Tooth's Long-Term Movement Using Stereovision System
Authors: Hao-Yuan Tseng, Chuan-Yang Chang, Ying-Hui Chen, Sheng-Che Chen, Chih-Han Chang
Abstract:
Orthodontic tooth treatment has demonstrated a high success rate in clinical studies. It is agreed that orthodontic tooth movement is based on the ability of the surrounding bone and periodontal ligament (PDL) to react to a mechanical stimulus with remodeling processes. However, the mechanism of tooth movement is still unclear. Recent studies focus on the simple compression-tension theory, while few studies directly measure tooth movement. Therefore, tracking tooth movement information during orthodontic treatment is very important in clinical practice. The aim of this study is to investigate the mechanical responses of tooth movement during orthodontic treatment. A stereovision system was applied to track the tooth movement of a patient with stamp brackets. The system was established with two cameras whose relative positions were calibrated, and the orthodontic force was measured on a 3D-printed model with a six-axis load cell to determine the initial force application. The results show that the stereovision system presents a maximum measurement error of less than 2%. In the patient tracking study, the incisor moved about 0.9 mm during 60 days of tracking, and half of the movement occurred in the first few hours. After removing the orthodontic force for 100 hours, the incisor moved back 0.5 mm from its final position, consistent with a relapse phenomenon. The stereovision system can accurately locate the three-dimensional position of the teeth and superimpose all data in a common 3D coordinate system to integrate the complex tooth movement.
Keywords: orthodontic treatment, tooth movement, stereovision system, long-term tracking
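The core of a two-camera stereovision measurement is triangulation: with both cameras' projection matrices known from calibration, each matched image pair yields a 3D point by linear least squares (DLT). A minimal sketch with idealized normalized cameras, not the clinical system's actual calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two 3x4 projection
    matrices and matched image coordinates x = (u, v) in each view."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Two idealized cameras: identity intrinsics, 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0])
x1 = X_true[:2] / X_true[2]                                  # view in camera 1
x2 = (X_true + np.array([-1.0, 0.0, 0.0]))[:2] / X_true[2]   # view in camera 2
print(triangulate(P1, P2, x1, x2))
```

Tracking a bracket marker over weeks then reduces to triangulating it in each session and expressing all points in one common coordinate frame.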
Procedia PDF Downloads 420