Search results for: modified simplex algorithm
615 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas exposed to environmental hazards such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing only partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are emerging in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be embedded within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a distinctive sensitivity to mechanical changes. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion or to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mix, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring
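The self-sensing principle described above, correlating strain with fractional resistance change, reduces to estimating a gauge factor. A minimal sketch (all names and values are illustrative, not the authors' data):

```python
import numpy as np

def gauge_factor(resistance: np.ndarray, strain: np.ndarray) -> float:
    """Estimate strain sensitivity of a piezoresistive sensor as the slope
    of fractional resistance change vs. strain: GF = (dR/R0) / epsilon."""
    r0 = resistance[0]                      # unloaded baseline resistance
    frac_dr = (resistance - r0) / r0        # fractional resistance change
    gf, _ = np.polyfit(strain, frac_dr, 1)  # linear fit; the slope is GF
    return gf

# Illustrative data: resistance (ohm) recorded while ramping strain
strain = np.linspace(0.0, 500e-6, 20)            # 0 to 500 microstrain
resistance = 1.0e4 * (1.0 + 160.0 * strain)      # synthetic response, GF ~ 160
print(f"estimated gauge factor: {gauge_factor(resistance, strain):.1f}")
```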
Procedia PDF Downloads 229
614 Action Potential of Lateral Geniculate Neurons at Low Threshold Currents: Simulation Study
Authors: Faris Tarlochan, Siva Mahesh Tangutooru
Abstract:
The Lateral Geniculate Nucleus (LGN) is the relay center in the visual pathway, as it receives most of its input information from the retinal ganglion cells (RGC) and sends it on to the visual cortex. Low-threshold calcium currents (IT) at the membrane are a unique indicator for characterizing the firing behavior of LGN neurons driven by RGC input. The morphologies of the LGN neurons were developed according to LGN functional requirements such as the functional mapping of RGC to LGN. In neurological disorders like glaucoma, the mapping between RGC and LGN is disconnected; hence, stimulating the LGN electrically using deep-brain electrodes can restore its functionality. A computational model was developed for simulating LGN neurons with three predominant morphologies, each representing a different functional mapping of RGC to LGN. The firing of action potentials in the LGN neuron due to IT was characterized by varying the stimulation parameters, the morphological parameters and the orientation. A wide range of stimulation parameters (stimulus amplitude, duration and frequency) represents the various strengths of the electrical stimulation, combined with different morphological parameters (soma size, dendrite size and structure). The orientation (0-180°) of the LGN neuron with respect to the stimulating electrode represents the angle at which the extracellular deep-brain stimulation of the LGN neuron is performed. A reduced dendrite structure, obtained with the Bush-Sejnowski algorithm, was used in the model to decrease the computational time while conserving the input resistance and total surface area. The major finding is that an input potential of 0.4 V is required to produce an action potential in an LGN neuron placed at a distance of 100 µm from the electrode. From this study, it can be concluded that neuroprostheses under design would need to be capable of inducing at least 0.4 V to produce action potentials in the LGN.
Keywords: Lateral Geniculate Nucleus, visual cortex, finite element, glaucoma, neuroprostheses
Procedia PDF Downloads 279
613 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation
Authors: Othman Maklouf, Abdunnaser Tresh
Abstract:
Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the use of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors is now making it feasible to develop an INS using an inertial measurement unit (IMU). An INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated with a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demands for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, i.e., to use a partial IMU (ParIMU) configuration. For land vehicle use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining its velocity. This paper presents a field experiment for a low-cost strapdown ParIMU/GPS combination, with post-processing of the data for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach, we have neglected earth rotation and gravity variations because of the poor gyroscope sensitivities of our low-cost IMU and because of the relatively small area of the trajectory.
Keywords: GPS, IMU, Kalman filter, materials engineering
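The loosely coupled scheme mentioned above can be illustrated with a minimal 2-D Kalman filter in which INS dead reckoning propagates the state and GPS fixes correct it. This is a generic sketch; the state layout, noise values and data are assumptions, not the authors' implementation:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition: position integrates velocity
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # GPS observes position only (loose coupling)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.05           # process noise: IMU drift (assumed)
R = np.eye(2) * 4.0            # measurement noise: GPS accuracy (assumed)

x = np.zeros(4)                # state: [x, y, vx, vy]
P = np.eye(4)

def kf_step(x, P, accel_xy, gps_xy):
    # Predict: propagate with INS-derived acceleration (dead reckoning)
    x = F @ x
    x[2:] += np.asarray(accel_xy) * dt
    P = F @ P @ F.T + Q
    # Update: correct with the GPS position fix
    y = np.asarray(gps_xy) - H @ x            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

x, P = kf_step(x, P, accel_xy=(0.2, 0.0), gps_xy=(0.3, 0.1))
print("fused position estimate:", x[:2])
```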
Procedia PDF Downloads 422
612 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores
Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan
Abstract:
Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of specific dimensions from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, an algorithm generally reduces the image details by pooling. This operation can overlook the details that forensic experts are most concerned with. In our experiment, we adopted a variety of face recognition algorithms based on deep learning and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the testing images. The results supported that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. The experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, a fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of the objective method of facial comparison and provides a novel method for human-machine collaboration in this field.
Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics
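Score-level fusion of the kind described, combining an algorithm's score with an examiner's likelihood score and then reading off the false reject rate at a fixed false accept rate, can be sketched as follows (the weights, score scales and data are illustrative assumptions, not the paper's scheme):

```python
import numpy as np

def fuse(machine, human, w=0.6):
    """Min-max normalize each score source, then take a weighted sum."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min())
    return w * norm(machine) + (1 - w) * norm(human)

def frr_at_far(genuine, impostor, target_far=0.001):
    """Set the threshold at the target false accept rate; report the FRR."""
    thr = np.quantile(impostor, 1.0 - target_far)  # impostors above thr are accepted
    return float(np.mean(genuine < thr))           # genuines below thr are rejected

rng = np.random.default_rng(0)
gen = fuse(rng.normal(2.0, 1.0, 5000), rng.normal(1.5, 1.0, 5000))  # genuine pairs
imp = fuse(rng.normal(0.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000))  # impostor pairs
print(f"FRR at FAR=0.1%: {frr_at_far(gen, imp):.3f}")
```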
Procedia PDF Downloads 130
611 A Statistical Approach to Predict and Classify the Commercial Hatchability of Chickens Using Extrinsic Parameters of Breeders and Eggs
Authors: M. S. Wickramarachchi, L. S. Nawarathna, C. M. B. Dematawewa
Abstract:
Hatchery performance is critical for the profitability of poultry breeder operations. Some extrinsic parameters of eggs and breeders increase or decrease hatchability. This study aims to identify the extrinsic parameters affecting the commercial hatchability of local chickens' eggs and to determine the most efficient classification model with a hatchability rate greater than 90%. Seven extrinsic parameters were considered: egg weight, moisture loss, breeder age, number of fertilised eggs, shell width, shell length, and shell thickness. Multiple linear regression was performed to determine the most influential variables on hatchability. First, the correlation between each parameter and hatchability was checked. Then a multiple regression model was developed, and the accuracy of the fitted model was evaluated. Linear Discriminant Analysis (LDA), Classification and Regression Trees (CART), k-Nearest Neighbors (kNN), Support Vector Machines (SVM) with a linear kernel, and Random Forest (RF) algorithms were applied to classify hatchability. This grouping was conducted using binary classification techniques. Hatchability was negatively correlated with egg weight, breeder age, shell width and shell length, while positive correlations were identified with moisture loss, number of fertilised eggs, and shell thickness. The multiple linear regression model was more accurate than single-variable linear models, with the highest coefficient of determination (R² = 94%) and the minimum AIC and BIC values. According to the classification results, RF, CART, and kNN achieved the highest accuracy values, 0.99, 0.975, and 0.972, respectively, for the commercial hatchery process. Therefore, RF is the most appropriate machine learning algorithm for classifying whether breeder outcomes are economically profitable in a commercial hatchery.
Keywords: classification models, egg weight, fertilised eggs, multiple linear regression
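The two-step analysis described, a multiple linear regression on the seven extrinsic parameters followed by a binary classification of hatchability, maps directly onto scikit-learn. A sketch with synthetic data (the coefficients and the 90% threshold interpretation are assumptions for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Illustrative frame: the seven extrinsic parameters plus measured hatchability (%)
cols = ["egg_weight", "moisture_loss", "breeder_age", "n_fertilised",
        "shell_width", "shell_length", "shell_thickness"]
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(200, 7)), columns=cols)
df["hatchability"] = (90 + 2 * df["moisture_loss"] - 1.5 * df["egg_weight"]
                      + rng.normal(0, 1, 200))

X, y = df[cols], df["hatchability"]
reg = LinearRegression().fit(X, y)        # multiple linear regression
print("R^2:", reg.score(X, y))

# Binary classification: hatchability above/below the 90% economic threshold
label = (y > 90).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, label, random_state=1)
rf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("RF accuracy:", rf.score(X_te, y_te))
```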
Procedia PDF Downloads 88
610 A Machine Learning Approach for Detecting and Locating Hardware Trojans
Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He
Abstract:
The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware Trojan detection method for large-scale circuits. As HTs are implanted as additional redundant circuitry, they introduce changes in physical characteristics such as structure, area, and power consumption; we therefore propose a machine-learning-based hardware Trojan detection method built on the physical characteristics of gate-level netlists. This method transforms hardware Trojan detection into a machine-learning binary classification problem over physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of pure circuit samples is far less than the number of HT circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate on benchmark circuits from Trust-Hub, and all achieved good results. In our case studies based on the AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness for detecting variant HTs, we designed variant HTs from open-source HTs. The proposed method guarantees robust detection accuracy with millisecond-level detection times for IC and FPGA design flows, and has good detection performance for library-variant HTs.
Keywords: hardware trojans, physical properties, machine learning, hardware security
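The resampling step the authors describe, balancing the classes with SMOTETomek before training the classifiers, looks roughly like this with the imbalanced-learn library (the feature columns and data are placeholders, not the paper's netlist features):

```python
import numpy as np
from imblearn.combine import SMOTETomek
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder features per netlist cell, e.g. [area, power, fan_in, fan_out, ...]
rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 6))
y = (rng.random(1000) < 0.05).astype(int)     # ~5% minority class: imbalanced

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)
X_res, y_res = SMOTETomek(random_state=7).fit_resample(X_tr, y_tr)  # rebalance
print("before:", np.bincount(y_tr), "after:", np.bincount(y_res))

clf = RandomForestClassifier(n_estimators=300, random_state=7).fit(X_res, y_res)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```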
Procedia PDF Downloads 148
609 Compression-Extrusion Test to Assess Texture of Thickened Liquids for Dysphagia
Authors: Jesus Salmeron, Carmen De Vega, Maria Soledad Vicente, Mireia Olabarria, Olaia Martinez
Abstract:
Dysphagia, or difficulty in swallowing, mostly affects elderly people: 56-78% of the institutionalized and 44% of the hospitalized. Thickening of liquid food is a necessary measure in this situation because it reduces the risk of penetration-aspiration. Until now, and as proposed by the American Dietetic Association in 2002, possible consistencies have been categorized into three groups according to their viscosity: nectar (50-350 mPa·s), honey (350-1750 mPa·s) and pudding (>1750 mPa·s). The adequate viscosity level should be identified for every patient according to her/his impairment. Nevertheless, a systematic review on dysphagia diets performed recently indicated that there is no evidence to suggest any transition of clinical relevance between the three levels proposed. It was also stated that other physical properties of the bolus (slipperiness, density or cohesiveness, among others) could influence swallowing in affected patients and could contribute to the amount of remaining residue. Texture parameters need to be evaluated as a possible alternative to viscosity. The aim of this study was to evaluate the instrumental extrusion-compression test as a possible tool to characterize changes over time in water thickened with various products at the three theoretical consistencies. Six commercial thickeners were used: NM® (NM), Multi-thick® (M), Nutilis Powder® (Nut), Resource® (R), Thick&Easy® (TE) and Vegenat® (V), all with a modified starch base. Only one of them, Nut, also contained 6.4% gum (guar, tara and xanthan). They were prepared as indicated in the instructions of each product, dispensing the corresponding amount for nectar, honey and pudding consistencies in 300 mL of tap water at 18-20°C. The mixture was stirred for about 30 s. Once it was homogeneously spread, it was dispensed into 30 mL plastic glasses, always to the same height. Each of these glasses was used as a measuring point. Viscosity was measured using a rotational viscometer (ST-2001, Selecta, Barcelona). The extrusion-compression test was performed using a TA.XT2i texture analyzer (Stable Micro Systems, UK) with a 25 mm diameter cylindrical probe (SMSP/25). Penetration distance was set at 10 mm and speed at 3 mm/s. Measurements were made at 1, 5, 10, 20, 30, 40, 50 and 60 minutes from the moment the samples were mixed. From the force (g)-time (s) curves obtained in the instrumental assays, the maximum force peak (F) was chosen as the reference parameter. Viscosity (mPa·s) and F (g) were highly correlated and evolved similarly over time, following time-dependent quadratic models. It was possible to predict viscosity using F as an independent variable, as they were linearly correlated. In conclusion, the compression-extrusion test could be an alternative and useful tool to assess the physical characteristics of thickened liquids.
Keywords: compression-extrusion test, dysphagia, texture analyzer, thickener
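The two statistical relationships reported, a quadratic dependence of peak force F on time and a linear F-viscosity correlation, can be reproduced with ordinary least squares; the numbers below are synthetic stand-ins, since the paper's raw data are not shown:

```python
import numpy as np
from scipy.stats import linregress

t = np.array([1, 5, 10, 20, 30, 40, 50, 60], dtype=float)  # minutes after mixing
F = np.array([35, 48, 60, 78, 90, 98, 103, 105], dtype=float)  # peak force (g)

a, b, c = np.polyfit(t, F, 2)                   # time-dependent quadratic model
print(f"F(t) = {a:.4f} t^2 + {b:.3f} t + {c:.2f}")

# Linear correlation between peak force and rotational-viscometer readings
visc = 18.0 * F + 120.0 + np.random.default_rng(3).normal(0, 40, F.size)  # mPa·s
fit = linregress(F, visc)
print(f"viscosity = {fit.slope:.1f} * F + {fit.intercept:.0f},  r = {fit.rvalue:.3f}")
```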
Procedia PDF Downloads 369
608 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance
Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem
Abstract:
Various external factors have continual effects on textile materials during wear, use and laundering in everyday life. In accordance with their frequency of use, textile materials must be laundered at certain intervals. The medium in which the laundering process takes place has inevitable detrimental physical and chemical effects on textile materials, caused by the parameters inherent in the process. The distinctive structures of various textile materials result in many different physical, chemical and mechanical characteristics. Because of their specific structures, these materials behave differently toward several external factors. By modeling the behavior of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, which leads to a better understanding of the similarities and differences in their behavior with respect to the washing parameters of laundering. Thus, the goal of the current research is to examine the behavior of two groups of textile materials, commercial textiles and test textiles, toward the main washing machine parameters during the laundering process, such as temperature, load quantity, mechanical action and water level, concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing defects, whitening and the overall properties of the textiles. In this study, cotton fabrics were preferred as commercial textiles, since garments made of cotton are the products most demanded by textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, always including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters. For the laundering process, a modified version of the IEC 60456 test method was utilized. The amount of detergent was altered at 0.5 g per liter depending on the load quantity level. Datacolor 650®, the EMPA Photographic Standards for the Pilling Test and visual examination were utilized to test and characterize the textiles. Furthermore, in the current study the relation between commercial and test textiles in terms of their performance was investigated in depth with the help of statistical analysis performed with the MINITAB® package program, modeling their behavior against the parameters of the laundering process. In the experimental work, the behavior of both groups of textiles toward the washing machine parameters was visually and quantitatively assessed in the dry state.
Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles
Procedia PDF Downloads 362
607 Cognitive Performance and Everyday Functionality in Healthy Greek Seniors
Authors: George Pavlidis, Ana Vivas
Abstract:
The demographic shift toward an aging population has stimulated the examination of seniors' mental health and ability to live independently. The corresponding literature depicts the relation between cognitive decline and everyday functionality with aging, focusing largely on individuals who are reaching or have crossed the threshold of various forms of neuropathology and disability. In this context, a recent meta-analysis depicts a moderate relation between cognitive performance and everyday functionality in AD sufferers. However, there has not been an analogous effort to examine this relation in the healthy spectrum of aging (i.e., in samples that are not challenged by a neurodegenerative disease). There is a consensus that the assessment tools designed to detect neuropathology are distinct from those that assess cognitive performance in healthy adults, so their universal use in both cognitively challenged and healthy adults is not always valid. The same holds for the assessment of everyday functionality. In addition, it is argued that everyday functionality should be examined with culturally adjusted assessment tools, since many vital everyday tasks are heterotypical among distinct cultures. Therefore, this study set out to examine the relation between cognitive performance and everyday functionality a) in the healthy spectrum of aging and b) by adjusting the everyday functionality tools EPT and OTDL-R to the Greek cultural context. In Greece, 107 cognitively healthy seniors (Mage = 62.24) completed a battery of neuropsychological tests and everyday functionality tests. Both were carefully chosen to be sensitive to fluctuations of performance in the healthy spectrum of cognitive performance and everyday functionality. The everyday functionality assessment tools were modified to reflect the local cultural context (i.e., EPT-G and OTDL-G). The results showed that performance on all everyday functionality measures declines with age (.197 < r < .509). Statistically significant correlations emerged between the cognitive performance and everyday functionality assessments, ranging from r = 0.202 to r = 0.510. A series of independent regression analyses including the scores of the cognitive assessments yielded statistically significant models that explained 20.9% to 32.4% (adjusted R²) of the variance in the everyday functionality score indexes. All everyday functionality measures were independently predicted by the TMT B-A index, an indicator of executive function. Stepwise regression analyses showed that TMT B-A and age were statistically significant independent predictors of EPT-G and OTDL-G. It was concluded that everyday functionality declines with age and that cognitive performance and everyday functionality may be related in the healthy spectrum of aging. Age does not seem to be the sole contributing factor in the decline of everyday functionality; executive control appears to contribute as well. Moreover, it was concluded that the EPT-G and OTDL-G are valuable tools for assessing everyday functionality in Greek seniors who are not cognitively challenged, especially for research purposes. Future research should examine the contributing factors of better cognitive vitality, especially executive control, as vital for the maintenance of independent living capacity with aging.
Keywords: cognition, everyday functionality, aging, cognitive decline, healthy aging, Greece
Procedia PDF Downloads 525
606 Hyper Parameter Optimization of Deep Convolutional Neural Networks for Pavement Distress Classification
Authors: Oumaima Khlifati, Khadija Baba
Abstract:
Pavement distress is the main factor responsible for the deterioration of road structural durability, damage to vehicles, and reduced driver comfort. Transportation agencies spend a high proportion of their funds on pavement monitoring and maintenance. The auscultation of pavement distress has been based on manual surveys, which are extremely time-consuming, labor-intensive, and require domain expertise. Automatic distress detection is therefore needed to reduce the cost of manual inspection and to avoid more serious damage by implementing the appropriate remediation actions at the right time. Inspired by recent deep learning applications, this paper proposes an algorithm for automatic road distress detection and classification using a Deep Convolutional Neural Network (DCNN). In this study, the types of pavement distress are classified as transverse or longitudinal cracking, alligator cracking, pothole, and intact pavement. The dataset used in this work is composed of public asphalt pavement images. In order to learn the structure of the different types of distress, the DCNN models are trained and tested as a multi-label classification task. In addition, to obtain the highest accuracy for our model, we adjust the structural optimization hyperparameters, such as the number of convolution and max-pooling layers, the number and size of filters, the loss function, the activation functions, and the optimizer, as well as the fine-tuning hyperparameters, namely batch size and learning rate. The optimization of the model is executed by checking all feasible combinations and selecting the best-performing one. After optimization, the model's performance metrics are calculated, describing the training and validation accuracies, precision, recall, and F1 score.
Keywords: distress pavement, hyperparameters, automatic classification, deep learning
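Exhaustively checking all feasible hyperparameter combinations, as described, amounts to a grid search over the arguments of a model-builder function. A compact sketch with tf.keras; the five classes match the abstract, but every grid value, the input size and the placeholder data are assumptions:

```python
import tensorflow as tf

# Placeholder tensors standing in for the public asphalt pavement images
x_train = tf.random.normal([64, 128, 128, 3])
y_train = tf.random.uniform([64], maxval=5, dtype=tf.int32)
x_val = tf.random.normal([16, 128, 128, 3])
y_val = tf.random.uniform([16], maxval=5, dtype=tf.int32)

def build_dcnn(n_filters, kernel, lr, n_classes=5):
    model = tf.keras.Sequential([
        tf.keras.layers.Input((128, 128, 3)),
        tf.keras.layers.Conv2D(n_filters, kernel, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(n_filters * 2, kernel, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Structural and fine-tuning hyperparameter grid (all values assumed)
grid = [(nf, k, lr, bs)
        for nf in (16, 32) for k in (3, 5) for lr in (1e-3, 1e-4) for bs in (16, 32)]
best = None
for nf, k, lr, bs in grid:
    hist = build_dcnn(nf, k, lr).fit(x_train, y_train, batch_size=bs, epochs=2,
                                     validation_data=(x_val, y_val), verbose=0)
    acc = max(hist.history["val_accuracy"])
    if best is None or acc > best[0]:
        best = (acc, {"filters": nf, "kernel": k, "lr": lr, "batch": bs})
print("best validation accuracy and configuration:", best)
```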
Procedia PDF Downloads 94
605 Analyzing Nonsimilar Convective Heat Transfer in Copper/Alumina Nanofluid with Magnetic Field and Thermal Radiations
Authors: Abdulmohsen Alruwaili
Abstract:
A partial differential system featuring momentum and energy balance is often used to describe simulations of flow initiation and thermal shifting in boundary layers. The buoyancy force, expressed in terms of temperature, is factored into the momentum balance equation. The buoyancy force causes the flow quantities to fluctuate along the streamwise direction X; therefore, the problem can, to the best of our knowledge, be analyzed through nonsimilar modeling. In this analysis, a nonsimilar model is developed for radiative mixed convection of a magnetized power-law nanoliquid flow over a vertical plate installed in a stationary fluid. Upward linear stretching initiates the flow in the vertical direction. The nanofluid is assumed to be a composite of copper (Cu) and alumina (Al₂O₃) nanoparticles, and viscous dissipation is taken to be negligible. The nonsimilar system is treated by the local nonsimilarity (LNS) method via the numerical algorithm bvp4c. Surface temperature and the flow field are shown visually in relation to factors such as mixed convection, magnetic field strength, nanoparticle volume fraction, radiation parameters, and the Prandtl number. The effects of the magnetic and mixed convection parameters on the rate of energy transfer and the friction coefficient are presented in tabular form. The results obtained are compared to the published literature. It is found that the presence of nanoparticles significantly improves the temperature profile of the considered nanoliquid. It is also observed that as the magnetic parameter increases, the velocity profile decreases. Enhancement of the nanoparticle concentration and the mixed convection parameter improves the velocity profile.
Keywords: nanofluid, power law model, mixed convection, thermal radiation
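The boundary-value approach used above (LNS equations integrated with bvp4c) has a direct Python analogue in scipy's solve_bvp. Below is an illustrative similarity-form mixed-convection system for a linearly stretching surface, not the authors' full nonsimilar formulation; Pr and the mixed-convection parameter λ are assumed values:

```python
import numpy as np
from scipy.integrate import solve_bvp

Pr, lam = 6.2, 0.5   # Prandtl number and mixed-convection parameter (assumed)

def odes(eta, y):
    # y = [f, f', f'', theta, theta']
    f, fp, fpp, th, thp = y
    fppp = fp**2 - f * fpp - lam * th    # f''' + f f'' - f'^2 + lam*theta = 0
    thpp = -Pr * f * thp                 # theta'' + Pr f theta' = 0
    return np.vstack([fp, fpp, fppp, thp, thpp])

def bc(ya, yb):
    # f(0)=0, f'(0)=1 (linear stretching), theta(0)=1; f'(inf)=0, theta(inf)=0
    return np.array([ya[0], ya[1] - 1, ya[3] - 1, yb[1], yb[3]])

eta = np.linspace(0, 10, 400)
y0 = np.zeros((5, eta.size))
y0[1] = np.exp(-eta)                     # decaying initial guesses
y0[3] = np.exp(-eta)
sol = solve_bvp(odes, bc, eta, y0)

print("skin friction   f''(0)    =", sol.y[2, 0])
print("Nusselt proxy  -theta'(0) =", -sol.y[4, 0])
```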
Procedia PDF Downloads 35
604 Solving LWE by Progressive Pumps and Its Optimization
Authors: Leizhang Wang, Baocang Wang
Abstract:
The General Sieve Kernel (G6K) is currently considered the fastest algorithm for the shortest vector problem (SVP) and is the record holder of the open SVP challenge. We study the lattice-basis quality improvement effects of the Workout proposed in G6K, which is composed of a series of pumps to solve SVP. Firstly, we use low-dimensional pump output bases to propose a predictor for the quality of high-dimensional pump output bases. Both theoretical analysis and experimental tests are performed to illustrate that it is more computationally expensive to solve LWE problems using G6K's default SVP solving strategy (Workout) than with lattice reduction algorithms (e.g., BKZ 2.0, Progressive BKZ, Pump, and Jump BKZ) that use sieving as their SVP oracle. Secondly, the default Workout in G6K is optimized to achieve stronger reduction at lower computational cost. Thirdly, we combine the optimized Workout and the pump output basis quality predictor to further reduce the computational cost by optimizing the LWE instance selection strategy. In fact, we can solve the TU Darmstadt LWE challenge (n = 65, q = 4225, α = 0.005) 13.6 times faster than with the G6K default Workout. Fourthly, we consider a combined two-stage LWE solving strategy (preprocessing by BKZ and a big pump). Both stages use the dimension-for-free technique to give new theoretical security estimations of several LWE-based cryptographic schemes. The security estimations show that the security of these schemes under the conservative NewHope core-SVP model is somewhat overestimated. In addition, in the case of the LAC scheme, the LWE instance selection strategy can be optimized to further improve the LWE-solving efficiency, by as much as 15% and 57%. Finally, some experiments are implemented to examine the effects of our strategies on Normal Form LWE problems, and the results demonstrate that the combined strategy is four times faster than that of NewHope.
Keywords: LWE, G6K, pump estimator, LWE instances selection strategy, dimension for free
Procedia PDF Downloads 60
603 3D Geological Modeling and Engineering Geological Characterization of Shallow Subsurface Soil and Rock of Addis Ababa, Ethiopia
Authors: Biruk Wolde, Atalay Ayele, Yonatan Garkabo, Trufat Hailmariam, Zemenu Germewu
Abstract:
Comprehensive three-dimensional (3D) geological modeling and engineering geological characterization of shallow subsurface soils and rocks are essential for a wide range of geotechnical and seismological engineering applications, particularly in urban environments. The spatial distribution and geological variation of the shallow subsurface of Addis Ababa city have not been studied so far in terms of geological and geotechnical modeling. This study aims at the construction of a 3D geological model and provides insight into the engineering geological characteristics of the shallow subsurface soil and rock of Addis Ababa city. The 3D geological model was constructed using more than 1500 geotechnical boreholes, well-drilling data, and geological maps. The well-known geostatistical kriging 3D interpolation algorithm was applied to visualize the spatial distribution and geological variation of the shallow subsurface. Owing to the complex nature of the geological formations and the vertical and lateral variation of the geological profiles, the Horizons-Solid command was selected via the Groundwater Modelling System (GMS) graphical user interface software. For the engineering geological characterization of typical soils and rocks, both index and engineering laboratory tests were used. The geotechnical properties of the soils and rocks vary from place to place due to the uneven nature of the subsurface formations observed in the study areas. The constructed model ascertains the thickness, extent, and 3D distribution of the important geological units of the city. This study is the first comprehensive research work on 3D geological modeling and subsurface characterization of soils and rocks in Addis Ababa city, and its outcomes will be important for future research on subsurface conditions in the city. Furthermore, these findings provide a reference for developing a geo-database for the city.
Keywords: 3d geological modeling, addis ababa, engineering geology, geostatistics, horizons-solid
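The kriging interpolation step can be sketched with PyKrige's 3-D ordinary kriging; the borehole coordinates and the interpolated property below are synthetic placeholders, not the Addis Ababa dataset:

```python
import numpy as np
from pykrige.ok3d import OrdinaryKriging3D

# Synthetic borehole records: easting, northing, depth and a layer property
rng = np.random.default_rng(5)
x = rng.uniform(0, 1000, 150)
y = rng.uniform(0, 1000, 150)
z = rng.uniform(0, 60, 150)
value = 0.02 * z + 0.001 * x + rng.normal(0, 0.5, 150)  # e.g. a stiffness index

ok3d = OrdinaryKriging3D(x, y, z, value, variogram_model="spherical")
gridx = np.linspace(0, 1000, 20)
gridy = np.linspace(0, 1000, 20)
gridz = np.linspace(0, 60, 10)
kriged, variance = ok3d.execute("grid", gridx, gridy, gridz)  # 3-D block model
print("interpolated block shape:", kriged.shape)
```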
Procedia PDF Downloads 101
602 Understanding the Semantic Network of Tourism Studies in Taiwan by Using Bibliometrics Analysis
Authors: Chun-Min Lin, Yuh-Jen Wu, Ching-Ting Chung
Abstract:
The formulation of tourism policies requires objective academic research and evidence as support, especially research from local academia. Taiwan is a small island whose economic growth relies heavily on tourism revenue, and the Taiwanese government has been devoted to the promotion of the tourism industry over the past few decades. Scientific research outcomes by Taiwanese scholars may and will help lay the foundation for drafting future tourism policy by the government. In this study, a total of 120 full journal articles published between 2008 and 2016 in the Journal of Tourism and Leisure Studies (JTSL) were examined to explore the trends of tourism research in Taiwan. JTSL is one of the most important Taiwanese journals in the tourism discipline; it focuses on tourism-related issues and uses traditional Chinese as the language of study. The co-word analysis method from bibliometrics was employed for the semantic analysis in this study. When analyzing Chinese words and phrases, word segmentation is a crucial step. It must be carried out initially and precisely in order to obtain meaningful words or word chunks for further frequency calculation. A word segmentation system based on an N-gram algorithm was developed in this study to conduct the semantic analysis, and 100 groups of meaningful phrases with the highest recurrence rates were located. Subsequently, co-word analysis was employed for semantic classification. The results showed that the themes of tourism research in Taiwan in recent years cover tourism education, environmental protection, hotel management, information technology, and senior tourism. The results give insight into the related issues and serve as a reference for tourism-related policy making and follow-up research.
Keywords: bibliometrics, co-word analysis, word segmentation, tourism research, policy
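An N-gram segmenter of the kind described can be approximated by counting recurrent character chunks (the analogue, for unsegmented Chinese text, of word candidates) and then tallying their co-occurrence within documents. A bare-bones sketch; the thresholds and sample corpus are made up:

```python
from collections import Counter
from itertools import combinations

def ngrams(text, n):
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def segment(corpus, n_range=(2, 4), min_count=2):
    """Collect recurrent n-gram chunks as candidate phrases."""
    counts = Counter()
    for doc in corpus:
        for n in range(n_range[0], n_range[1] + 1):
            counts.update(ngrams(doc, n))
    return {g for g, c in counts.items() if c >= min_count}

def coword_counts(corpus, phrases):
    """Count how often two candidate phrases appear in the same document."""
    pairs = Counter()
    for doc in corpus:
        present = sorted(p for p in phrases if p in doc)
        pairs.update(combinations(present, 2))
    return pairs

corpus = ["tourism education policy", "hotel management policy", "tourism policy"]
phrases = segment(corpus)
print(coword_counts(corpus, phrases).most_common(5))
```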
Procedia PDF Downloads 229
601 Effect of Pulsed Electrical Field on the Mechanical Properties of Raw, Blanched and Fried Potato Strips
Authors: Maria Botero-Uribe, Melissa Fitzgerald, Robert Gilbert, Kim Bryceson, Jocelyn Midgley
Abstract:
French fry manufacturing involves a series of processes in which the structural properties of potatoes are modified to produce the crispy French fries consumers enjoy. In addition to the traditional French fry manufacturing process, the industry is applying a relatively new process called pulsed electrical field (PEF) treatment to the whole potatoes. There is a wealth of information on the technical treatment conditions of PEF; however, there is a lack of information about its effect on the structural properties that determine texture and about its synergistic interactions with the other manufacturing steps of French fry production. The effect of PEF on the starch gelatinisation properties of Russet Burbank potatoes was measured using a differential scanning calorimeter. Cation content (K⁺, Ca²⁺ and Mg²⁺) was determined by inductively coupled plasma optical emission spectrophotometry. Firmness and toughness of raw and blanched potatoes were determined in a uniaxial compression test. Moisture content was determined in a vacuum oven, and oil content was measured using the Soxhlet system with hexane. The final texture of the French fries, their crispness, was determined using a three-point bend test. Triangle tests were conducted to determine whether consumers were able to perceive sensory differences between French fries that were PEF-treated and those that were not. The concentrations of K⁺, Ca²⁺ and Mg²⁺ decreased significantly in the raw potatoes after the PEF treatment. The PEF treatment significantly increased the modulus of elasticity, compression strain, compression force and toughness of the raw potatoes. The PEF-treated raw potatoes were firmer and stiffer; their structural integrity held together longer, resisting higher force before fracture and stretching further than that of the untreated ones. The stress-strain relationship exhibited by the PEF-treated raw potatoes could be due to an increase in the permeability of the plasmalemma and tonoplast, allowing Ca²⁺ and Mg²⁺ cations to reach the cell wall and middle lamella and become available for cross-linking with the pectin molecules. The PEF-treated raw potatoes exhibited slightly higher onset gelatinisation temperatures, similar peak temperatures and lower gelatinisation ranges than the untreated raw potatoes. The final moisture content of the French fries was not significantly affected by the PEF treatment. The oil content in the PEF-treated potatoes was lower than in the untreated French fries; however, the difference was not statistically significant at 5%. The PEF treatment did not have an overall significant effect on French fry crispness (modulus of elasticity), flexure stress or strain. The triangle tests showed that most consumers could not detect a difference between French fries that received a PEF treatment and those that did not.
Keywords: french fries, mechanical properties, PEF, potatoes
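The triangle tests mentioned at the end can be checked for significance with a one-sided binomial test, since a guessing panellist picks the odd sample with probability 1/3. The counts below are invented, not the study's:

```python
from scipy.stats import binomtest

n_panellists = 36   # total triangle-test responses (invented)
n_correct = 14      # correctly identified the odd sample (invented)

# Under the null (no detectable difference) each response is correct with p = 1/3
result = binomtest(n_correct, n_panellists, p=1/3, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")  # p > 0.05: consumers cannot tell PEF apart
```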
Procedia PDF Downloads 236
600 Design and Development of High Strength Aluminium Alloy from Recycled 7xxx-Series Material Using Bayesian Optimisation
Authors: Alireza Vahid, Santu Rana, Sunil Gupta, Pratibha Vellanki, Svetha Venkatesh, Thomas Dorin
Abstract:
Aluminum is the preferred material for lightweight applications, and its alloys are constantly improving. The high-strength 7xxx alloys have been extensively used for structural components in the aerospace and automobile industries for the past 50 years. In the next decade, a great number of airplanes will be retired, providing an obvious source of valuable used metal and creating great demand for cost-effective methods to reuse these alloys. The design of proper aerospace alloys is primarily based on optimizing strength and ductility, both of which can be improved by controlling the alloying element additions as well as the heat treatment conditions. In this project, we explore the design of high-performance alloys with 7xxx as the base material. These designed alloys have to be optimized and improved to compare with modern 7xxx-series alloys and to remain competitive for aircraft manufacturing. Aerospace alloys are extremely complex, with multiple alloying elements and numerous processing steps, making optimization often intensive and costly. In the present study, we used the Bayesian optimization algorithm, a well-known adaptive design strategy, to optimize this multi-variable system. An Al alloy was proposed and the relevant heat treatment schedules were optimized, using the tensile yield strength as the output to maximize. The designed alloy has a maximum yield strength and ultimate tensile strength of more than 730 and 760 MPa, respectively, and is thus comparable to modern high-strength 7xxx-series alloys. The microstructure of this alloy was characterized by electron microscopy, indicating that the increased strength of the alloy is due to the presence of a high number density of refined precipitates.
Keywords: aluminum alloys, Bayesian optimization, heat treatment, tensile properties
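An adaptive-design loop like the one described can be sketched with scikit-optimize's Gaussian-process minimizer; here the "experiment" is a stand-in function, and the design variables (one alloying addition and two heat-treatment settings) are assumptions, not the study's parameter set:

```python
from skopt import gp_minimize
from skopt.space import Real

# Assumed design space: Zn addition (wt%), ageing temperature (C), ageing time (h)
space = [Real(4.0, 8.0, name="zn_wt"),
         Real(100.0, 200.0, name="age_temp"),
         Real(1.0, 48.0, name="age_time")]

def negative_yield_strength(params):
    """Stand-in for a tensile test on one alloy/heat-treatment combination.
    gp_minimize minimizes, so return minus the measured yield strength."""
    zn, temp, time = params
    ys = 500 + 40 * zn - 0.02 * (temp - 150) ** 2 - 0.5 * abs(time - 24)
    return -ys  # synthetic response surface

res = gp_minimize(negative_yield_strength, space, n_calls=25, random_state=0)
print("best predicted yield strength (MPa):", -res.fun)
print("suggested composition / schedule:", res.x)
```

In practice each call of the objective would be a real tensile test, and the Gaussian-process surrogate decides which alloy/heat-treatment combination to try next.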
Procedia PDF Downloads 120
599 FlameCens: Visualization of Expressive Deviations in Music Performance
Authors: Y. Trantafyllou, C. Alexandraki
Abstract:
Music interpretation refers to the way musicians shape their performance by deliberately deviating from the composer's intentions, which are commonly communicated via some form of music transcription, such as a music score. For transcribed and non-improvised music, musical expression is manifested by introducing subtle deviations in tempo, dynamics and articulation during the evolution of the performance. This paper presents an application, named FlameCens, which, given two recordings of the same piece of music, presumably performed by different musicians, allows visualising deviations in tempo and dynamics during playback. The application may also compare a certain performance to the music score of that piece (i.e., a MIDI file), which may be thought of as an expression-neutral representation of the piece, hence depicting the expressive cues employed by certain performers. FlameCens uses the Dynamic Time Warping algorithm to compare two audio sequences, based on CENS (Chroma Energy distribution Normalized Statistics) audio features. Expressive deviations are illustrated by a moving flame, which is generated by an animation of particles. The length of the flame is mapped to deviations in dynamics, while the slope of the flame is mapped to tempo deviations, so that faster tempo tilts the slope to the right and slower tempo tilts the slope to the left. A constant slope signifies no tempo deviation. The detected deviations in tempo and dynamics can additionally be recorded in a text file, which allows for offline investigation. Moreover, in the case of monophonic music, the color of the particles is used to convey the pitch of the notes during the performance. FlameCens has been implemented in Python and is openly available via GitHub. The application has been experimentally validated for different music genres, including classical, contemporary, jazz and popular music. These experiments revealed that FlameCens can be a valuable tool for music specialists (i.e., musicians or musicologists) to investigate the expressive performance strategies employed by different musicians, as well as for music audiences to enhance their listening experience.
Keywords: audio synchronization, computational music analysis, expressive music performance, information visualization
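The core alignment step, DTW over CENS chroma features, is available directly in librosa. A minimal sketch of comparing two recordings; the file names are placeholders, and the slope/RMS readouts are a simplified version of the tempo/dynamics mapping the abstract describes:

```python
import numpy as np
import librosa

# Two performances of the same piece (file names are placeholders)
y1, sr1 = librosa.load("performance_a.wav")
y2, sr2 = librosa.load("performance_b.wav")

# CENS chroma features are robust to timbre and dynamics, which suits alignment
C1 = librosa.feature.chroma_cens(y=y1, sr=sr1)
C2 = librosa.feature.chroma_cens(y=y2, sr=sr2)

# Dynamic Time Warping: D is the accumulated cost, wp the optimal warping path
D, wp = librosa.sequence.dtw(C1, C2, metric="cosine")
wp = wp[::-1]                        # the path is returned end-to-start

# Local slope of the warping path approximates the relative tempo of the two takes
d0, d1 = np.diff(wp[:, 0]), np.diff(wp[:, 1])
slope = d1[d0 > 0] / d0[d0 > 0]      # > 1 means performance B is locally slower
print("median relative tempo:", np.median(slope))

# Frame-level RMS energy is a simple proxy for deviations in dynamics
print("mean RMS:", librosa.feature.rms(y=y1).mean(), librosa.feature.rms(y=y2).mean())
```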
Procedia PDF Downloads 131
598 Investigation of the Carbon Dots Optical Properties Using Laser Scanning Confocal Microscopy and Time-Resolved Fluorescence Microscopy
Authors: M. S. Stepanova, V. V. Zakharov, P. D. Khavlyuk, I. D. Skurlov, A. Y. Dubovik, A. L. Rogach
Abstract:
Carbon dots are small carbon-based spherical nanoparticles, typically less than 10 nm in size, which can be modified by surface passivation and heteroatom doping. The light-absorbing ability of carbon dots has attracted significant attention in photoluminescence for bioimaging and fluorescence sensing applications owing to their advantages, such as tunable fluorescence emission, photo- and thermostability, and low toxicity. In this study, carbon dots were synthesized by the solvothermal method from citric acid and ethylenediamine dissolved in water. The solution was heated for 5 hours at 200°C and then cooled down to room temperature. Carbon dot films were obtained by evaporation from a high-concentration aqueous solution. An increase in both luminescence intensity and light transmission was obtained as a result of exposing part of the carbon dot film to a 405 nm laser, detected using a confocal laser scanning microscope (LSM 710, Zeiss). A blueshift of the luminescence spectrum of up to 35 nm is observed as the luminescence intensity increases more than twofold. The exact value of the shift depends on the duration of the laser exposure. This shift can be caused by the modification of surface groups on the carbon dots, which are responsible for the long-wavelength luminescence. In addition, a shift of the absorption peak by 10 nm and a decrease in the optical density at a wavelength of 350 nm are detected; this peak is responsible for the absorption of the surface groups. The sample was also studied with a time-resolved confocal fluorescence microscope (MicroTime 100, PicoQuant), which made it possible to acquire time-resolved photoluminescence images and construct emission decays of the laser-exposed and non-exposed areas. A pulsed laser with a 5 MHz repetition rate was used as the photoluminescence excitation source. The photoluminescence decay was approximated by two exponentials. In the laser-exposed area, the amplitude of the first lifetime component (A1) is twice as large as before, with an increased τ1, while the amplitude of the second lifetime component (A2) decreases. These changes evidence a modification of the surface groups of the carbon dots. The detected effect can be used to create thermostable fluorescent marks whose physical size is bounded by the diffraction limit of the optics used for exposure (~200-300 nm), to improve the optical properties of carbon dots, or in the field of optical encryption. Acknowledgements: This work was supported by the Ministry of Science and Higher Education of the Russian Federation, goszadanie no. 2019-1080, and financially supported by the Government of the Russian Federation, Grant 08-08.
Keywords: carbon dots, photoactivation, optical properties, photoluminescence and absorption spectra
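The two-exponential decay analysis described can be reproduced with a standard curve fit; the synthetic decay trace, amplitudes and lifetimes below are illustrative, not the measured data:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component photoluminescence decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 50, 500)                          # ns
rng = np.random.default_rng(2)
counts = biexp(t, 0.7, 3.0, 0.3, 12.0) + rng.normal(0, 0.01, t.size)

p0 = (0.5, 2.0, 0.5, 10.0)                           # initial guesses
(a1, tau1, a2, tau2), _ = curve_fit(biexp, t, counts, p0=p0)
print(f"A1={a1:.2f} tau1={tau1:.2f} ns   A2={a2:.2f} tau2={tau2:.2f} ns")

# Intensity-weighted average lifetime, a common summary for such fits
tau_avg = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(f"average lifetime = {tau_avg:.2f} ns")
```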
Procedia PDF Downloads 165
597 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life expectancy of five years; early diagnosis, detection and prediction reduce the need for the many treatment options that carry the risk of invasive surgery, increasing the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results without requiring further opinion. The lung cavities are extracted, and the background portion other than the two lung cavities is completely removed, with the right and left lungs segmented separately. The region property measurements area, perimeter, diameter, centroid and eccentricity are taken for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient's condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
Keywords: artificial neural networks, ANN, discrete wavelet transform, DWT, gray-level co-occurrence matrix, GLCM, k-nearest neighbor, KNN, region of interest, ROI
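The feature pipeline described, DWT for the main features, GLCM statistics for texture, and KNN for the normal/abnormal decision, can be sketched roughly as follows; a synthetic array stands in for a segmented CT slice, and the feature choices are a simplified approximation:

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def extract_features(img):
    """DWT subband energies plus GLCM texture statistics for one ROI."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")        # single-level 2-D DWT
    dwt_feats = [np.mean(np.abs(c)) for c in (cA, cH, cV, cD)]
    glcm = graycomatrix(img.astype(np.uint8), distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    tex_feats = [graycoprops(glcm, p)[0, 0]
                 for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array(dwt_feats + tex_feats)

# Synthetic stand-ins for segmented lung ROIs (normal vs. abnormal)
rng = np.random.default_rng(4)
imgs = rng.integers(0, 256, size=(40, 64, 64))
labels = rng.integers(0, 2, size=40)                 # 0 normal, 1 abnormal

X = np.array([extract_features(im) for im in imgs])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:30], labels[:30])
print("held-out accuracy:", knn.score(X[30:], labels[30:]))
```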
Procedia PDF Downloads 155
596 Classifying Affective States in Virtual Reality Environments Using Physiological Signals
Authors: Apostolos Kalatzis, Ashish Teotia, Vishnunarayan Girishan Prabhu, Laura Stanley
Abstract:
Emotions are functional behaviors influenced by thoughts, stimuli, and other factors that induce neurophysiological changes in the human body. Understanding and classifying emotions is challenging, as individuals have varying perceptions of their environments. Therefore, it is crucial that there be publicly available databases and virtual reality (VR) based environments that have been scientifically validated for emotion classification. This study utilized two commercially available VR applications (Guided Meditation VR™ and Richie's Plank Experience™) to induce an acute stress state and a calm state among participants. Subjective and objective measures were collected to create a validated multimodal dataset and classification scheme for affective state classification. The participants' subjective measures included the Self-Assessment Manikin, emotion cards and a 9-point Visual Analogue Scale for perceived stress, collected using a Virtual Reality Assessment Tool developed by our team. The participants' objective measures included electrocardiogram and respiration data collected from 25 participants (15 M, 10 F; mean age 22.28 ± 4.92). The features extracted from these data included heart rate variability components and respiration rate, both of which were used to train two machine learning models. The subjective responses validated the efficacy of the VR applications in eliciting the two desired affective states; for classifying the affective states, a logistic regression (LR) model and a support vector machine (SVM) with a linear kernel were developed. The LR outperformed the SVM and achieved 93.8%, 96.2% and 93.8% leave-one-subject-out cross-validation accuracy, precision and recall, respectively. The VR assessment tool and the data collected in this study are publicly available for other researchers.
Keywords: affective computing, biosignals, machine learning, stress database
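Leave-one-subject-out evaluation of the logistic regression, as reported, maps onto scikit-learn's LeaveOneGroupOut with participant IDs as groups. The features and labels below are synthetic, not the published dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic HRV + respiration-rate feature rows, one block per participant
rng = np.random.default_rng(6)
n_subjects, rows_per = 25, 20
X = rng.normal(size=(n_subjects * rows_per, 5))      # e.g. [SDNN, RMSSD, LF, HF, resp]
y = rng.integers(0, 2, size=n_subjects * rows_per)   # 0 calm, 1 acute stress
groups = np.repeat(np.arange(n_subjects), rows_per)  # participant ID per row

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```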
Procedia PDF Downloads 144
595 Influence of Non-Formal Physical Education Curriculum, Based on Olympic Pedagogy, for 11-13 Years Old Children Physical Development
Authors: Asta Sarkauskiene
Abstract:
The pedagogy of Olympic education is based upon the main idea of P. de Coubertin that physical education can and has to support the education of the perfect person, the one who was an aspiration in archaic Greece, when the human being was viewed as one whole composed of three interconnected functions: physical, psychical and spiritual. The following research question was formulated in the present study: What curriculum of non-formal physical education in school can positively influence the physical development of 11-13-year-old children? The aim of this study was to formulate and implement a curriculum of non-formal physical education based on Olympic pedagogy and to assess its effectiveness for the physical development of 11-13-year-old children. The research was conducted in two stages. In the first stage, 51 fifth-grade children (Mage = 11.3 years) participated in a quasi-experiment for two years. The children were organized into two groups: E and C. Both groups shared the same duration (1 hour) and frequency (twice a week) but differed in their education curriculum. The experimental group (E) worked under the program developed by us. The priorities of the E group were: training of physical powers in unity with psychical and spiritual powers; integral growth of physical development, physical activity, physical health, and physical fitness; integration of children with lower health and physical fitness levels; and content that corresponds to children's needs, abilities, and physical and functional powers. The control group (C) worked according to NFPE programs prepared by teachers and approved by the school principal and the school methodical group. The priorities of the C group were: teaching and development of motor actions; training of physical qualities; and training of the most physically capable children. In the second stage (after four years), 72 sixth graders (Mage = 13.00) from the same comprehensive schools participated in the research. The children were organized into a first and a second group. The curriculum of the first group was modified, while that of the second was the same as for group C. In both groups, anthropometric (height, weight, BMI) and physiometric (vital capacity (VC), right and left handgrip strength) measurements were conducted. A dependent t-test indicated that over two years the height, weight and right and left handgrip strength indices of the girls and boys of the E and C groups increased significantly, p < 0.05. The BMI indices of the E group girls and boys did not change significantly, p > 0.05, i.e., the height-to-weight ratio of the girls who participated in NFPE in school became more proportional. The VC indices of the C group girls did not differ significantly, p > 0.05. An independent t-test indicated that in the first and second research stages the differences in the anthropometric and physiometric measurements of the groups were not significant, p > 0.05. The formulated and implemented curriculum of non-formal education in school, based on Olympic pedagogy, had the biggest positive influence on decreasing 11-13-year-old children's BMI and increasing their VC.
Keywords: non-formal physical education, olympic pedagogy, physical development, health sciences
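The paired (dependent) and independent t-tests used in the two stages correspond directly to scipy's ttest_rel and ttest_ind; the numbers below are invented for illustration, not the study's measurements:

```python
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(8)

# Dependent t-test: the same children measured before and after two years
height_before = rng.normal(148, 6, 26)
height_after = height_before + rng.normal(9, 2, 26)  # growth over the period
t, p = ttest_rel(height_after, height_before)
print(f"within-group change: t = {t:.2f}, p = {p:.4f}")   # expect p < 0.05

# Independent t-test: experimental (E) vs. control (C) group at one stage
bmi_e = rng.normal(18.2, 1.8, 26)
bmi_c = rng.normal(18.5, 1.9, 25)
t, p = ttest_ind(bmi_e, bmi_c)
print(f"between-group difference: t = {t:.2f}, p = {p:.4f}")  # expect p > 0.05
```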
Procedia PDF Downloads 564
594 Understanding the Common Antibiotic and Heavy Metal Resistant-Bacterial Load in the Textile Industrial Effluents
Authors: Afroza Parvin, Md. Mahmudul Hasan, Md. Rokunozzaman, Papon Debnath
Abstract:
The effluents of textile industries contain considerable amounts of heavy metals, causing potential microbial metal loads if discharged into the environment without treatment. Aim: In the present study, both lactose-fermenting and non-lactose-fermenting bacterial isolates were obtained from the textile industrial effluents of a specific region of Bangladesh, Savar, to compare and understand the heavy metal load in these microorganisms and to determine the effects of heavy metal resistance properties on antibiotic resistance. Methods: Five different textile industrial canals of Savar were selected, and effluent samples were collected between June and August 2016. The total bacterial count (TBC) was recorded from day 1 to day 5 for sample dilutions from 10⁻⁶ to 10⁻¹⁰. All the isolates were isolated and selected using four differential media and tested for the determination of the minimum inhibitory concentrations (MIC) of heavy metals and for antibiotic susceptibility, by the plate assay method and the modified Kirby-Bauer disc diffusion method, respectively. To detect the combined effect of heavy metals and antibiotics, a binary exposure experiment was performed, and for plasmid profiling, plasmid DNA was extracted from some selected isolates by the alkaline lysis method. Results: In most cases, the colony-forming units (CFU) per plate for a 50 µl diluted sample were uncountable at the 10⁻⁶ dilution but countable at the 10⁻¹⁰ dilution, and the counts did not vary much from canal to canal. A total of 50 Shigella-like, 50 Salmonella-like, and 100 E. coli (Escherichia coli)-like bacterial isolates were selected for this study. The MIC for nickel (Ni) was less than or equal to 0.6 mM for 100% of the Shigella-like and Salmonella-like isolates, whereas only 3% of the E. coli-like isolates had the same MIC. The MIC for chromium (Cr) was less than or equal to 2.0 mM for 16% of the Shigella-like, 20% of the Salmonella-like, and 17% of the E. coli-like isolates. Around 60% of both the Shigella-like and Salmonella-like isolates, but only 20% of the E. coli-like isolates, had an MIC of less than or equal to 1.2 mM for lead (Pb). The most prevalent resistance pattern, for azithromycin (AZM), was found in 38% of the Shigella-like and 48% of the Salmonella-like isolates; for the E. coli-like isolates, the highest pattern (36%) was found for sulfamethoxazole-trimethoprim (SXT). In the binary exposure experiment, the antibiotic zone of inhibition mostly increased in the presence of heavy metals for all types of isolates. The largest plasmids found were 21 kb and 14 kb for the lactose-fermenting and non-lactose-fermenting isolates, respectively. Conclusion: Microbial resistance to antibiotics and metal ions poses potential health hazards, because these traits are generally associated with transmissible plasmids. Microorganisms resistant to antibiotics and tolerant to metals appear as a result of exposure to metal-contaminated environments.
Keywords: antibiotics, effluents, heavy metals, minimum inhibitory concentration, resistance
Procedia PDF Downloads 316
593 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide
Authors: Michalina Aniola, Andrzej Katrusiak
Abstract:
4,4-Bipyridine is an important compound, often used in chemical practice and more recently frequently applied in designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, at ambient pressure is orthorhombic, space group P2₁2₁2₁ (phase a). Its hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P2₁/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. The high-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Excalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected up to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallizations. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b the HOH-Br⁻ chains double the frequency of their zigzag motifs compared to phase a, and the 44biPyH⁺ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, while in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b. Thus the volume drop on the transition to the high-pressure phase b depends entirely on the shear strain of the lattice. Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. When the compound precipitated from a saturated methanol solution at 0.1 GPa, a temperature of 423 K was required to dissolve all of the sample, and the subsequent slow recrystallization under isochoric conditions resulted in the disalt 4,4-bipyridinium dibromide. For the 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, then dissolved under isochoric conditions at 473 K and recrystallized by slow controlled cooling, a reaction of N,N-dimethylation took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed, and the free base 44biPy and H₂O remained in the solution. The observed reactions indicate that high pressure destabilizes ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure.
Keywords: conformation, high-pressure, negative area compressibility, polymorphism
Procedia PDF Downloads 247592 Cytotoxicological Evaluation of a Folate Receptor Targeting Drug Delivery System Based on Cyclodextrins
Authors: Caroline Mendes, Mary McNamara, Orla Howe
Abstract:
For chemotherapy, a drug delivery system should be able to specifically target cancer cells and deliver the therapeutic dose without affecting normal cells. Folate receptors (FR) can be considered key targets since they are commonly over-expressed in cancer cells, and they are the molecular marker used in this study. Here, cyclodextrin (CD) has been studied as a vehicle for delivering the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within the CD cavity. In this study, β-CD has been modified using folic acid so as to specifically target the FR molecular marker. Thus, the system studied here for drug delivery consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments also showed that levels of FR protein in CaCo-2 cells are high, while levels in SKOV-3, HeLa and MCF-7 cells are moderate. A549 and BEAS-2B cells express low levels of FR protein. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays. Forty-eight hours of treatment with the free drug and the complex resulted in IC50 values of 93.9 ± 9.2 µM and 56.0 ± 4.0 µM for CaCo-2 for free MTX and CDEnFA:MTX, respectively; 118.2 ± 10.8 µM and 97.8 ± 12.3 µM for MCF-7; 36.4 ± 6.9 µM and 75.0 ± 8.5 µM for A549; and 132.6 ± 12.1 µM and 288.1 ± 16.3 µM for BEAS-2B. These results demonstrate that MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high-FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and efficiency of the drug.Keywords: cyclodextrins, cancer treatment, drug delivery, folate receptors, reduced folate carriers
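IC50 values such as those quoted are conventionally estimated by fitting a sigmoidal dose-response curve to MTT viability data. A minimal sketch of such a fit, using a four-parameter logistic model and hypothetical data points (not the study's raw measurements):

```python
# Illustrative sketch: estimating an IC50 from MTT viability data by fitting
# a four-parameter logistic dose-response curve. Concentrations and
# viability values below are hypothetical, not the study's raw data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: viability as a function of drug concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)      # uM
viability = np.array([0.98, 0.95, 0.85, 0.60, 0.35, 0.15])  # fraction of control

popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[0.1, 1.0, 50.0, 1.0], maxfev=10000)
print(f"Estimated IC50 ~ {popt[2]:.1f} uM")
```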
Procedia PDF Downloads 302591 Hybridization of Manually Extracted and Convolutional Features for Classification of Chest X-Ray of COVID-19
Authors: M. Bilal Ishfaq, Adnan N. Qureshi
Abstract:
COVID-19 is among the most infectious diseases of recent times. It was first reported in Wuhan, the capital city of Hubei province in China, and then spread rapidly throughout the world; on 11 March 2020, the World Health Organisation (WHO) declared it a pandemic. Since COVID-19 is highly contagious, it has affected approximately 219M people worldwide and caused 4.55M deaths, bringing the importance of accurate diagnosis of respiratory diseases such as pneumonia and COVID-19 to the forefront. In this paper, we propose a hybrid approach for the automated detection of COVID-19 using medical imaging, based on the hybridization of manually extracted and convolutional features. Our approach combines Haralick texture features and convolutional features extracted from chest X-rays and CT scans. We also employ a minimum redundancy maximum relevance (MRMR) feature selection algorithm to reduce computational complexity and enhance classification performance. The proposed model is evaluated on four publicly available datasets: Chest X-ray Pneumonia, COVID-19 Pneumonia, COVID-19 CTMaster, and VinBig. The results demonstrate high accuracy and effectiveness, with 0.9925 on the Chest X-ray Pneumonia dataset, 0.9895 on the COVID-19, Pneumonia and Normal Chest X-ray dataset, 0.9806 on the COVID-19 CTMaster dataset, and 0.9398 on the VinBig dataset. We further evaluate the effectiveness of the proposed model using ROC curves, where the AUC for the best-performing model reaches 0.96. The proposed model provides a promising tool for the early detection and accurate diagnosis of COVID-19, which can assist healthcare professionals in making informed treatment decisions and improving patient outcomes, and the system can be deployed in a clinical or research setting to assist in the diagnosis of COVID-19.Keywords: COVID-19, feature engineering, artificial neural networks, radiology images
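The MRMR criterion mentioned above greedily selects features that are maximally relevant to the class label while minimally redundant with features already chosen. A rough Python sketch of that greedy loop (an illustration of the general technique, not the paper's implementation):

```python
# Illustrative sketch of a greedy MRMR-style selection loop (not the paper's
# exact implementation): at each step, pick the feature whose relevance to
# the label, minus its mean redundancy with already-chosen features, is max.
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    n_features = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(n_features))
    while len(selected) < k and remaining:
        scores = []
        for j in remaining:
            if selected:
                # mean mutual information with already-selected features
                red = np.mean([mutual_info_regression(
                    X[:, [s]], X[:, j], random_state=0)[0] for s in selected])
            else:
                red = 0.0
            scores.append(relevance[j] - red)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical usage on a feature matrix X (rows: images, columns: Haralick
# plus convolutional features) and binary labels y:
# top_idx = mrmr_select(X, y, k=50)
```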
Procedia PDF Downloads 76590 Modeling of Bipolar Charge Transport through Nanocomposite Films for Energy Storage
Authors: Meng H. Lean, Wei-Ping L. Chu
Abstract:
The effects of ferroelectric nanofiller size, shape, loading, and polarization on bipolar charge injection, transport, and recombination through amorphous and semicrystalline polymers are studied. A 3D particle-in-cell model extends the classical electrical double layer representation to treat ferroelectric nanoparticles. Metal-polymer charge injection assumes Schottky emission and Fowler-Nordheim tunneling, migration through field-dependent Poole-Frenkel mobility, and recombination with Monte Carlo selection based on collision probability. A boundary integral equation method is used for the solution of the Poisson equation, coupled with a second-order predictor-corrector scheme for robust time integration of the equations of motion. The stability criterion of the explicit algorithm conforms to the Courant-Friedrichs-Lewy limit. Trajectories of charges that make it through the film are curvilinear paths that meander through the interspaces. Results indicate that charge transport behavior depends on nanoparticle polarization, with anti-parallel orientation showing the highest leakage conduction and lowest level of charge trapping in the interaction zone. The simulation prediction of a size range of 80 to 100 nm to minimize attachment and maximize conduction is validated by theory. Attached charge fractions go from 2.2% to 97% as nanofiller size is decreased from 150 nm to 60 nm. The computed conductivity of 0.4 × 10⁻¹⁴ S/cm is in agreement with published data for plastics. Charge attachment is increased with spheroids due to the increase in surface area, especially so for oblate spheroids, showing the influence of larger cross-sections. Charge attachment to nanofillers and nanocrystallites increases with vol.% loading or degree of crystallinity, and saturates at about 40 vol.%.Keywords: nanocomposites, nanofillers, electrical double layer, bipolar charge transport
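The Poole-Frenkel mobility named above makes carrier mobility grow with the applied field through field-assisted barrier lowering. A small sketch of the standard form of the law, with hypothetical material parameters (the paper's actual values are not given in the abstract):

```python
# Illustrative sketch of field-dependent Poole-Frenkel mobility:
# mu(E) = mu0 * exp(beta_PF * sqrt(E) / (k_B * T)), with
# beta_PF = sqrt(e^3 / (pi * eps)). All parameter values are hypothetical.
import math

E_CHARGE = 1.602176634e-19    # elementary charge, C
K_B      = 1.380649e-23       # Boltzmann constant, J/K
EPS0     = 8.8541878128e-12   # vacuum permittivity, F/m

def poole_frenkel_mobility(mu0, E, T, eps_r):
    """Mobility enhanced by barrier lowering in an applied field E (V/m)."""
    beta_pf = math.sqrt(E_CHARGE**3 / (math.pi * eps_r * EPS0))
    return mu0 * math.exp(beta_pf * math.sqrt(E) / (K_B * T))

mu0 = 1e-14  # zero-field mobility, m^2/(V*s) -- hypothetical
for field in (1e6, 1e7, 1e8):  # V/m
    mu = poole_frenkel_mobility(mu0, field, 300.0, 2.7)
    print(f"E = {field:.0e} V/m -> mu = {mu:.3e} m^2/(V*s)")
```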
Procedia PDF Downloads 355589 Globally Convergent Sequential Linear Programming for Multi-Material Topology Optimization Using Ordered Solid Isotropic Material with Penalization Interpolation
Authors: Darwin Castillo Huamaní, Francisco A. M. Gomes
Abstract:
The aim of multi-material topology optimization (MTO) is to obtain the optimal topology of structures composed of several materials, according to a given set of constraints and cost criteria. In this work, we seek the optimal distribution of materials in a domain such that the flexibility of the structure is minimized, under certain boundary conditions and the intervention of external forces. In the single-material case, each element of the discretized domain is represented by one of two values of a function: the value is 1 if the element belongs to the structure and 0 if the element is empty. A common way to avoid the high computational cost of solving integer-variable optimization problems is to adopt the Solid Isotropic Material with Penalization (SIMP) method. This method relies on a continuous power-law interpolation function whose base variable represents a pseudo-density at each point of the domain. For proper exponent values, the SIMP method reduces intermediate densities, since values other than 0 or 1 usually do not have a physical meaning for the problem. Several extensions of the SIMP method have been proposed for the multi-material case. The one we explore here is the ordered SIMP method, which has the advantage of not being based on the addition of variables to represent material selection, so the computational cost is independent of the number of materials considered. Although the number of variables is not increased by this algorithm, the optimization subproblems generated at each iteration cannot be solved by methods that rely on second derivatives, due to the cost of calculating those derivatives. To overcome this, we apply a globally convergent version of the sequential linear programming method, which solves a sequence of linear approximations of the optimization problem.Keywords: global convergence, multi-material design, ordered SIMP, sequential linear programming, topology optimization
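For orientation, the single-material SIMP interpolation has the standard power-law form E(ρ) = E_min + ρᵖ(E₀ − E_min), and the ordered SIMP idea extends this so that a single density variable selects among ordered materials. The sketch below illustrates both; the piecewise breakpoints and moduli are hypothetical, and the piecewise form is a simplified reading of the ordered SIMP scheme, not the authors' exact formulation:

```python
# Illustrative sketch of SIMP-style power-law interpolation (standard form,
# not the authors' code): Young's modulus as a function of pseudo-density.
# The ordered variant below maps one density variable onto several materials
# via power laws on successive intervals; breakpoints/moduli are hypothetical.
def simp_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """Single-material SIMP: penalizes intermediate densities for p > 1."""
    return Emin + (rho ** p) * (E0 - Emin)

def ordered_simp_modulus(rho, breakpoints=(0.0, 0.4, 0.7, 1.0),
                         moduli=(1e-9, 0.3, 0.6, 1.0), p=3.0):
    """Piecewise power-law interpolation over ordered material intervals."""
    for i in range(len(breakpoints) - 1):
        lo, hi = breakpoints[i], breakpoints[i + 1]
        if rho <= hi or i == len(breakpoints) - 2:
            t = (rho - lo) / (hi - lo)  # local coordinate in [0, 1]
            t = min(max(t, 0.0), 1.0)
            return moduli[i] + (t ** p) * (moduli[i + 1] - moduli[i])

for r in (0.0, 0.3, 0.5, 0.9, 1.0):
    print(r, round(simp_modulus(r), 4), round(ordered_simp_modulus(r), 4))
```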
Procedia PDF Downloads 315588 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid
Authors: Abdulla Rahil, Rupert Gammon, Neil Brown
Abstract:
The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to its instability. To solve this problem, energy storage devices could be used. Electrolytic hydrogen production from an electrolyser is considered a promising option since it is a clean energy source (zero emissions). Choosing flexible operation of an electrolyser (producing hydrogen during the off-peak electricity period and stopping at other times) could bring about many benefits, such as reducing the cost of hydrogen and helping to balance the electricity system. This paper investigates the price of hydrogen during flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price could enhance the reduction of the hydrogen cost: a standard single-tier tariff during the whole day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; off-peak electricity at a lower price (assumed 5 p/kWh) with the electrolyser shut down at other times; and lower-price electricity at off-peak times combined with higher-price electricity at other times. This study looks at the city of Derna, located on the coast of the Mediterranean Sea (32° 46′ 0 N, 22° 38′ 0 E) with high wind resource potential. Hourly wind speed data collected over 24½ years, from 1990 to 2014, were used, in addition to hourly radiation and hourly electricity demand data collected over a one-year period, together with the petrol station data.Keywords: hydrogen filling station, off-peak electricity, renewable energy, electrolytic hydrogen
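The economics of the three scenarios can be illustrated with a back-of-envelope calculation of the electricity component of the hydrogen cost. In the sketch below, the specific energy consumption (about 50 kWh per kg of H₂) and the off-peak window fraction are assumptions for illustration, not figures from the study:

```python
# Illustrative back-of-envelope sketch of the electricity component of the
# hydrogen cost under the three tariff scenarios described above. The
# specific energy consumption (~50 kWh per kg H2) and the off-peak operating
# fraction are assumptions, not figures from the study.
KWH_PER_KG_H2 = 50.0  # typical electrolyser specific consumption (assumed)

def electricity_cost_per_kg(price_peak_p, price_offpeak_p, offpeak_fraction):
    """Mean electricity cost (pence/kg H2) for a given operating schedule."""
    mean_price = (offpeak_fraction * price_offpeak_p
                  + (1.0 - offpeak_fraction) * price_peak_p)
    return mean_price * KWH_PER_KG_H2

scenarios = {
    "continuous, flat 12 p/kWh":       electricity_cost_per_kg(12, 12, 0.0),
    "off-peak only, 5 p/kWh":          electricity_cost_per_kg(12, 5, 1.0),
    "mixed: 5 p off-peak / 12 p peak": electricity_cost_per_kg(12, 5, 0.4),
}
for name, cost in scenarios.items():
    print(f"{name}: {cost / 100:.2f} GBP per kg H2")
```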
Procedia PDF Downloads 232587 The Usage of Negative Emotive Words in Twitter
Authors: Martina Katalin Szabó, István Üveges
Abstract:
In this paper, the usage of negative emotive words is examined on the basis of a large Hungarian Twitter database via NLP methods. The data are analysed from a gender point of view, as well as for changes in language usage over time. The term negative emotive word refers to those words that, on their own, without context, have semantic content that can be associated with negative emotion, but that in particular cases may function as intensifiers (e.g. rohadt jó ’damn good’) or as sentiment expressions with positive polarity despite their negative prior polarity (e.g. brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). Based on the findings of several authors, the same phenomenon can be found in other languages, so it is probably a language-independent feature. For the present analysis, 86162 tweets were collected: 37818 tweets (19580 written by females and 18238 by males) from 2016 and 48344 (18379 written by females and 29965 by males) from 2021. The goal of the research was to build two datasets comparable from the viewpoint of semantic change, as well as of gender specificities. An exhaustive lexicon of Hungarian negative emotive intensifiers was also compiled (containing 214 words). After basic preprocessing steps, tweets were processed by ‘magyarlanc’, a toolkit written in Java for the linguistic processing of Hungarian texts. Then, the frequency and collocation features of all these words in our corpus were automatically analyzed (via the analysis of parts of speech and sentiment values of the co-occurring words). Finally, the results for all four subcorpora were compared. Some of the main outcomes of our analyses are as follows. The male corpus contains almost four times fewer cases than the female corpus in which a negative emotive intensifier modifies a negative polarity word in the tweet (e.g., damn bad). At the same time, male authors used these intensifiers more frequently to modify a positive polarity or a neutral word (e.g., damn good and damn big). The results also pointed out that, in contrast to female authors, male authors used these words much more frequently as positive polarity words themselves (e.g., brutális, ahogy ez a férfi rajzol ’it’s awesome (lit. brutal) how this guy draws’). We also observed that male authors use significantly fewer types of emotive intensifiers than female authors, and the frequency proportions of the words are more balanced in the female corpus. As for changes in language usage over time, some notable differences in the frequency and collocation features of the examined words were identified: some of the words collocate with more positive words in the second subcorpus than in the first, which points to the semantic change of these words over time.Keywords: gender differences, negative emotive words, semantic changes over time, twitter
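The collocation analysis described here boils down to counting, for each intensifier occurrence, the prior polarity of the word it modifies. A toy sketch of that tally (hypothetical mini-lexicons and pre-tokenized input, standing in for the magyarlanc pipeline):

```python
# Illustrative sketch (hypothetical lexicons and tokenized input, not the
# study's magyarlanc pipeline): tallying how often a negative emotive
# intensifier modifies a positive vs. negative prior-polarity word.
from collections import Counter

INTENSIFIERS = {"rohadt", "brutalis", "durva"}           # toy lexicon
POLARITY = {"jo": "pos", "szep": "pos", "rossz": "neg"}  # toy prior polarities

def collocation_profile(tokenized_tweets):
    """Counts intensifier -> following-word prior-polarity pairs."""
    counts = Counter()
    for tokens in tokenized_tweets:
        for i, tok in enumerate(tokens[:-1]):
            if tok in INTENSIFIERS:
                counts[POLARITY.get(tokens[i + 1], "neutral")] += 1
    return counts

tweets = [["ez", "rohadt", "jo"], ["brutalis", "rossz", "nap"],
          ["durva", "szep", "kep"]]
print(collocation_profile(tweets))  # e.g. Counter({'pos': 2, 'neg': 1})
```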
Procedia PDF Downloads 206586 Operation Cycle Model of ASz62IR Radial Aircraft Engine
Authors: M. Duk, L. Grabowski, P. Magryta
Abstract:
A very important issue relating to today's air transport is environmental impact. Currently there are no emissions standards for the turbine and piston engines used in air transport; nevertheless, the environmental effect of exhaust gases from aircraft engines should be as small as possible. For this purpose, R&D centers often use special software to simulate and estimate the negative effects of the engine working process. Within the cooperation between the Lublin University of Technology and the Polish aviation company WSK "PZL-KALISZ" S.A., aimed at more effective operation of the ASz62IR engine, one such tool has been used. The AVL Boost software allows 1D simulations of the combustion process of piston engines to be performed. ASz62IR is a nine-cylinder aircraft engine in a radial configuration. In order to analyze the impact of its working process on the environment, a mathematical model was built in the AVL Boost software. This model contains, among others, a model of the operation cycle of the cylinders, based on the volume change in the combustion chamber following the reciprocating movement of a piston. The simplifying assumption that all pistons move identically was made. The changes in cylinder volume during an operating cycle were specified; these changes are essential for determining the energy balance of a cylinder in an internal combustion engine, which is fundamental for a model of the operating cycle. The calculations of the cylinder thermodynamic state were based on the first law of thermodynamics. The change of mass in the cylinder was calculated from the sum of inflowing and outflowing masses, taking into account, among others: cylinder internal energy, heat from the fuel, heat losses, mass in the cylinder, cylinder pressure and volume, blowdown enthalpy, and evaporation heat. In the model, the amount of heat released in the combustion process was calculated from the pace of combustion using the Vibe model. For gas exchange, it was also important to consider heat transfer in the inlet and outlet channels, where values are much higher than for flow in a straight pipe; this results from the high values of the heat exchange coefficients and temperature coefficients near valves and valve seats. A modified Zapf model of heat exchange was therefore used. To use the model with flight scenarios, the impact of flight altitude on engine performance was analyzed, assuming that the pressure and temperature at the inlet and outlet correspond to the values resulting from the model of the International Standard Atmosphere (ISA). Combining this operation cycle model with the other submodels of the ASz62IR engine allows a full analysis of engine performance under ISA conditions. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, underKeywords: aviation propulsion, AVL Boost, engine model, operation cycle, aircraft engine
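The Vibe (Wiebe) function mentioned above is the standard way to prescribe the mass fraction burned as a function of crank angle. A short sketch with typical textbook parameters (not calibrated ASz62IR data):

```python
# Illustrative sketch of the Vibe (Wiebe) function used to model the pace of
# heat release: mass fraction burned x_b = 1 - exp(-a * ((theta - theta0) /
# dtheta)^(m+1)). Parameter values are typical textbook assumptions, not
# calibrated ASz62IR data.
import math

def vibe_mass_fraction_burned(theta, theta0, dtheta, a=6.9, m=2.0):
    """Mass fraction burned at crank angle theta (degrees)."""
    if theta < theta0:
        return 0.0
    x = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1))

# Combustion assumed to start at -10 deg ATDC and last 60 deg (hypothetical).
for theta in range(-10, 60, 10):
    xb = vibe_mass_fraction_burned(theta, theta0=-10.0, dtheta=60.0)
    print(f"theta = {theta:4d} deg -> x_b = {xb:.3f}")
```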
Procedia PDF Downloads 292