721 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR Data
Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell
Abstract:
Hedgerows play an important role in a wide range of ecological functions, including habitat provision, landscape structure, agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection from satellite imagery is a challenging remote sensing problem: spatially, a hedge resembles linear objects such as roads, while spectrally it is very similar to forest. Recent very high spatial resolution (VHR) sensors make automatic hedge detection feasible by acquiring images with sufficient spectral and spatial resolution. Indeed, VHR remote sensing data now allow hedgerows to be detected as linear features, but characterizing them at the landscape scale remains difficult. This research uses TerraSAR-X Spotlight and Staring mode data at 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015 over a test site at Fermoy, Ireland, to detect hedgerows. Both channels of the dual-polarized (HH/VV) Spotlight data are used. A range of SAR image analysis techniques, combined by trial and error with classification algorithms such as texture analysis, support vector machines, k-means, and random forests, is applied to detect hedgerows and characterize them. Shannon entropy (ShE) and single- and double-bounce backscattering analysis from polarimetric decomposition are used in an object-oriented classification to extract the hedgerow network. The work is still in progress, and further methods must be evaluated to identify the best approach for the study area; the preliminary results presented here indicate that polarimetric TSX imagery can potentially detect hedgerows.
Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis
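The Shannon entropy mentioned above is computed from the eigenvalues of the polarimetric covariance matrix. A minimal sketch for the dual-pol (HH/VV) case, with illustrative matrix entries rather than the paper's data:

```python
import math

def dualpol_entropy(c11, c22, c12):
    """Polarimetric entropy H = -sum(p_i * log2(p_i)) from the eigenvalues
    of the 2x2 Hermitian covariance matrix [[c11, c12], [conj(c12), c22]]
    built from the HH/VV channels. H is 0 for a single scattering
    mechanism and 1 for fully random scattering (dual-pol case)."""
    # Eigenvalues of a 2x2 Hermitian matrix via the quadratic formula.
    tr = c11 + c22
    det = c11 * c22 - abs(c12) ** 2
    disc = math.sqrt(max(tr * tr - 4.0 * det, 0.0))
    lam = [(tr + disc) / 2.0, (tr - disc) / 2.0]
    total = sum(lam)
    # Pseudo-probabilities of the two scattering mechanisms.
    p = [l / total for l in lam]
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

print(dualpol_entropy(1.0, 1.0, 0.0))  # equal eigenvalues: entropy 1.0
print(dualpol_entropy(1.0, 0.0, 0.0))  # single mechanism: entropy 0.0
```

Vegetated targets such as hedges tend toward higher entropy than smooth surfaces, which is what makes ShE a useful feature for the classification step.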
Procedia PDF Downloads 230
720 Comprehensive Multilevel Practical Condition Monitoring Guidelines for Power Cables in Industries: Case Study of Mobarakeh Steel Company in Iran
Authors: S. Mani, M. Kafil, E. Asadi
Abstract:
Condition Monitoring (CM) of electrical equipment has gained remarkable importance in recent years, owing to huge production losses, substantial imposed costs, and increased vulnerability, risk, and uncertainty. Power cables feed numerous electrical equipment such as transformers, motors, and electric furnaces; their condition assessment is therefore of great importance. This paper investigates electrical, structural, and environmental failure sources, all of which influence cable performance and limit uptime, and provides a comprehensive framework of practical CM guidelines for cable maintenance in industry. The multilevel CM framework presented in this study covers performance-indicative features of power cables, with a focus on both online and offline diagnosis and test scenarios, and addresses short-term and long-term threats to the operation and longevity of power cables. After concisely reviewing the concept of CM, the study thoroughly investigates five major areas: power quality; insulation quality features (partial discharges, tan delta, and voltage withstand capability); sheath faults; shield currents; and the environmental features of temperature and humidity. Interconnections and mutual impacts between these areas are elaborated using mathematical formulation and practical guidelines. Detection, location, and severity identification methods for every threat or fault source are also described. Finally, the comprehensive, practical guidelines are applied to the specific case of Electric Arc Furnace (EAF) feeder MV power cables at Mobarakeh Steel Company (MSC) in Iran, the largest steel company in the MENA region.
The specific technical and industrial characteristics and limitations of a harsh industrial environment such as the MSC EAF feeder cable tunnels are imposed on the presented framework, making the suggested package more practical and tangible.
Keywords: condition monitoring, diagnostics, insulation, maintenance, partial discharge, power cables, power quality
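The tan delta (dissipation factor) feature above measures how far the insulation current's phase departs from the ideal 90-degree lead over the voltage. A minimal sketch of extracting it from digitized waveforms with a single-bin DFT, using synthetic signals with a known loss angle rather than field measurements:

```python
import math

def phase_at(signal, freq, fs):
    """Phase (rad) of `signal` at `freq` via a single-bin DFT
    (correlation with a complex exponential)."""
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    im = -sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    return math.atan2(im, re)

fs, f = 10000.0, 50.0
t = [k / fs for k in range(int(fs / f) * 20)]     # 20 whole cycles
delta = 0.01                                      # injected loss angle (rad)
v = [math.sin(2 * math.pi * f * tk) for tk in t]
# For a lossy capacitor the current leads the voltage by (90 deg - delta).
i = [math.sin(2 * math.pi * f * tk + math.pi / 2 - delta) for tk in t]

phi = phase_at(i, f, fs) - phase_at(v, f, fs)     # measured lead angle
tan_delta = math.tan(math.pi / 2 - phi)
print(round(tan_delta, 4))  # recovers the injected tan(delta), about 0.01
```

A rising tan delta trend over successive offline tests is one of the degradation indicators the framework tracks.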
Procedia PDF Downloads 228
719 Occurrence and Levels of Mycotoxins in On-Farm Stored Sesame in Major-Growing Districts of Ethiopia
Authors: S. Alemayehu, F. A. Abera, K. M. Ayimut, R. Mahroof, J. Harvey, B. Subramanyam
Abstract:
The occurrence of mycotoxins in sesame seeds poses a significant threat to food safety and the economy in Ethiopia. This study aimed to determine the levels and occurrence of mycotoxins in on-farm stored sesame seeds in major-growing districts of Ethiopia. A total of 470 sesame seed samples were collected from randomly selected farmers' storage structures in five major-growing districts using purposive sampling. An enzyme-linked immunosorbent assay (ELISA) was used to analyze the samples for four mycotoxins: total aflatoxins (AFT), ochratoxin A (OTA), total fumonisins (FUM), and deoxynivalenol (DON). All samples contained varying levels of mycotoxins, with AFT and DON being the most prevalent. AFT concentrations in positive samples ranged from 2.5 to 27.8 parts per billion (ppb), with a mean of 13.8 ppb. OTA levels ranged from 5.0 to 9.7 ppb, with a mean of 7.1 ppb. Total fumonisin concentrations ranged from 300 to 1300 ppb, with a mean of 800 ppb, and DON concentrations ranged from 560 to 700 ppb. The majority (96.8%) of the samples were below the Food and Drug Administration maximum limits for mean AFT, FUM, and DON levels. Co-occurrence of the mycotoxin pairs AFT-OTA, DON-OTA, AFT-FUM, FUM-DON, and FUM-OTA was observed at rates of 44.0, 38.3, 33.8, 30.2, 29.8, and 26.0%. On average, 37.2% of the sesame samples had fungal infection, and seed germination rates ranged from 66.8% to 91.1%. The Limmu district had higher levels of total aflatoxins and kernel infection, and lower germination rates, than other districts. The Wollega variety of sesame had higher kernel infection and total aflatoxin concentrations, and lower germination rates, than other varieties. Grain age had a statistically significant (p < 0.05) effect on both kernel infection and germination.
The storage methods used for sesame in major-growing districts of Ethiopia favor mycotoxin-producing fungi. As the mycotoxin levels in sesame are of public health significance, stakeholders should work together to identify secure and suitable storage technologies that maintain the quantity and quality of sesame at the smallholder level and reduce the risk of mycotoxin contamination.
Keywords: districts, seed germination, kernel infection, moisture content, relative humidity, temperature
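The pairwise co-occurrence rates reported above are simply the percentage of samples in which both toxins of a pair are detected. A small sketch of that tabulation on hypothetical detection flags (not the study's 470-sample data set):

```python
from itertools import combinations

# Illustrative detection flags per sample (True = toxin above detection
# limit); hypothetical data, not the study's measurements.
samples = [
    {"AFT": True,  "OTA": True,  "FUM": False, "DON": True},
    {"AFT": True,  "OTA": False, "FUM": True,  "DON": True},
    {"AFT": False, "OTA": True,  "FUM": True,  "DON": False},
    {"AFT": True,  "OTA": True,  "FUM": True,  "DON": True},
]

def cooccurrence_rates(samples):
    """Percentage of samples in which both toxins of a pair were detected."""
    toxins = sorted(samples[0])
    n = len(samples)
    return {
        f"{a}-{b}": 100.0 * sum(s[a] and s[b] for s in samples) / n
        for a, b in combinations(toxins, 2)
    }

rates = cooccurrence_rates(samples)
print(rates["AFT-OTA"])  # 50.0: AFT and OTA co-detected in 2 of 4 samples
```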
Procedia PDF Downloads 133
718 Hot Carrier Photocurrent as a Candidate for an Intrinsic Loss in a Single Junction Solar Cell
Authors: Jonas Gradauskas, Oleksandr Masalskyi, Ihor Zharchenko
Abstract:
Progress in pushing the efficiency of conventional solar cells toward the Shockley-Queisser limit appears to be slowing down or approaching saturation. The losses that keep this efficiency gap open can be categorized as extrinsic or intrinsic, with the former being theoretically avoidable. Among the five intrinsic losses, the below-Eg loss (the non-absorption of photons with energy below the semiconductor bandgap) and the thermalization loss together account for approximately 55% of the lost fraction of solar radiation at the bandgap values of silicon and gallium arsenide. Narrowing the gap between theoretically predicted and experimentally achieved efficiencies therefore requires the integration of new physical concepts, and hot carriers (HC) offer a contemporary approach. The significance of hot carriers in photovoltaics is not fully understood: their excess energy is usually assumed to affect a cell's performance only indirectly, through thermalization loss, in which the excess energy heats the lattice. Yet evidence indicates that hot carriers are present in solar cells and, despite their exceptionally short lifetimes, have tangible effects. This study presents direct experimental evidence of hot carrier effects induced by both below- and above-bandgap radiation in a single-junction solar cell. The photocurrent flowing across silicon and GaAs p-n junctions is analyzed. The photoresponse consists, on the whole, of three components, caused by electron-hole pair generation, hot carriers, and lattice heating; the latter two counteract the conventional generation current required for successful solar cell operation. A model based on the temperature coefficient of the voltage change of the current-voltage characteristic is used to obtain the hot carrier temperature.
The distribution of cold and hot carriers is analyzed with respect to the potential barrier height of the p-n junction. These findings contribute to a better understanding of hot carrier phenomena in photovoltaic devices and may prompt a reevaluation of intrinsic losses in solar cells.
Keywords: solar cell, hot carriers, intrinsic losses, efficiency, photocurrent
Procedia PDF Downloads 65
717 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way to provide continuous positioning services outdoors is a global navigation satellite system (GNSS), but due to non-line-of-sight propagation, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. It also employs a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-survey workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to the numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
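For context, the classical fingerprinting baseline that radio-map methods like the above improve on is weighted k-nearest neighbors (WkNN): match a measured received-signal-strength (RSS) vector against a surveyed radio map and average the closest reference positions. A toy sketch with an invented four-point map, not the paper's WLAN/LTE data:

```python
import math

# Toy radio map: reference point (x, y) in metres -> RSS vector from
# three transmitters (dBm). Values are illustrative.
radio_map = {
    (0.0, 0.0): [-40, -70, -80],
    (5.0, 0.0): [-70, -40, -80],
    (0.0, 5.0): [-70, -80, -40],
    (5.0, 5.0): [-80, -70, -55],
}

def wknn_locate(rss, k=2):
    """Estimate position as the inverse-distance-weighted mean of the
    k fingerprints closest to `rss` in signal space."""
    ranked = sorted(radio_map.items(),
                    key=lambda kv: math.dist(kv[1], rss))[:k]
    weights = [1.0 / (math.dist(fp, rss) + 1e-9) for _, fp in ranked]
    wsum = sum(weights)
    x = sum(w * pos[0] for w, (pos, _) in zip(weights, ranked)) / wsum
    y = sum(w * pos[1] for w, (pos, _) in zip(weights, ranked)) / wsum
    return x, y

# A measurement taken near (0, 0) resolves close to that reference point.
print(wknn_locate([-42, -69, -79]))
```

The GAN-based radio map construction replaces the dense manual survey that this baseline presumes, which is where the reported 78.5% workload reduction comes from.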
Procedia PDF Downloads 60
716 Valorization, Conservation and Sustainable Production of Medicinal Plants in Morocco
Authors: Elachouri Mostafa, Fakchich Jamila, Lazaar Jamila, Elmadmad Mohammed, Marhom Mostafa
Abstract:
Although scientific information about medicinal plants has grown greatly in recent decades, this growth has in many ways proved poor compensation, because such information is accessible in practice to only a very few people, and rather little of it is relevant to the problems of management and utilization encountered in the field. Active compounds are used in most traditional medicines and play an important role in advancing sustainable rural livelihoods through their conservation, cultivation, propagation, marketing, and commercialization. Medicinal herbs are great resources of pharmaceutical compounds, and urgent measures are required to protect these plant species from destruction and disappearance. Indeed, there is a real danger of indigenous Arab medicinal practices and knowledge disappearing altogether, further weakening traditional Arab culture and creating more insecurity, as well as forsaking a resource of inestimable economic and health care importance. As a scientific approach, ethnopharmacological investigation remains the principal way to evaluate and improve the odds of finding biologically active compounds derived from medicinal plants. As a developing country in the Mediterranean basin, Morocco is endowed with rich resources of medicinal and aromatic plants, which have been used for human welfare over the millennia and are still used today. Morocco also has large plant biodiversity: its medicinal flora accounts for more than 4,200 species growing in bioclimatic zones ranging from subhumid to arid and Saharan. Nevertheless, human and animal pressure resulting from the growing needs of the rural population has degraded this patrimony. In this paper, we focus on ethnopharmacological studies carried out in Morocco.
The goal of this work is to clarify the importance of herbs as a platform for drug discovery and development, to highlight the value of the ethnopharmacological approach to discovering natural products for health care, and to discuss the limits of ethnopharmacological investigation for drug discovery in Morocco.
Keywords: Morocco, medicinal plants, ethnopharmacology, natural products, drug-discovery
Procedia PDF Downloads 316
715 Multimodality in Storefront Windows: The Impact of Verbo-Visual Design on Consumer Behavior
Authors: Angela Bargenda, Erhard Lick, Dhoha Trabelsi
Abstract:
Research in retailing has identified atmospherics as an essential element in enhancing store image, store patronage intentions, and the overall shopping experience in a retail environment. However, within atmospherics, store window design, an essential component of external store atmospherics, remains vastly underrepresented in extant scholarship. This paper seeks to fill this gap by exploring the relevance of store window design as an atmospheric tool. In particular, empirical evidence of theme-based theatrical storefront windows, which emphasize verbo-visual design elements, was found in Paris and New York. The purpose of this study was to identify to what extent such multimodal window designs of high-end department stores in metropolitan cities affect store entry decisions and attitudes toward the retailer's image. As theoretical constructs, the linguistic concept of multimodality and Mehrabian and Russell's model from environmental psychology were applied. To answer the research question, two studies were conducted. Study 1 used a case study approach to define three types of store window design based on different visual-verbal relations, each representing a different level of cognitive elaboration required for the decoding process. Study 2 consisted of an online survey of more than 300 respondents examining the influence of these three types of store window design on the consumer behavioral variables mentioned above. The results show that the higher the cognitive elaboration needed to decode the message of the store window, the lower the store entry propensity; in contrast, the higher the cognitive elaboration, the higher the perceived image of the retailer.
One important conclusion is that in order to increase consumers' propensity to enter stores with theme-based theatrical storefront windows, retailers need to limit the cognitive elaboration required to decode their verbo-visual window design.
Keywords: consumer behavior, multimodality, store atmospherics, store window design
Procedia PDF Downloads 202
714 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished with an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments where neither GNSS nor any other a priori information about the environment is available, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises from employing a depth camera, which can capture point clouds of the surrounding floors and walls. Attitude extracted from these surfaces can serve as an accurate aiding source that directly combats the errors arising from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial performance expectations via simulation, and a hardware implementation verifying the approach. The hardware implementation uses the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Microsoft Kinect™ camera.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
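The fusion idea above can be sketched in one dimension: gyro integration propagates the attitude and drifts with the gyro bias, while an occasional absolute attitude fix (here standing in for the plane-fit attitude from the depth camera's point cloud) corrects it. This is a scalar toy with invented noise figures, not the authors' full filter:

```python
import random

# Scalar Kalman filter: state = roll angle (deg), propagated by a biased,
# noisy gyro and corrected by periodic depth-camera attitude fixes.
rng = random.Random(42)
true_roll, gyro_bias, dt = 30.0, 0.5, 0.1   # deg, deg/s, s (illustrative)

x, p = 0.0, 25.0          # state estimate and its variance
q, r = 0.1, 1.0           # process / measurement noise variances (assumed)

for step in range(200):
    # Propagate with the gyro: the true rate is zero, so the bias alone
    # drags an unaided estimate away from the truth.
    rate = 0.0 + gyro_bias + rng.gauss(0, 0.05)
    x += rate * dt
    p += q
    if step % 10 == 9:    # depth-camera attitude fix every 10 steps
        z = true_roll + rng.gauss(0, 1.0)
        kgain = p / (p + r)
        x += kgain * (z - x)
        p *= 1 - kgain

# Gyro-only integration would end near 10 deg (pure bias drift from 0);
# the aided estimate stays near the true 30 deg.
print(round(x, 1))
```

The same correction mechanism is what "directly combats" gyroscope imperfections in the full three-axis filter.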
Procedia PDF Downloads 207
713 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises, and Master Data Management (MDM) plays a central role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle, and combines quantitative and qualitative analysis. By integrating AI into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis measures the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI's predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis examines the specific AI-driven strategies that enhance MDM practice, such as automated data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing reduced inconsistencies by 25% and how AI-powered enrichment improved data relevance by 24%, boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making.
These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
Keywords: artificial intelligence, master data management, data governance, data quality
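The duplicate-record reduction cited above rests on fuzzy matching of near-identical master records. A minimal sketch of the idea using normalized text similarity; the record schema and threshold are illustrative, not the paper's system:

```python
from difflib import SequenceMatcher

# Hypothetical master-data records; field names are illustrative.
records = [
    {"id": 1, "name": "Acme Corporation", "city": "New York"},
    {"id": 2, "name": "ACME Corp.",       "city": "New York"},
    {"id": 3, "name": "Globex Ltd",       "city": "Springfield"},
]

def normalize(rec):
    """Fold case and strip punctuation so formatting noise doesn't count."""
    return f"{rec['name']} {rec['city']}".lower().replace(".", "")

def likely_duplicates(records, threshold=0.75):
    """Pairs of record ids whose normalized similarity exceeds threshold."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            score = SequenceMatcher(None, normalize(records[i]),
                                    normalize(records[j])).ratio()
            if score >= threshold:
                pairs.append((records[i]["id"], records[j]["id"],
                              round(score, 2)))
    return pairs

print(likely_duplicates(records))  # flags records 1 and 2 as probable duplicates
```

Production MDM systems replace the quadratic scan with blocking and learned matchers, but the match-score-above-threshold decision is the same.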
Procedia PDF Downloads 18
712 The Relationships between Energy Consumption, Carbon Dioxide (CO2) Emissions, and GDP for Egypt: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper aims to fill the gap of a comprehensive country-level case study using modern econometric techniques. This country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions, and gross domestic product (GDP) for Egypt using time series analysis for the period 1980-2010. To investigate the relationships between the variables, the paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. The long-run equilibrium in the VECM suggests negative impacts of CO2 emissions and of coal and natural gas use on GDP. Conversely, a significant positive long-run causality runs from electricity consumption to GDP in Egypt during the period. In the short run, positive unidirectional causalities run from coal consumption to GDP, to CO2 emissions, and to natural gas use. Further, GDP and electricity use are positively influenced by the consumption of petroleum products and the direct combustion of crude oil. Overall, the results support the argument that environmental quality, energy use, and economic output are related in both the short and long term, although the effects may differ by energy source, as in the case of Egypt for 1980-2010.
Keywords: CO2 emissions, Egypt, energy consumption, GDP, time series analysis
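The ADF test used above is, at its core, an OLS regression of the first difference on the lagged level: a slope near zero indicates a unit root (non-stationarity), a clearly negative slope indicates mean reversion. A stripped-down sketch of that regression on synthetic series, without the augmentation lags, deterministic trend terms, or critical-value tables of the full test:

```python
import random

def df_slope(y):
    """OLS slope of dy_t on y_{t-1} (intercept included via demeaning).
    Near zero suggests a unit root; clearly negative, mean reversion."""
    x = y[:-1]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    n = len(x)
    mx, md = sum(x) / n, sum(dy) / n
    cov = sum((xi - di_dev) * 0 for xi, di_dev in [])  # (placeholder removed)
    cov = sum((xi - mx) * (di - md) for xi, di in zip(x, dy))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

rng = random.Random(0)
stationary, walk = [0.0], [0.0]
for _ in range(500):
    stationary.append(0.3 * stationary[-1] + rng.gauss(0, 1))  # AR(1), phi = 0.3
    walk.append(walk[-1] + rng.gauss(0, 1))                    # random walk

print(df_slope(stationary))  # clearly negative (true value phi - 1 = -0.7)
print(df_slope(walk))        # near zero: a unit root, as GDP levels often have
```

In practice one would use a packaged implementation (e.g. statsmodels' `adfuller`), which adds the lag selection and Dickey-Fuller critical values this sketch omits.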
Procedia PDF Downloads 615
711 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals
Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar
Abstract:
Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies from alterations in the ECG waveform, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged, which further complicates visual diagnosis and can delay disease detection. In this context, deep learning methods have risen as a promising strategy for extracting relevant features and eliminating individual subjectivity in ECG analysis. They facilitate the computation of large data sets and can provide early and precise diagnoses; cardiology is therefore one of the fields that can most benefit from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained with a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is then evaluated on ECG signals of different origins and characteristics to test the model's ability to generalize. Performance of the model for R-peak detection in clean and noisy ECGs is presented; the model detects R-peaks in the presence of various types of noise and in data it has not been trained on.
It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences from the normal cardiac activity of their patients.
Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks
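To make the task concrete: the network above learns to mark R-peak locations, the job that classical detectors do with hand-tuned thresholding and a refractory period. A deliberately simple picker of that classical kind, run on a synthetic spike train (not real ECG, and not the paper's model):

```python
import math

fs = 250  # sampling rate, Hz
# Synthetic "ECG": slow baseline wander plus a sharp spike every second.
sig = [0.2 * math.sin(2 * math.pi * 0.3 * n / fs) for n in range(5 * fs)]
for beat in range(5):
    sig[beat * fs + 100] += 1.0   # R peak at 60 bpm

def detect_r_peaks(sig, fs, thresh=0.6, refractory=0.25):
    """Indices of local maxima above `thresh`, at least `refractory` s apart.
    The refractory period mimics the heart's minimum beat spacing."""
    peaks, last = [], -10 ** 9
    for n in range(1, len(sig) - 1):
        if (sig[n] > thresh and sig[n] >= sig[n - 1] and sig[n] > sig[n + 1]
                and n - last >= refractory * fs):
            peaks.append(n)
            last = n
    return peaks

print(detect_r_peaks(sig, fs))  # [100, 350, 600, 850, 1100]
```

Hand-tuned rules like these degrade under noise and morphology changes, which is precisely the gap the learned detector is meant to close.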
Procedia PDF Downloads 186
710 The Effect of Artificial Intelligence on Electric Machines and Welding
Authors: Mina Malak Zakaria Henin
Abstract:
The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, consisting of alternating and rotating fields. The rotating fields are generated in configurations ranging between circular and elliptical, with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns reveal different magnetic losses in the samples under test. Therefore, electric machines require special attention during the core loss calculation process to take the flux patterns into account. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to alternating fields, circular fields, and elliptical fields with axis ratios of 0.2, 0.4, 0.6, and 0.8. The measured data were applied to a 6-4 switched reluctance motor at three frequencies of interest to the industry: 60 Hz, 400 Hz, and 1 kHz. The results reveal the high margin of error that can occur during loss calculations if the flux pattern issue is neglected. The error in different parts of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on optimizing the machine's geometrical shape, which has a major effect on the flux pattern, in order to minimize the magnetic losses in machine cores.
Keywords: converters, electric machines, MEA (more electric aircraft), PES (power electronics systems), synchronous machine, vector control, multi-machine/multi-inverter, matrix inverter, railway traction, alternating core losses, finite element analysis, rotational core losses
Procedia PDF Downloads 28
709 Studies on the Effect of Dehydration Techniques, Treatments, Packaging Material and Methods on the Quality of Buffalo Meat during Ambient Temperature Storage
Authors: Tariq Ahmad Safapuri, Saghir Ahmad, Farhana Allai
Abstract:
The present study evaluated the effect of dehydration techniques (polyhouse and tray drying), different treatments (SHMP, SHMP + salt, salt + turmeric), different packaging materials (HDPE, combination film), and different packaging methods (air, vacuum, CO2 flush) on the quality of dehydrated buffalo meat during ambient temperature storage. The quality parameters included physico-chemical characteristics, i.e., pH, rehydration ratio, and moisture content, and the microbiological characteristic of total plate count. Treatment with SHMP, SHMP + salt, or salt + turmeric increased the pH. The moisture content of the dehydrated meat samples was between 5.54% and 7.20%. The rehydration ratio was highest for the salt + turmeric treated sample and lowest for the control sample. The bacterial count (log TPC/g) of the salt + turmeric treated, tray-dried sample was lowest, i.e., 1.80. During ambient temperature storage, there was no considerable change in the pH of the dehydrated samples up to 150 days; however, the moisture content of the samples increased to different degrees in the different packaging systems. The highest moisture rise was found for the control sample packed in HDPE with air, while the lowest increase was reported for the SHMP + salt treated sample vacuum packed in combination film. The rehydration ratio was considerably affected for the polyhouse-dried samples packed in HDPE with air after 150 days of ambient storage, while there was very little change for meat samples packed in combination film with CO2 flush. The TPC remained under the safe limit even after 150 days of storage, and the microbial count was lowest for the salt + turmeric treated samples after 150 days.
Keywords: ambient temperature, dehydration technique, rehydration ratio, SHMP (sodium hexametaphosphate), HDPE (high density polyethylene)
Procedia PDF Downloads 418
708 Optimization of Biomass Components from Rice Husk Treated with Trichophyton Soudanense and Trichophyton Mentagrophyte and Effect of Yeast on the Bio-Ethanol Yield
Authors: Chukwuma S. Ezeonu, Ikechukwu N. E. Onwurah, Uchechukwu U. Nwodo, Chibuike S. Ubani, Chigozie M. Ejikeme
Abstract:
Trichophyton soudanense and Trichophyton mentagrophyte were isolated from a rice mill environment, cultured, and used singly and as a di-culture to treat measured quantities of preheated rice husk. Under the optimized conditions studied, carboxymethylcellulase (CMCellulase) activity of the crude enzymes from Trichophyton mentagrophyte heat-pretreated rice husk peaked at 57.61 µg/ml/min at 50°C and 80°C. A duration of 120 hours (5 days) gave the highest CMCellulase activity, 75.84 µg/ml/min, for the crude enzyme of Trichophyton mentagrophyte heat-pretreated rice husk, whereas 96 hours (4 days) gave the maximum activity of 58.21 µg/ml/min for the crude enzyme of Trichophyton soudanense heat-pretreated rice husk. The highest CMCellulase activities at pH 5 were 67.02 µg/ml/min and 69.02 µg/ml/min for the crude enzymes of monocultures of Trichophyton soudanense (TS) and Trichophyton mentagrophyte (TM) heat-pretreated rice husk, respectively. Biomass analysis showed that rice husk cooled after heating and then treated with Trichophyton mentagrophyte gave the highest cellulose yield, 44.50 ± 10.90 (% ± SEM). The maximum total lignin value, 28.90 ± 1.80 (% ± SEM), was obtained from preheated rice husk treated with the di-culture of Trichophyton soudanense and Trichophyton mentagrophyte (TS+TM). The highest hemicellulose content, 30.50 ± 2.12 (% ± SEM), came from preheated rice husk treated with Trichophyton soudanense (TS), while the highest carbohydrate content, 16.79 ± 9.14 (% ± SEM), and reducing and non-reducing sugar values of 2.66 ± 0.45 and 14.13 ± 8.69 (% ± SEM) were all obtained from preheated rice husk treated with Trichophyton mentagrophyte (TM). All values listed above are the highest obtained for each treatment.
The preheated rice husk treated with Trichophyton mentagrophyte (TM) and fermented with palm wine yeast gave the highest bio-ethanol yield, 11.11 ± 0.21 (% ± standard deviation).
Keywords: Trichophyton soudanense, Trichophyton mentagrophyte, biomass, bioethanol, rice husk
Procedia PDF Downloads 680
707 Prediction of Finned Projectile Aerodynamics Using a Lattice-Boltzmann Method CFD Solution
Authors: Zaki Abiza, Miguel Chavez, David M. Holman, Ruddy Brionnaud
Abstract:
In this paper, the prediction of the aerodynamic behavior of the flow around a Finned Projectile will be validated using a Computational Fluid Dynamics (CFD) solution, XFlow, based on the Lattice-Boltzmann Method (LBM). XFlow is an innovative CFD software developed by Next Limit Dynamics. It is based on a state-of-the-art Lattice-Boltzmann Method which uses a proprietary particle-based kinetic solver and a LES turbulent model coupled with the generalized law of the wall (WMLES). The Lattice-Boltzmann method discretizes the continuous Boltzmann equation, a transport equation for the particle probability distribution function. From the Boltzmann transport equation, and by means of the Chapman-Enskog expansion, the compressible Navier-Stokes equations can be recovered. However to simulate compressible flows, this method has a Mach number limitation because of the lattice discretization. Thanks to this flexible particle-based approach the traditional meshing process is avoided, the discretization stage is strongly accelerated reducing engineering costs, and computations on complex geometries are affordable in a straightforward way. The projectile that will be used in this work is the Army-Navy Basic Finned Missile (ANF) with a caliber of 0.03 m. The analysis will consist in varying the Mach number from M=0.5 comparing the axial force coefficient, normal force slope coefficient and the pitch moment slope coefficient of the Finned Projectile obtained by XFlow with the experimental data. The slope coefficients will be obtained using finite difference techniques in the linear range of the polar curve. The aim of such an analysis is to find out the limiting Mach number value starting from which the effects of high fluid compressibility (related to transonic flow regime) lead the XFlow simulations to differ from the experimental results. 
This will allow identifying the critical Mach number which limits the validity of XFlow's isothermal formulation and beyond which a fully compressible solver implementing coupled momentum-energy equations would be required.Keywords: CFD, computational fluid dynamics, drag, finned projectile, lattice-boltzmann method, LBM, lift, mach, pitch
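The equilibrium step at the heart of any BGK-type lattice-Boltzmann solver can be sketched in a few lines; this is a minimal illustration of the standard D2Q9 form (not XFlow's proprietary kinetic solver), and the density and velocity values are arbitrary lattice-unit examples:

```python
# Standard D2Q9 BGK equilibrium distribution (lattice units, c_s^2 = 1/3):
# f_eq_i = w_i * rho * (1 + 3(c_i.u) + 4.5(c_i.u)^2 - 1.5|u|^2)

# D2Q9 lattice velocities and weights
C = [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1), (1, 1), (-1, 1), (-1, -1), (1, -1)]
W = [4/9] + [1/9] * 4 + [1/36] * 4

def equilibrium(rho, ux, uy):
    """Return the nine equilibrium populations for density rho and velocity (ux, uy)."""
    usq = ux * ux + uy * uy
    f = []
    for (cx, cy), w in zip(C, W):
        cu = cx * ux + cy * uy
        f.append(w * rho * (1 + 3 * cu + 4.5 * cu * cu - 1.5 * usq))
    return f

f_eq = equilibrium(1.0, 0.1, 0.0)
# The zeroth and first moments recover the macroscopic fields:
rho = sum(f_eq)                                       # should equal the input density
jx = sum(cx * f for (cx, _), f in zip(C, f_eq))       # should equal rho * ux
```

The exact recovery of density and momentum from the moments is what allows the Chapman-Enskog expansion mentioned above to reproduce the Navier-Stokes equations.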
Procedia PDF Downloads 421706 Leveraging the HDAC Inhibitory Pharmacophore to Construct Deoxyvasicinone Based Tractable Anti-Lung Cancer Agent and pH-Responsive Nanocarrier
Authors: Ram Sharma, Esha Chatterjee, Santosh Kumar Guru, Kunal Nepali
Abstract:
A tractable anti-lung cancer agent was identified via the installation of a ring-C-expanded synthetic analogue of the alkaloid vasicinone [7,8,9,10-tetrahydroazepino[2,1-b]quinazolin-12(6H)-one (TAZQ)] as the surface recognition part in the HDAC inhibitory three-component model. Notably, the candidature of TAZQ was deemed suitable for accommodation in the HDAC inhibitory pharmacophore as per the results of the fragment recruitment process conducted by our laboratory. TAZQ was pinpointed through the fragment screening program as a synthetically flexible fragment endowed with moderate cell-growth inhibitory activity against lung cancer cell lines, and it was anticipated that using this fragment to generate HDAC inhibitors bearing a hydroxamic acid functionality (zinc-binding motif) would boost the antitumor efficacy of TAZQ. Consistent with our aim of applying epigenetic targets to the treatment of lung cancer, a strikingly potent anti-lung cancer scaffold (compound 6) was pinpointed through a series of in-vitro experiments. Notably, the compound manifested a remarkable activity profile against KRAS- and EGFR-mutant lung cancer cell lines (IC50 = 0.80 - 0.96 µM), and the effects were found to be mediated through preferential HDAC6 inhibition (IC50 = 12.9 nM). In addition to HDAC6 inhibition, compound 6 also elicited HDAC1 and HDAC3 inhibitory activity, with IC50 values of 49.9 nM and 68.5 nM, respectively. The HDAC inhibitory ability of compound 6 was also confirmed by western blot experiments, which revealed its potential to decrease the expression levels of HDAC isoforms (HDAC1, HDAC3, and HDAC6). Complete downregulation of the HDAC6 isoform was exerted by compound 6 at 0.5 and 1 µM.
Moreover, in another western blot experiment, treatment with hydroxamic acid 6 led to upregulation of H3 acK9 and α-Tubulin acK40 levels, confirming its inhibitory activity toward both class I and class IIb HDACs. The results of other assays were also encouraging, as treatment with compound 6 led to suppression of the colony-formation ability of A549 cells, induction of apoptosis, and an increase in autophagic flux. In silico studies allowed us to rationalize the results of the experimental assays, and some key interactions of compound 6 with the amino acid residues of the HDAC isoforms were identified. In light of the impressive activity spectrum of compound 6, a pH-responsive nanocarrier (hyaluronic acid-compound 6 nanoparticles) was prepared. The dialysis-bag approach was used to assess the nanoparticles under both normal and acidic conditions, and the pH-sensitive nature of the hyaluronic acid-compound 6 nanoparticles was confirmed. Encouragingly, the nanoformulation was devoid of cytotoxicity against L929 mouse fibroblast cells (normal setting) and exhibited selective cytotoxicity toward the A549 lung cancer cell line. In a nutshell, compound 6 appears to be a promising candidate, and a detailed investigation of this compound might yield a therapeutic for the treatment of lung cancer.Keywords: HDAC inhibitors, lung cancer, scaffold, hyaluronic acid, nanoparticles
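The reported IC50 values can be read through a dose-response curve; the sketch below assumes a simple one-site Hill model with unit Hill slope, which is an illustration only and not the assay's fitted model:

```python
def fractional_inhibition(conc_nM, ic50_nM, hill=1.0):
    """One-site Hill model: inhibition = c^h / (c^h + IC50^h)."""
    return conc_nM**hill / (conc_nM**hill + ic50_nM**hill)

# By construction, the model gives 50% inhibition at c = IC50.
half = fractional_inhibition(12.9, 12.9)

# Predicted HDAC6 inhibition at 0.5 uM (500 nM), using the reported IC50 of 12.9 nM;
# near-complete inhibition at this concentration is consistent with the western blot result.
at_500nM = fractional_inhibition(500.0, 12.9)
```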
Procedia PDF Downloads 95705 Q Slope Rock Mass Classification and Slope Stability Assessment Methodology Application in Steep Interbedded Sedimentary Rock Slopes for a Motorway Constructed North of Auckland, New Zealand
Authors: Azariah Sosa, Carlos Renedo Sanchez
Abstract:
The development of a new motorway north of Auckland (New Zealand) includes steep rock cuts, from 63 up to 85 degrees, in an interbedded sandstone and siltstone rock mass of the geological unit Waitemata Group (Pakiri Formation), which shows sub-horizontal bedding planes, various sub-vertical joint sets, and a diverse weathering profile. In this kind of rock mass -that can be classified as a weak rock- the definition of the stable maximum geometry is not only governed by discontinuities and defects evident in the rock but is important to also consider the global stability of the rock slope, including (in the analysis) the rock mass characterisation, influence of the groundwater, the geological evolution, and the weathering processes. Depending on the weakness of the rock and the processes suffered, the global stability could, in fact, be a more restricting element than the potential instability of individual blocks through discontinuities. This paper discusses those elements that govern the stability of the rock slopes constructed in a rock formation with favourable bedding and distribution of discontinuities (horizontal and vertical) but with a weak behaviour in terms of global rock mass characterisation. In this context, classifications as Q-Slope and slope stability assessment methodology (SSAM) have been demonstrated as important tools which complement the assessment of the global stability together with the analytical tools related to the wedge-type failures and limit equilibrium methods. The paper focuses on the applicability of these two new empirical classifications to evaluate the slope stability in 18 already excavated rock slopes in the Pakiri formation through comparison between the predicted and observed stability issues and by reviewing the outcome of analytical methods (Rocscience slope stability software suite) compared against the expected stability determined from these rock classifications. 
This exercise will help validate such findings and correlations arising from the two empirical methods in order to adjust the methods to the nature of this specific kind of rock mass and provide a better understanding of the long-term stability of the slopes studied.Keywords: Pakiri formation, Q-slope, rock slope stability, SSAM, weak rock
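For readers unfamiliar with the Q-slope index, the classification reduces to a product of rating terms plus an empirical angle relation; the sketch below assumes the published Barton & Bar formulation, and the input ratings are illustrative rather than site-specific values for the Pakiri formation:

```python
import math

def q_slope(rqd, jn, jr, ja, jwice, srf_slope):
    """Q_slope = (RQD/Jn) * (Jr/Ja) * (Jwice/SRF_slope)   (Barton & Bar formulation)."""
    return (rqd / jn) * (jr / ja) * (jwice / srf_slope)

def steepest_stable_angle_deg(q):
    """Empirical steepest stable slope angle: beta = 20*log10(Q_slope) + 65 degrees."""
    return 20 * math.log10(q) + 65

# Illustrative ratings only (moderately jointed, damp, weak rock assumptions)
q = q_slope(rqd=60, jn=9, jr=1.5, ja=2, jwice=0.7, srf_slope=2.5)
beta = steepest_stable_angle_deg(q)
```

For Q_slope = 1 the relation gives 65 degrees, which is why the 63-85 degree cuts described above demand Q_slope values near or above unity to be predicted stable.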
Procedia PDF Downloads 208704 Prospective Cohort Study on Sequential Use of Catheter with Misoprostol vs Misoprostol Alone for Second Trimester Medical Abortion
Authors: Hanna Teklu Gebregziabher
Abstract:
Background: A variety of techniques can be used for medical termination of second-trimester pregnancy, but there is no consensus about which is best. Although most evidence suggests that the combined use of an intracervical Foley catheter and vaginal misoprostol is a safe, effective, and acceptable method for termination of second-trimester pregnancy, comparable to the mifepristone-misoprostol combination regimen with lower cost and no additional maternal risk, the use of mifepristone and misoprostol alone, with no other procedure, is still the most common protocol in different institutions for second-trimester pregnancy. Methods: A cross-sectional comparative prospective study design was employed on women who were admitted for second-trimester medical abortion and in whom medical abortion had failed or the cervical status was unchanged 24 hours after the first dose of misoprostol. The study was conducted at St. Paul's Hospital Millennium Medical College. A sample of 44 participants in each arm was necessary for a two-tailed test with a type I error of 5%, 80% statistical power, and a 1:1 ratio among groups. Thus, a total of 94 cases, 47 from each arm, were recruited. Data were entered and cleaned using Epi Info, analyzed using SPSS version 29.0, and presented in descriptive and tabular forms. Variables were cross-tabulated and compared for significant differences using the chi-square test and the independent t-test. Result: There was a significant difference between the two groups in induction-to-expulsion time and the number of doses used. The mean ± SD induction-to-expulsion time was 48.09 ± 11.86 for misoprostol alone and 36.7 ± 6.772 for the trans-cervical catheter used sequentially with misoprostol. Conclusion: The sequential use of a trans-cervical Foley catheter with misoprostol is a more effective, safe, and easily accessible procedure.
In addition, the catheter costs less than misoprostol and is readily available. As a good alternative, we recommend the sequential trans-cervical catheter approach for medical abortions performed in the second trimester.Keywords: second trimester, medical abortion, catheter, misoprostol
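The reported group means and standard deviations are enough to reconstruct an unequal-variance (Welch) t statistic; the sketch below assumes n = 47 per arm as stated in the recruitment, and uses only the summary statistics quoted in the abstract:

```python
import math

def welch_t_from_stats(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from group means, SDs, and sizes (unequal variances assumed)."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m1 - m2) / se

# Reported: misoprostol alone 48.09 +/- 11.86; catheter + misoprostol 36.7 +/- 6.772;
# 47 participants per arm (assumed equal to the stated recruitment).
t = welch_t_from_stats(48.09, 11.86, 47, 36.7, 6.772, 47)
```

A t statistic of this magnitude on roughly 70 degrees of freedom is consistent with the abstract's claim of a significant between-group difference.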
Procedia PDF Downloads 46703 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University
Authors: Mukisa Simon Peter Turker, Etomaru Irene
Abstract:
This study examined factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. This was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the science academic units. The study adopted a predictive cross-sectional survey design. Data were collected using a questionnaire survey of 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the samples. The study used descriptive statistics and multiple linear regression analysis to analyze the data. The findings reveal a coefficient of determination (R-square) of 0.403 at a significance level of 0.000, suggesting that the four factors jointly explained 40.3% of the variance in UIC success, with a standard error of the estimate of 0.60188. The strength of association between institutional, relational, output, and framework factors, taking into consideration all interactions among the study variables, was R = 0.635. Institutional, relational, output, and framework factors accounted for 34% of the variance in the level of UIC success (adjusted R2 = 0.338); the remaining 66% is explained by other factors. The standardized coefficients revealed that relational factors (β = 0.454, t = 5.247, p = 0.000) and framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAUs at Makerere University. Output factors (β = 0.082, t = 1.096, p = 0.275) and institutional factors (β = 0.023, t = 0.292, p = 0.771) were statistically insignificant determinants of the success of UIC in the science academic units at Makerere University.
The study concludes that Relational Factors and Framework Factors positively and significantly determine the success of UIC, but output factors and institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate Relational and Framework Factors to enhance UIC at Makerere University and further research on the effects of Institutional and Output factors on the success of UIC in universities.Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors
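The relation between R-square and adjusted R-square quoted above follows the textbook correction for the number of predictors; the sketch below shows the generic formula with illustrative numbers only, since the study's exact adjusted value depends on its fitted model specification:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1) for n cases and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative values (not the study's): R^2 = 0.5 from 100 cases and 4 predictors.
adj = adjusted_r2(r2=0.5, n=100, k=4)
```

The correction always pulls R-square downward, and more strongly when the sample is small relative to the number of predictors.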
Procedia PDF Downloads 61702 Model for Calculating Traffic Mass and Deceleration Delays Based on Traffic Field Theory
Authors: Liu Canqi, Zeng Junsheng
Abstract:
This study identifies two typical bottlenecks that occur when a vehicle cannot change lanes: car following and car stopping. The ideas of the traffic field and traffic mass are presented in this work. When there are other vehicles in front of the target vehicle within a particular distance, a force is created that affects the target vehicle's driving speed. The traffic mass is determined jointly by the characteristics of the driver and the vehicle; the driving speed of the vehicle and external variables have no bearing on it. At the physical level, this study examines the car-following bottleneck, identifies the external factors that influence driving, takes into account that the vehicle transforms kinetic energy into potential energy during deceleration, and builds a calculation model for traffic mass. From an economic standpoint, the energy-time conversion coefficient is derived from the social average wage level and the average cost of motor fuel. The Vissim simulation program measures the vehicle's deceleration distance and delay under the Wiedemann car-following model. The difference between the measured deceleration delay obtained by simulation and the theoretical value calculated by the model is compared using the conversion calculation model of traffic mass and deceleration delay. The experimental data demonstrate that the model is reliable, since the error rate between the theoretical deceleration delay calculated by the model and the measured simulation value is less than 10%. The article concludes that the traffic field has an impact on moving cars on the road and that physical and socioeconomic factors should be taken into account when studying car-following behavior.
The deceleration delay of a vehicle and its traffic mass have a socioeconomic relationship that can be used to calculate the energy-time conversion coefficient when dealing with the bottleneck of cars stopping and starting.Keywords: traffic field, social economics, traffic mass, bottleneck, deceleration delay
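The kinetic-energy loss during deceleration, and its translation into a time equivalent, can be sketched as follows; the conversion coefficient here is a placeholder value, whereas the paper derives its own coefficient from average wage levels and fuel prices, and the vehicle mass and speeds are likewise illustrative:

```python
def kinetic_energy_loss(mass_kg, v1_ms, v2_ms):
    """Kinetic energy released when decelerating from v1 to v2, in joules."""
    return 0.5 * mass_kg * (v1_ms**2 - v2_ms**2)

# Hypothetical energy-time conversion coefficient (seconds of delay-equivalent per joule);
# the paper's actual coefficient comes from social average wages and motor-fuel cost.
K_ET = 2e-4

dE = kinetic_energy_loss(1500, 16.7, 8.3)   # ~60 km/h down to ~30 km/h
delay_equiv_s = K_ET * dE                   # time-equivalent of the energy lost
```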
Procedia PDF Downloads 67701 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy
Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi
Abstract:
The performance of material surfaces under wear and corrosion environments cannot be achieved by conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limit their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade-five titanium alloy using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous-wave ytterbium laser system (YLS) attached to a KUKA robot, which controls the movement of the cladding process, was utilized to fabricate the coatings. The titanium-cladded surfaces were investigated for hardness, corrosion, and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28 °C, and studied using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as width, height, and the Heat Affected Zone (HAZ) of each sample, remarkably increased with increasing laser power due to the laser-material interaction. Higher amounts of aluminum and titanium were observed in the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance, and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through in situ metallurgical reactions during the LMD process.
The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform crack free surface due to optimized laser process parameters which led to the refinement of the coatings.Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance
Procedia PDF Downloads 178700 Variation of Litter Chemistry under Intensified Drought: Consequences on Flammability
Authors: E. Ormeno, C. Gutigny, J. Ruffault, J. Madrigal, M. Guijarro, C. Lecareux, C. Ballini
Abstract:
Mediterranean plant species feature numerous metabolic and morpho-physiological responses crucial for survival under both typical Mediterranean drought conditions and the aggravated future drought expected under climate change. Whether these adaptive responses will, in turn, increase ecosystem perturbation in terms of fire hazard is an issue that needs to be addressed. The aim of this study was to test whether recurrent and aggravated drought in the Mediterranean area favors the accumulation of waxes in leaf litter, with an eventual increase of litter flammability. The study was conducted in 2017 in a garrigue in Southern France dominated by Quercus coccifera, where two drought treatments were used: a treatment with recurrent aggravated drought consisting of ten rain-exclusion structures that have withdrawn part of the annual precipitation since January 2012, and a natural drought treatment in which Q. coccifera stands are free of such structures and thus grow under natural precipitation. Waxes were extracted with organic solvent and analyzed by GC-MS, and litter flammability was assessed through measurements of ignition delay, flame residence time, and flame intensity (flame height) using an epiradiator, as well as the heat of combustion using an oxygen bomb calorimeter. Results show that after 5 years of rain restriction, wax content in the cuticle of leaf litter increases significantly compared with shrubs growing under natural precipitation, consistent with the theoretical expectation that cuticular waxes increase in green leaves in order to limit water loss through evapotranspiration. Wax concentrations were also linearly and positively correlated with litter flammability, a correlation attributable to the high flammability of the long-chain alkanes (C25-C31) found in leaf-litter waxes. This innovative investigation shows that climate change is likely to favor ecosystem fire hazard through the accumulation of highly flammable waxes in litter.
It also adds valuable information about the types of metabolites that are associated with increasing litter flammability, since so far, within the leaf metabolic profile, only terpene-like compounds had been related to plant flammability.Keywords: cuticular waxes, drought, flammability, litter
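The linear, positive wax-flammability relationship reported above corresponds to a Pearson correlation coefficient; the sketch below implements it in plain Python with made-up illustrative values, not the study's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (made-up) wax concentrations vs flame heights
wax = [10, 14, 18, 22, 26]
flame = [12, 15, 19, 24, 27]
r = pearson_r(wax, flame)   # close to +1 for a strong positive linear relation
```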
Procedia PDF Downloads 171699 A Survey on Students' Intentions to Dropout and Dropout Causes in Higher Education of Mongolia
Authors: D. Naranchimeg, G. Ulziisaikhan
Abstract:
The student dropout problem has not been investigated within Mongolian higher education until recently. Dropping out is a personal decision, but it may cause unemployment and other social problems, including a low quality of life, because students who have not completed a degree cannot find better-paid jobs. The research aims to determine the percentage of at-risk students, understand the reasons for dropout, and find a way to predict it. The study covered students of the Mongolian National University of Education, including its Arkhangai branch school, the National University of Mongolia, the Mongolian University of Life Sciences, the Mongolian University of Science and Technology, the Mongolian National University of Medical Science, Ikh Zasag International University, and Dornod University. We conducted a paper survey by random sampling, surveying about 100 students per university. With a margin of error of 4% and a confidence level of 90%, the sample size was 846, but we excluded 56 students from this study because of missing data on the questionnaire. The survey comprised 17 questions, 4 of which were demographic. The survey shows that 1.4% of the students always thought about dropping out, whereas 61.8% thought about it sometimes. The results also suggest that dropout does not depend on students' sex, marital or social status, or peer and faculty climate, whereas it depends slightly on their chosen specialization. Finally, the paper presents the reasons for dropping out given by the students. The two main reasons are personal reasons related to choosing the wrong study program or not liking the course they had chosen (50.38%), and financial difficulties (42.66%).
These findings reveal the importance of early prevention of dropout where possible, combined with increased attention to helping high school students choose the right study program and targeted financial support for those at risk.Keywords: at risk students, dropout, faculty climate, Mongolian universities, peer climate
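The stated 4% margin of error at 90% confidence can be checked against Cochran's sample-size formula; the sketch below is the generic single-population estimate, and the study's larger total of 846 presumably reflects its multi-university design (roughly 100 students per institution) rather than this formula alone:

```python
import math

def cochran_sample_size(z, p, e):
    """Cochran's formula: n = z^2 * p * (1 - p) / e^2, for an effectively infinite population."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 90% confidence (z ~ 1.645), worst-case proportion p = 0.5, 4% margin of error
n = cochran_sample_size(1.645, 0.5, 0.04)
```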
Procedia PDF Downloads 397698 Socio-cultural Dimensions Inhibiting Female Condom Use by the Female Students: Experiences from a University in Rural South Africa
Authors: Christina Tafadzwa
Abstract:
Global HIV and AIDS trends show that Sub-Saharan Africa is the hardest-hit region, and women are disproportionately affected and infected by HIV. The trend is conspicuous in South Africa, where adolescent girls and young women (AGYW), female university students included, bear the burden of HIV infection. Although the female condom (FC) is the only female-oriented HIV and AIDS technology that provides dual protection against unwanted pregnancy and HIV, its uptake and use remain erratic, especially among youth and young women in institutions of higher learning. This paper explores empirical evidence from the University of Venda (UniVen), located in a rural area of Limpopo Province, South Africa, which is among the higher-learning institutions experiencing low uptake and use of the FC. A phenomenological approach consisting of in-depth interviews was utilized to collect data from a total of 20 female university students at UniVen, purposively sampled based on their participation in HIV and AIDS dialogues and campaigns conducted on campus. The findings, analysed thematically, revealed that notions of rurality and sociocultural beliefs surrounding women's sexual and reproductive health are key structural factors behind the low use and uptake of the FC at the rural university. The evidence revealed that female students are discouraged from collecting or initiating FC use because of cultural dictates or prescripts which place the responsibility to collect and initiate condom use on men. Hence the inference that UniVen female students' realities are compounded by notions of rurality and society's patriarchal nature, which intersect and limit women's autonomy in matters of sex. Guided by women's empowerment theory, this paper argues that such practices take away UniVen female students' agency to decide on their sexual and reproductive health.
The normalisation of socio-cultural and harmful gender practices is also a retrogression in the women's health agenda. The paper recommends a holistic approach that engages traditional and community leaders, particularly men, to unlearn and uproot harmful gender norms and patriarchal elements that hinder the promotion and use of the FC.Keywords: female condom, UniVen, socio-cultural factors, female students, HIV and AIDS
Procedia PDF Downloads 88697 Automatic Content Curation of Visual Heritage
Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara DéFayes, Mathieu Salzmann, FréDéRic Kaplan, Nicolas Henchoz
Abstract:
Digitization and preservation of large heritage collections incur high maintenance costs to keep up with technical standards and ensure sustainable access. Creating impactful usage is instrumental to justifying the resources for long-term preservation. The Museum für Gestaltung Zürich holds one of the biggest poster collections in the world, of which 52'000 posters were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. The challenge has similarities with the domain of song-playlist algorithms. Recently, artificial intelligence techniques, and more specifically deep-learning algorithms, have been used to facilitate their generation. Promising results were found thanks to Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We used the same principles to create the proposed algorithm, but applied to a challenging medium: posters. First, a convolutional autoencoder was trained to extract features from the posters, using the 52'000 digital posters as the training set. The poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones. The RNN training set was composed of poster sequences extracted from a collection of books from the Museum für Gestaltung Zürich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected, where the mean squared distance between poster features is used to compute the proximity.
To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences. Manual sequences were created by a professional graphic designer. We asked 21 participants working as professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25% and third 15% of the time. The sequences produced by our predictive model were ranked first 25%, second 45% and third 30% of the time. The sequences produced randomly were ranked first 15%, second 29%, and third 55% of the time. Compared to designer sequences, and as reported by participants, model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing compared to random sampling. Eventually, our algorithm is sometimes able to outperform a professional designer. As a next step, the proposed algorithm should include a possibility to create sequences according to a selected theme. To conclude, this work shows the potentiality of artificial intelligence techniques to learn from existing content and provide a tool to curate large sets of data, with a permanent renewal of the presented content.Keywords: Artificial Intelligence, Digital Humanities, serendipity, design research
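The final selection step described above, picking the poster in the predicted cluster with the best proximity to the previous one, can be sketched as follows (toy feature vectors and hypothetical poster ids; the real system uses autoencoder features):

```python
def mean_squared_distance(a, b):
    """Mean squared distance between two feature vectors of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def next_poster(prev_features, candidates):
    """Pick the candidate poster (id, features) closest to the previous poster's features."""
    return min(candidates, key=lambda c: mean_squared_distance(prev_features, c[1]))[0]

# Toy example: three posters in the predicted cluster, 4-dimensional feature vectors
prev = [0.2, 0.8, 0.1, 0.5]
cluster = [
    ("poster_a", [0.9, 0.1, 0.7, 0.3]),
    ("poster_b", [0.25, 0.75, 0.15, 0.45]),
    ("poster_c", [0.5, 0.5, 0.5, 0.5]),
]
chosen = next_poster(prev, cluster)
```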
Procedia PDF Downloads 184696 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for safe and secure decommissioning of nuclear facilities. An important aspect for the management of the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well-developed. As with any repetitive and routine tasks, there is the opportunity to improve the task of classifying nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages; 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on gathered evidence and finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches for waste classification are made. Object detection focusses initially on cylindrical objects since pipework is significant in nuclear cells and indeed any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation of stage two, characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the objects detected in order to feature match them to an inventory of possible items found in that nuclear cell. 
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, expert input may be required to feature-match objects. The third stage, radiological mapping, proceeds similarly, characterizing the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to derive the classification of the waste in Bq/kg, thus enabling better decision-making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case-study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advances in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective, and safer.Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
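The last stage, mapping fused specific-activity estimates to a disposal class, can be sketched as follows; the thresholds below are illustrative UK-style limits (4 GBq/tonne alpha and 12 GBq/tonne beta-gamma for low-level waste, converted to Bq/kg), and a real classifier would use the site's regulatory values and further subdivisions:

```python
# Illustrative specific-activity thresholds in Bq/kg:
# 4 GBq/t alpha = 4e6 Bq/kg, 12 GBq/t beta-gamma = 1.2e7 Bq/kg
ALPHA_LLW_BQ_PER_KG = 4e6
BETAGAMMA_LLW_BQ_PER_KG = 1.2e7

def classify_waste(alpha_bq_per_kg, betagamma_bq_per_kg):
    """Map fused activity estimates (Bq/kg) to a coarse waste-class label."""
    if alpha_bq_per_kg > ALPHA_LLW_BQ_PER_KG or betagamma_bq_per_kg > BETAGAMMA_LLW_BQ_PER_KG:
        return "ILW"   # exceeds low-level limits -> intermediate-level waste
    return "LLW"

label = classify_waste(alpha_bq_per_kg=2e5, betagamma_bq_per_kg=3e6)
```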
Procedia PDF Downloads 200695 Modeling and Simulation of the Structural, Electronic and Magnetic Properties of Fe-Ni Based Nanoalloys
Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz
Abstract:
There is a growing interest in the modeling and simulation of magnetic nanoalloys by various computational methods. Magnetic crystalline/amorphous nanoparticles (NP) are interesting materials from both the applied and fundamental points of view, as their properties differ from those of bulk materials and are essential for advanced applications such as high-performance permanent magnets, high-density magnetic recording media, drug carriers, sensors in biomedical technology, etc. As an important magnetic material, Fe-Ni based nanoalloys have promising applications in the chemical industry (catalysis, battery), aerospace and stealth industry (radar absorbing material, jet engine alloys), magnetic biomedical applications (drug delivery, magnetic resonance imaging, biosensor) and computer hardware industry (data storage). The physical and chemical properties of the nanoalloys depend not only on the particle or crystallite size but also on composition and atomic ordering. Therefore, computer modeling is an essential tool to predict structural, electronic, magnetic and optical behavior at atomistic levels and consequently reduce the time for designing and development of new materials with novel/enhanced properties. Although first-principles quantum mechanical methods provide the most accurate results, they require huge computational effort to solve the Schrodinger equation for only a few tens of atoms. On the other hand, molecular dynamics method with appropriate empirical or semi-empirical inter-atomic potentials can give accurate results for the static and dynamic properties of larger systems in a short span of time. In this study, structural evolutions, magnetic and electronic properties of Fe-Ni based nanoalloys have been studied by using molecular dynamics (MD) method in Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) and Density Functional Theory (DFT) in the Vienna Ab initio Simulation Package (VASP). 
The effects of particle size (in the 2-10 nm range) and temperature (300-1500 K) on the stability and structural evolution of amorphous and crystalline Fe-Ni bulk alloys and nanoalloys have been investigated by combining MD simulation with the Embedded Atom Model (EAM). EAM is applicable to Fe-Ni based bimetallic systems because it considers both the pairwise interatomic interaction potentials and the electron densities. The structural evolution of Fe-Ni bulk material and nanoparticles (NPs) has been studied by calculating radial distribution functions (RDF), interatomic distances, coordination numbers, and core-to-surface concentration profiles, as well as through Voronoi analysis and the dependence of surface energy on temperature and particle size. Moreover, spin-polarized DFT calculations were performed using a plane-wave basis set with generalized gradient approximation (GGA) exchange-correlation effects in the VASP-MedeA package to predict the magnetic and electronic properties of the Fe-Ni based alloys in bulk and nanostructured phases. The results of theoretical modeling and simulation of the structural evolution and the magnetic and electronic properties of Fe-Ni based nanostructured alloys were compared with experimental and other theoretical results published in the literature.
Keywords: density functional theory, embedded atom model, Fe-Ni systems, molecular dynamics, nanoalloys
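The radial distribution function mentioned above is a standard structural probe in MD post-processing. A minimal sketch of an RDF calculation under periodic boundary conditions is given below; the box size, particle count, and bin width are illustrative stand-ins for real LAMMPS trajectory data, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
L = 20.0                                # cubic box edge (arbitrary units, assumed)
N = 500                                 # particle count (assumed)
pos = rng.uniform(0.0, L, size=(N, 3))  # random configuration as a stand-in

# Minimum-image pairwise separations under periodic boundaries.
d = pos[:, None, :] - pos[None, :, :]
d -= L * np.round(d / L)
r = np.sqrt((d ** 2).sum(axis=-1))
r = r[np.triu_indices(N, k=1)]          # keep each pair once

# Histogram pair distances and normalize by the ideal-gas expectation.
bins = np.arange(0.0, L / 2, 0.25)      # valid up to L/2 with minimum image
hist, edges = np.histogram(r, bins=bins)
rho = N / L**3
shell = (4.0 / 3.0) * np.pi * (edges[1:]**3 - edges[:-1]**3)
g = hist / (shell * rho * N / 2)        # g(r) ~ 1 for an uncorrelated system

print(g[:5])
```

For a real nanoparticle (non-periodic, finite cluster) the minimum-image step is dropped and the normalization must account for the finite geometry; for a liquid-like bulk sample, g(r) tends to 1 at large r, as this uncorrelated example shows.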
Procedia PDF Downloads 243
694 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder
Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh
Abstract:
In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is the key issue in solving this problem. Finding features that give distinctive pictures for different activities, and similar pictures for the same activity, is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while too few features compromise performance. In this paper, a novel approach to selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the deep autoencoder network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing-gradient problem and the need to normalize the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, four hidden layers are used in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both.
The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than both in terms of classification accuracy.
Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization
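The core idea of HO-DAE, gradient-free minimization of the autoencoder's reconstruction MSE, can be sketched in a few lines. The example below is a toy illustration, not the authors' implementation: the data are random stand-ins for extracted EEG features, the layer sizes are invented, and a simple random-perturbation search stands in for whatever meta-heuristic the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for features extracted from EEG signals: 100 samples x 16 features.
X = rng.standard_normal((100, 16))

# Four hidden layers (12 -> 6 -> 6 -> 12), with the innermost pair acting as the code.
sizes = [16, 12, 6, 6, 12, 16]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes, sizes[1:])]

def forward(ws, x):
    h = x
    for w in ws[:-1]:
        h = np.tanh(h @ w)       # nonlinear hidden layers
    return h @ ws[-1]            # linear reconstruction layer

def mse(ws):
    return float(np.mean((forward(ws, X) - X) ** 2))

# Heuristic (random-perturbation) search minimizing reconstruction MSE --
# a crude, gradient-free stand-in for the meta-heuristic optimizer.
initial = best = mse(weights)
for _ in range(200):
    trial = [w + rng.standard_normal(w.shape) * 0.02 for w in weights]
    err = mse(trial)
    if err < best:               # keep only improving perturbations
        weights, best = trial, err

print(round(best, 4))
```

Because only improving moves are accepted, the reconstruction error is monotonically non-increasing; the activations of the innermost layers after optimization would serve as the reduced feature set fed to the classifier.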
Procedia PDF Downloads 114
693 Technology of Electrokinetic Disintegration of Virginia Fanpetals (Sida hermaphrodita) Biomass in a Biogas Production System
Authors: Mirosław Krzemieniewski, Marcin Zieliński, Marcin Dębowski
Abstract:
Electrokinetic disintegration is one of the high-voltage electric methods. The design of such systems is exceptionally simple. Biomass flows through a system of pipes with electrodes mounted alongside that generate an electric field. Discharges in the electric field deform cell walls and lead to their successive perforation, thereby making their contents easily available to bacteria. The spark-over occurs between the electrode surface and the pipe jacket, which acts as the second pole and closes the circuit. The voltage ranges from 10 to 100 kV. The electrodes are supplied with single-phase current from the normal power grid (230 V, 50 Hz), which is then converted to 24 V direct current in modules serving the individual electrodes, and this current directly feeds the electrodes. The installation is completely safe because the generated current does not exceed 250 mA and the conductors are grounded. Therefore, there is no risk of electric shock to the personnel, even in the case of failure or incorrect connection. The low current also means small energy consumption by each electrode, which is extremely low compared to other disintegration methods: only 35 W per electrode. The DN150 pipes carrying the electrodes are made of acid-proof steel and connected on both sides with 90° elbows ending in flanges. The available S and U pipe types allow very convenient fitting of the system into existing installations and rooms, and facilitate space management in new applications. The pipe system for electrokinetic disintegration may be installed horizontally, vertically, at an angle, on special stands, or directly on the wall of a room. The number of pipes and electrodes is determined by the operating conditions as well as the quantity of substrate, the type of biomass, the dry matter content, the method of disintegration (single-pass or circulatory), the mounting site, etc.
The most effective approach involves pre-treatment of the substrate, which may be pumped through the disintegration system on its way to the fermentation tank or recirculated in a buffered intermediate tank (substrate mixing tank). The destruction of biomass structure during electrokinetic disintegration shortens substrate retention time in the tank and accelerates biogas production. A significant intensification of the fermentation process was observed in systems operating at technical scale, with the greatest increase in biogas production reaching 18%. A secondary effect, highly significant for the energy balance, is a tangible decrease in the energy input of the agitators in the tanks. It is due to the reduced viscosity of the biomass after disintegration and may yield energy savings reaching 20-30% of the previously recorded consumption. Other observed phenomena include a reduction in the layer of surface scum, a reduced tendency of the sewage to foam, and a successive decrease in the quantity of bottom sludge banks. Considering the above, the electrokinetic disintegration system seems a very interesting and valuable solution among the specialist equipment offered for processing plant biomass, including Virginia fanpetals, before methane fermentation.
Keywords: electrokinetic disintegration, biomass, biogas production, fermentation, Virginia fanpetals
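The energy balance hinted at in the abstract can be illustrated with back-of-the-envelope arithmetic. The 35 W per electrode and the 20-30% agitator savings are taken from the abstract; the electrode count and installed agitator power below are purely hypothetical plant parameters chosen for illustration.

```python
# Illustrative energy balance for an electrokinetic disintegration line.
N_ELECTRODES = 12          # hypothetical number of electrodes in the line
ELECTRODE_W = 35.0         # W per electrode (figure from the abstract)
AGITATOR_KW = 15.0         # hypothetical installed agitator power
SAVINGS = (0.20, 0.30)     # 20-30% agitator savings (figure from the abstract)

# Power drawn by the disintegration electrodes, in kW.
disintegration_kw = N_ELECTRODES * ELECTRODE_W / 1000.0

# Range of agitator power saved thanks to reduced biomass viscosity, in kW.
saved_kw = tuple(AGITATOR_KW * s for s in SAVINGS)

print(disintegration_kw)
print(saved_kw)
```

Under these assumed parameters, the electrodes draw well under 1 kW while the agitator savings reach several kW, which is the sense in which the abstract calls the secondary effect "highly significant for the energy balance".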
Procedia PDF Downloads 377
692 Identification and Optimisation of South Africa's Basic Access Road Network
Authors: Diogo Prosdocimi, Don Ross, Matthew Townshend
Abstract:
Road authorities are mandated within limited budgets to both deliver improved access to basic services and facilitate economic growth. This responsibility is further complicated if maintenance backlogs and funding shortfalls exist, as evident in many countries including South Africa. These conditions require authorities to make difficult prioritisation decisions, with the effect that Road Asset Management Systems with a one-dimensional focus on traffic volumes may overlook the maintenance of low-volume roads that provide isolated communities with vital access to basic services. Given these challenges, this paper overlays the full South African road network with geo-referenced information for population, primary and secondary schools, and healthcare facilities to identify the network of connective roads between communities and basic service centres. This connective network is then rationalised according to the Gross Value Added and number of jobs per mesozone, administrative and functional road classifications, speed limit, and road length, location, and name to estimate the Basic Access Road Network. A two-step floating catchment area (2SFCA) method, capturing a weighted assessment of drive-time to service centres and the ratio of people within a catchment area to teachers and healthcare workers, is subsequently applied to generate a Multivariate Road Index. This Index is used to assign higher maintenance priority to roads within the Basic Access Road Network that provide more people with better access to services. The relatively limited incidence of Basic Access Roads indicates that authorities could maintain the entire estimated network without exhausting the available road budget before practical economic considerations get any purchase. 
Despite this fact, a final case study modelling exercise is performed for the Namakwa District Municipality to demonstrate the extent to which optimal relocation of schools and healthcare facilities could minimise the Basic Access Road Network and thereby release budget for investment in the roads that best promote GDP growth.
Keywords: basic access roads, multivariate road index, road prioritisation, two-step floating catchment area method
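The 2SFCA method referenced in this abstract is well defined in the accessibility literature: step one computes a provider-to-population ratio within each facility's catchment, step two sums those ratios over the facilities reachable from each zone. A minimal sketch with a binary (threshold) catchment follows; the travel times, populations, and provider capacities are made up for illustration, and the paper's drive-time weighting is simplified to a hard cut-off.

```python
import numpy as np

# Travel time (minutes) from population zone i to facility j (assumed values).
t = np.array([[10.0, 40.0],
              [20.0, 15.0],
              [50.0, 10.0]])
pop = np.array([1000.0, 500.0, 800.0])   # people per zone (assumed)
cap = np.array([5.0, 3.0])               # e.g. teachers or health workers (assumed)
T0 = 30.0                                # catchment drive-time threshold (assumed)

within = t <= T0                         # binary catchment membership matrix

# Step 1: provider-to-population ratio within each facility's catchment.
R = cap / (within * pop[:, None]).sum(axis=0)

# Step 2: accessibility of each zone = sum of ratios of reachable facilities.
A = (within * R).sum(axis=1)
print(A)
```

Zones served by more, or less congested, facilities score higher; an index of this kind, combined with the service-level weighting described in the abstract, is what the authors aggregate into the Multivariate Road Index used to prioritise maintenance.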
Procedia PDF Downloads 231