Search results for: multi features
6111 Supergrid Modeling and Operation and Control of Multi Terminal DC Grids for the Deployment of a Meshed HVDC Grid in South Asia
Authors: Farhan Beg, Raymond Moberly
Abstract:
The Indian subcontinent faces a massive challenge with regard to energy security in its member countries: to provide reliable electricity that facilitates development across various sectors of the economy and consequently achieves developmental targets. The instability of the current precarious situation is observable in frequent system failures and blackouts. This paper proposes the deployment of an interconnected electricity ‘Supergrid’ designed to carry large quantities of power across the Indian subcontinent. Besides enabling energy security in the subcontinent, it will also provide a platform for integrating Renewable Energy Sources (RES). The paper assesses the need and conditions for Supergrid deployment and consequently proposes a meshed topology based on Voltage Source Converter based High Voltage Direct Current (VSC-HVDC) converters for the Supergrid model. Various control schemes for voltage and power are used to regulate the network parameters. A three-terminal Multi Terminal Direct Current (MTDC) network is used for the simulations.
Keywords: super grid, wind and solar energy, high voltage direct current, electricity management, load flow analysis
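Voltage and power control in an MTDC grid is often implemented as DC voltage droop at each converter terminal. The abstract does not specify the exact scheme used, so the following is only an illustrative sketch of a droop law with hypothetical per-unit values:

```python
def droop_power_setpoint(v_dc, v_ref, p_ref, k_droop):
    """DC voltage droop control for one VSC terminal (illustrative sketch).

    Each converter adjusts its power order in proportion to the deviation
    of the local DC voltage from its reference, so that several terminals
    share the burden of regulating the grid voltage.
    All quantities are in per-unit; k_droop is the droop gain.
    """
    return p_ref + k_droop * (v_ref - v_dc)
```

For example, a terminal with a 0.5 p.u. power order and droop gain 10 raises its setpoint to 0.7 p.u. when the local DC voltage sags to 0.98 p.u.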
Procedia PDF Downloads 428
6110 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties stemming from numerous issues such as the lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and for different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for estimating weather forecast parameters, reducing direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter that is quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada) and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. Multi-model superensemble training using similar conditions is also discussed in the present study; it is based on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
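The training and forecast phases described above can be sketched as a least-squares regression on model anomalies. This is a minimal illustration of the superensemble concept, not the authors' implementation; the array shapes and variable names are assumptions:

```python
import numpy as np

def superensemble_weights(forecasts, obs):
    """Training phase: regress observed anomalies on model forecast anomalies.

    forecasts: (T, M) array of M member-model forecasts over T training days.
    obs: (T,) array of verifying observations.
    Returns per-model weights plus the climatological means needed to
    assemble a forecast.
    """
    f_mean = forecasts.mean(axis=0)        # per-model training climatology
    o_mean = obs.mean()                    # observed training climatology
    anomalies = forecasts - f_mean
    weights, *_ = np.linalg.lstsq(anomalies, obs - o_mean, rcond=None)
    return weights, f_mean, o_mean

def superensemble_forecast(new_forecasts, weights, f_mean, o_mean):
    # Forecast phase: weighted sum of member anomalies added to climatology.
    return o_mean + (new_forecasts - f_mean) @ weights
```

With a full-rank training matrix, the regression recovers exactly the linear combination of members that best explains the observations.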
Procedia PDF Downloads 139
6109 Types of Neurons in the Spinal Trigeminal Nucleus of the Camel Brain: Golgi Study
Authors: Qasim A. El Dwairi, Saleh M. Banihani, Ayat S. Banihani, Ziad M. Bataineh
Abstract:
Neurons in the spinal trigeminal nucleus of the camel were studied by Golgi impregnation. Neurons were classified based on differences in the size and shape of their cell bodies, the density of their dendritic trees, and the morphology and distribution of their appendages. In the spinal trigeminal nucleus of the camel, at least twelve types of neurons were identified. These neurons include stalked, islet, octopus-like, lobulated, boat-like, pyramidal, multipolar, round, oval and elongated neurons. They have a large number of different forms of appendages, not only on their dendrites but also on their cell bodies. Neurons with unique large dilatations, especially at their dendritic branching points, were found. The morphological features of these neurons were described and compared with their counterparts in other species. The finding of a large number of neuronal types with different sizes and shapes, a large number of different forms of appendages on cell bodies and dendrites, and cells with unique features such as large dilated dendritic segments may indicate very complex information processing for pain and temperature at the level of the spinal trigeminal nucleus in the camel, an animal that traditionally lives in a very harsh environment (the desert).
Keywords: camel, Golgi, neurons, spinal trigeminal nucleus
Procedia PDF Downloads 341
6108 Inhibition of Variant Surface Glycoproteins Translation to Define the Essential Features of the Variant Surface Glycoprotein in Trypanosoma brucei
Authors: Isobel Hambleton, Mark Carrington
Abstract:
Trypanosoma brucei, the causative agent of a range of diseases in humans and livestock, evades the mammalian immune system through a population survival strategy based on the expression of a series of antigenically distinct variant surface glycoproteins (VSGs). RNAi-mediated knockdown of the active VSG gene triggers a precytokinesis cell cycle arrest. To determine whether this phenotype is the result of reduced VSG transcript or depleted VSG protein, we used morpholino antisense oligonucleotides to block translation of VSG mRNA. The same precytokinesis cell cycle arrest was observed, suggesting that VSG protein abundance is monitored closely throughout the cell cycle. An inducible expression system has been developed to test various GPI-anchored proteins for their ability to rescue this cell cycle arrest. This system has been used to demonstrate that wild-type VSG expressed from a T7 promoter rescues the phenotype. This indicates that VSG expression from one of the specialised bloodstream expression sites (BES) is not essential for cell division. The same approach has been used to define the minimum essential features of a VSG necessary for function.
Keywords: bloodstream expression site, morpholino, precytokinesis cell cycle arrest, variant surface glycoprotein
Procedia PDF Downloads 150
6107 Content Based Video Retrieval System Using Principal Object Analysis
Authors: Van Thinh Bui, Anh Tuan Tran, Quoc Viet Ngo, The Bao Pham
Abstract:
Video retrieval is the problem of searching for videos or clips whose content is relatively close to an input image or video. Applications of such retrieval include selecting videos in a folder or recognizing a person in security camera footage. However, recent approaches remain challenged by the diversity of video types, frame transitions and camera positions. Moreover, selecting an appropriate similarity measure for the problem is itself an open question. To overcome these obstacles, we propose a content-based video retrieval system with a few main steps that result in good performance. From a source video, we extract keyframes and principal objects using the Segmentation of Aggregating Superpixels (SAS) algorithm. After that, Speeded Up Robust Features (SURF) are extracted from those principal objects. Then, a 'bag-of-words' model accompanied by SVM classification is applied to obtain the retrieval result. Our system was evaluated on over 300 videos spanning music, history, movies, sports, natural scenes and TV programs. Its performance compares promisingly with that of other approaches.
Keywords: video retrieval, principal objects, keyframe, segmentation of aggregating superpixels, speeded up robust features, bag-of-words, SVM
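The bag-of-words step can be sketched as clustering local descriptors into a visual vocabulary and encoding each video as a word histogram for an SVM. This is a minimal scikit-learn illustration, not the authors' code; the descriptor dimensionality and vocabulary size are arbitrary assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(all_descriptors, k=4, seed=0):
    # Cluster pooled local descriptors (e.g. SURF vectors) into k visual words.
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit(all_descriptors)

def bow_histogram(descriptors, vocab):
    # Encode one video's descriptors as a normalized visual-word histogram.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)
```

The resulting fixed-length histograms can then be fed to `SVC` for classification, regardless of how many descriptors each video produced.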
Procedia PDF Downloads 301
6106 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin
Authors: Triveni Gogoi, Rima Chatterjee
Abstract:
Shear wave velocity (Vs) estimation is an important step in the seismic exploration and characterization of a hydrocarbon reservoir. There are various methods for predicting S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. Shear wave velocity can be estimated from P-wave velocity by applying Castagna's equation, which is the most common approach; the constants in Castagna's equation vary for different lithologies and geological set-ups. In this study, multiple regression analysis has been used for the estimation of S-wave velocity. The EMERGE module of the Hampson-Russell software has been used for generation of the S-wave log. Both single-attribute and multi-attribute analyses have been carried out for generation of a synthetic S-wave log in the Upper Assam basin. The Upper Assam basin, situated in North East India, is one of the most important petroleum provinces of India. The present study was carried out using four wells of the study area; S-wave velocity was available for three of them. The main objective of the present study is the prediction of shear wave velocities for wells where S-wave velocity information is not available. The three wells having S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Barail group of Oligocene age. The Barail Group, the primary producing reservoir of the basin, is the main target zone of this study. A system-generated list of attributes with varying degrees of correlation was produced, and the attribute with the highest correlation was selected for the single-attribute analysis. Crossplots between the attributes show the scatter of points about the line of best fit.
The final result of the analysis was compared with the available S-wave log and shows a good visual fit, with a correlation of 72%. Next, multi-attribute analysis was carried out on the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log of the analysis and the recorded S-wave. The near-perfect fit between the synthetic and recorded S-wave logs validates the reliability of the method. For further authentication, the generated S-wave data from the wells were tied to the seismic and correlated. A synthetic shear wave log was generated for well M2, where the S-wave is not available, and it shows a good correlation with the seismic. Neutron porosity, density, AI and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. The multilinear regression method can thus be considered a reliable technique for generating a shear wave velocity log in this study.
Keywords: Castagna's equation, multilinear regression, multi-attribute analysis, shear wave logs
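Castagna's equation (the 'mudrock line', Vs = 0.8621·Vp − 1.1724 in km/s, with coefficients that vary by lithology) and the multilinear regression step can be sketched as follows. The attribute set and array shapes are illustrative assumptions, not the EMERGE implementation:

```python
import numpy as np

def castagna_vs(vp_km_s, a=0.8621, b=-1.1724):
    """Castagna's mudrock line: Vs = a*Vp + b (both in km/s).
    The default coefficients are for mudrock; a and b must be
    recalibrated for other lithologies and geological set-ups."""
    return a * vp_km_s + b

def fit_multiattribute_vs(attributes, vs):
    """Multilinear regression of Vs on well-log attributes.
    attributes: (n, m) samples of m logs (e.g. neutron porosity,
    density, AI, Vp); vs: (n,) recorded shear velocities."""
    X = np.column_stack([attributes, np.ones(len(attributes))])  # add intercept
    coeffs, *_ = np.linalg.lstsq(X, vs, rcond=None)
    return coeffs

def predict_vs(attributes, coeffs):
    X = np.column_stack([attributes, np.ones(len(attributes))])
    return X @ coeffs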
Procedia PDF Downloads 229
6105 A Comprehensive Methodology for Voice Segmentation of Large Sets of Speech Files Recorded in Naturalistic Environments
Authors: Ana Londral, Burcu Demiray, Marcus Cheetham
Abstract:
Speech recording is a methodology used in many studies of cognition and behaviour. Modern advances in digital equipment have brought the possibility of continuously recording hours of speech in naturalistic environments and building rich sets of sound files. Speech analysis can then extract multiple features from these files for different research questions in language and communication. However, tools for analysing a large set of sound files and automatically extracting relevant features are often inaccessible to researchers who are not familiar with programming languages, and manual analysis, the common alternative, carries a high cost in time and efficiency. In the analysis of long sound files, the first step is voice segmentation, i.e. detecting and labelling segments containing speech. We present a comprehensive methodology to support researchers in voice segmentation as the first step in the analysis of a big set of sound files. Praat, an open-source software package, is suggested as a tool to run a voice detection algorithm, label segments and files, and extract other quantitative features over a structure of folders containing a large number of sound files. We validated our methodology on a set of 5000 sound files collected in the daily life of a group of voluntary participants aged over 65. A smartphone was used to collect sound using the Electronically Activated Recorder (EAR): an app programmed to record 30-second sound samples randomly distributed throughout the day. Results demonstrated that automatic segmentation and labelling of files containing speech segments was 74% faster than a manual analysis performed by two independent coders. Furthermore, the methodology allows manual adjustment of voiced segments with visualisation of the sound signal, and the automatic extraction of quantitative information on speech.
In conclusion, we propose a comprehensive methodology for voice segmentation, to be used by researchers who must work with large sets of sound files and are not familiar with programming tools.
Keywords: automatic speech analysis, behavior analysis, naturalistic environments, voice segmentation
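As a rough illustration of the first step, voiced frames can be flagged by thresholding frame energy. This is a crude stand-in for Praat's intensity-based voice detection, with assumed frame length and threshold values:

```python
import numpy as np

def voiced_segments(signal, sr, frame_ms=30, threshold_db=-35.0):
    """Label frames whose RMS energy exceeds a threshold as voiced.

    signal: 1-D array of samples in [-1, 1]; sr: sample rate in Hz.
    Returns a boolean mask, one flag per non-overlapping frame.
    A real pipeline (as in Praat) would add smoothing, pitch checks,
    and merging of short gaps between voiced frames.
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    db = 20 * np.log10(np.maximum(rms, 1e-10))   # avoid log(0) on silence
    return db > threshold_db
```

The frame mask can then be converted to start/end timestamps for labelling segments in a TextGrid-style annotation.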
Procedia PDF Downloads 281
6104 Unveiling Karst Features in Miocene Carbonate Reservoirs of Central Luconia-Malaysia: Case Study of F23 Field's Karstification
Authors: Abd Al-Salam Al-Masgari, Haylay Tsegab, Ismailalwali Babikir, Monera A. Shoieb
Abstract:
We present a study of Malaysia's Central Luconia region, an essential province of Miocene carbonate reservoirs. This study aims to identify and map karstified areas of selected carbonate platforms, develop high-resolution statistical karst models, and generate comprehensive karst geobody models for selected carbonate fields. The study uses seismic characterization and advanced geophysical surveys to identify karst signatures in Miocene carbonate reservoirs. The results highlight the use of variance, RMS amplitude, RGB colour blending and 3D visualization seismic attributes, together with seismic sequence stratigraphy, to visualize the karstified areas across the F23 field of Central Luconia. The offshore karst model serves as a powerful visualization tool to reveal the karstification of the carbonate sediments of interest. The results of this study contribute to a better understanding of the karst distribution of Miocene carbonate reservoirs in Central Luconia, which is essential for hydrocarbon exploration and production, because these features significantly impact the reservoir geometry, flow paths and characteristics.
Keywords: karst, Central Luconia, seismic attributes, Miocene carbonate build-ups
Procedia PDF Downloads 71
6103 Multi-Criteria Decision-Making Evaluations for Oily Waste Management of Marine Oil Spill
Authors: Naznin Sultana Daisy, Mohammad Hesam Hafezi, Lei Liu
Abstract:
Nowadays, oily solid waste management has become an important issue for many countries due to frequent oil spill accidents and the increase of industrial oily wastewater. Historical oil spill data show that marine oil spills affecting the shoreline can, in extreme cases, produce 30 to 40 times more waste than the volume of oil initially released. Hence, responding authorities aim to develop the most effective oily waste management solution in a timely manner to manage and minimize the waste generated. In this study, we first developed a roadmap of oily waste management for three-tiered spill scenarios in Atlantic Canada. For that purpose, three oily waste disposal scenarios were evaluated against six criteria determined according to the opinions of experts in the field. Consequently, the most appropriate and feasible scenario under sustainable response strategies was determined. The results of this study will assist in developing an integrated oily waste management system that identifies optimal waste-generation-allocation-disposal schemes and generates optimal management alternatives based on holistic consideration of environmental, technological, economic, social, and regulatory factors.
Keywords: oily waste management, marine oil spill, multi-criteria decision making, oil spill response
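A weighted-sum score is one of the simplest multi-criteria decision-making schemes for ranking such scenarios against expert-weighted criteria. The abstract does not name the MCDM method used, so this sketch is purely illustrative:

```python
import numpy as np

def rank_scenarios(scores, weights, benefit):
    """Weighted-sum MCDM ranking (illustrative, not the authors' method).

    scores: (n_scenarios, n_criteria) raw criterion values.
    weights: (n_criteria,) expert weights summing to 1.
    benefit: (n_criteria,) True for criteria to maximize (e.g. recovery
    rate), False for criteria to minimize (e.g. cost, emissions).
    Returns scenario indices ordered best-first.
    """
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(axis=0), s.max(axis=0)
    norm = (s - lo) / np.where(hi > lo, hi - lo, 1.0)   # min-max to [0, 1]
    norm = np.where(benefit, norm, 1.0 - norm)          # flip cost criteria
    totals = norm @ np.asarray(weights)
    return np.argsort(totals)[::-1]
```

Other common choices (AHP, TOPSIS, PROMETHEE) differ mainly in how the normalization and aggregation steps are defined.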
Procedia PDF Downloads 137
6102 Effective Infection Control Measures to Prevent Transmission of Multi-Drug Resistant Organisms from Burn Transfer Cases in a Regional Burn Centre
Authors: Si Jack Chong, Chew Theng Yap, Wan Loong James Mok
Abstract:
Introduction: Regional burn centres face the spectre of multi-drug resistant organisms (MDRO) introduced by transfer patients resident in MDRO-endemic countries. MDRO can cause severe nosocomial infections, which in massive burn patients lead to greater morbidity and mortality and strain the institution financially. We aim to highlight four key measures that have effectively prevented transmission of imported MDRO. Methods: A case of Candida auris (C. auris) in a massive burn patient transferred from an MDRO-endemic country is used to illustrate the measures. C. auris is a globally emerging multi-drug resistant fungal pathogen causing nosocomial transmission. Results: The infection control measures used to mitigate the risk of an outbreak from transfer cases are: (1) a multidisciplinary team approach involving Infection Control and Infectious Disease specialists early, to ensure appropriate antibiotic use and implementation of barrier measures; (2) aseptic procedures for dressing changes, with strict isolation and donning of personal protective equipment in the ward; (3) early screening of massive burn patients from MDRO-endemic regions; (4) hydrogen peroxide vaporization terminal cleaning for operating theatres and rooms. Conclusion: Given the prevalence of air travel and international transfer to regional burn centres, effective infection control measures are needed to reduce the risk of transmission from imported massive burn patients. In our centre, we have effectively implemented four measures which have reduced the risk of local contamination. We share a recent case report to illustrate successful management of a potential MDRO outbreak resulting from the transfer of a massive burn patient resident in an MDRO-endemic area.
Keywords: burns, burn unit, cross infection, infection control
Procedia PDF Downloads 150
6101 Multi-Objective Optimization of Run-of-River Small-Hydropower Plants Considering Both Investment Cost and Annual Energy Generation
Authors: Amèdédjihundé H. J. Hounnou, Frédéric Dubas, François-Xavier Fifatin, Didier Chamagne, Antoine Vianou
Abstract:
This paper presents the techno-economic evaluation of run-of-river small-hydropower plants. A multi-objective optimization procedure is proposed for the optimal sizing of the hydropower plants, with NSGA-II employed as the optimization algorithm. Annual generated energy and investment cost are considered as the objective functions, while the number of generator units (n) and the nominal turbine flow rate (QT) constitute the decision variables. The site of Yeripao in Benin is taken as the case study. We characterized the river at this site using its environmental characteristics: gross head, and the first quartile, median, third quartile and mean of flow. The effects of each decision variable on the objective functions are analysed. The results give a Pareto front representing the trade-offs between annual energy generation and the investment cost of the hydropower plant, as well as the recommended optimal solutions. We note that as annual energy generation increases, the investment cost rises; maximizing energy generation therefore conflicts with minimizing the investment cost. Moreover, the solutions on the Pareto front are grouped according to the number of generator units (n). The results also show that the costs per kWh are grouped according to n and rise with increasing nominal turbine flow rate. The lowest investment costs per kWh are obtained for n equal to one and lie between 0.065 and 0.180 €/kWh. For each value of n (equal to 1, 2, 3 or 4), the investment cost and the investment cost per kWh increase almost linearly with increasing nominal turbine flow rate, while the annual generated energy increases logarithmically with it. This study, made for the Yeripao river, can be applied to other rivers with their own characteristics.
Keywords: hydropower plant, investment cost, multi-objective optimization, number of generator units
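The Pareto front mentioned above consists of the non-dominated (cost, energy) designs: no other design is both cheaper and more productive. A brute-force dominance filter illustrates the idea (NSGA-II itself adds non-dominated sorting, crowding distance, and genetic operators):

```python
def pareto_front(cost, energy):
    """Return indices of non-dominated designs: minimize cost, maximize energy.

    Design i is dominated if some design j is no worse on both objectives
    and strictly better on at least one.
    """
    front = []
    for i in range(len(cost)):
        dominated = any(
            cost[j] <= cost[i] and energy[j] >= energy[i]
            and (cost[j] < cost[i] or energy[j] > energy[i])
            for j in range(len(cost))
        )
        if not dominated:
            front.append(i)
    return front
```

For instance, a design with the same cost as another but lower annual energy is filtered out, while designs trading cost against energy all remain on the front.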
Procedia PDF Downloads 157
6100 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis
Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy
Abstract:
Cervical cancer (CC) is the second most common cancer among women living in low- and middle-income countries, with no associated symptoms during its formative stages. Despite advances in innovative medical research and the numerous preventive measures now utilized, the incidence of cervical cancer cannot be curbed by screening tests alone; the mortality associated with invasive cervical cancer can, however, be nipped in the bud through early-stage detection. This study applied an array of top feature-selection techniques with the aim of developing a model that can validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was implemented in the R and Python programming languages, utilizing classification algorithms for the detection and diagnosis of cervical cancer disease. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The dataset was split into training and testing sets in an 80:20 ratio; the numerical features were standardized, and hyperparameter tuning was carried out on the machine learning models during training and testing.
Fitting features for the detection and diagnosis of cervical cancer were selected from the dataset using various selection methods, for the classification of cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, mean age at pregnancy 23.3±5.5 years, mean age at first sexual experience 19.4±3.2 years, and mean BMI 27.1±5.6 kg/m2. A larger percentage of the patients were married (62.9%), and most had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. On the top ten features (variables) considered in the analysis, RF gave the best overall model performance, with an accuracy of 72.0%, precision of 84.6%, recall of 84.6% and F1-score of 74.0%, while LR had an accuracy of 74.0%, precision of 70.0%, recall of 70.0% and F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer: the age of patients was the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV who are at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
Keywords: associated cervical cancer, data mining, random forest, logistic regression
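The 80:20 split, standardization, and classifier comparison described above can be sketched with scikit-learn. This is an illustrative pipeline on synthetic data, not the study's code; the hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def fit_and_score(X, y, seed=0):
    """80:20 stratified split with standardized numeric features,
    comparing LR and RF test accuracy as in the abstract."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed, stratify=y)
    scaler = StandardScaler().fit(X_tr)          # fit on training data only
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    models = {
        "LR": LogisticRegression(max_iter=1000),
        "RF": RandomForestClassifier(n_estimators=200, random_state=seed),
    }
    return {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

In practice the same split would feed precision, recall, and F1 computations (e.g. via `sklearn.metrics.classification_report`) alongside accuracy.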
Procedia PDF Downloads 83
6099 Multiscale Computational Approach to Enhance the Understanding, Design and Development of CO₂ Catalytic Conversion Technologies
Authors: Agnieszka S. Dzielendziak, Lindsay-Marie Armstrong, Matthew E. Potter, Robert Raja, Pier J. A. Sazio
Abstract:
Reducing carbon dioxide (CO₂) emissions is one of the greatest global challenges. Conversion of CO₂ for utilisation across the synthetic fuel, pharmaceutical, and agrochemical industries offers a promising option, yet requires significant research to understand the complex multiscale processes involved. Experimentally understanding and optimizing such processes at the catalytic site, and exploring their impact at the reactor scale, is too expensive. Computational methods offer significant insight and flexibility but require a more detailed multi-scale approach, which is a significant challenge in itself. This work introduces a computational approach that incorporates detailed catalytic models, taken from experimental investigations, into a larger-scale computational fluid dynamics framework. The reactor-scale species transport approach is modified near the catalytic walls to determine the influence of catalytic clustering regions. This coupled approach enables more accurate modelling of velocities, pressures, temperatures, species concentrations and near-wall surface characteristics, which will ultimately reveal the impact of overall reactor design on chemical conversion performance.
Keywords: catalysis, CCU, CO₂, multi-scale model
Procedia PDF Downloads 253
6098 Critical Factors for Successful Adoption of Land Value Capture Mechanisms – An Exploratory Study Applied to Indian Metro Rail Context
Authors: Anjula Negi, Sanjay Gupta
Abstract:
The paradigms studied reveal inadequate financial resources, whether to finance metro rail construction, to meet operational revenues, or to derive profits in the long term. Funding sustainability remains elusive for much-needed public transport modes, like urban rail or metro rails, to be successfully operated. India is embarking on a sustainable transport journey and has proposed metro rail systems countrywide. As an emerging economic leader, its fiscal constraints are paramount, and the land value capture (LVC) mechanism provides necessary support and innovation toward development. India's metro rail policy promotes multiple methods of financing, including private-sector investment and public-private partnership. The critical question that remains to be addressed is what factors can make such mechanisms work. Globally, urban rail is noted by many researchers as a revolution in future mobility. In this study, the researchers dive deep, by way of literature review and empirical assessment, into the factors that can lead to the adoption of LVC mechanisms. The adoption of LVC methods is understood to be at a nascent stage in India. Research posits numerous challenges faced by metro rail agencies in raising funding and capturing incremental value. A few issues pertaining to land-based financing are, inter alia: long-term financing, inter-institutional coordination, economic/market suitability, dedicated metro funds, land ownership issues, a piecemeal approach to real estate development, property development legal frameworks, etc. The question under probe is what parameters can lead to success in the adoption of land value capture (LVC) as a financing mechanism. This research provides insights into key parameters crucial to the adoption of LVC in the context of Indian metro rails. The researchers have studied current forms of LVC mechanisms at various metro rails of the country.
This study is significant because little research is available on the adoption of LVC as applicable to the Indian context. Transit agencies, state governments, urban local bodies, policy makers and think tanks, academia, developers, funders, researchers and multi-lateral agencies may benefit from this research in taking LVC mechanisms forward in practice. The study deems it imperative to explore and understand the key parameters that impact the adoption of LVC. An extensive literature review, with ratification by experts working in the metro rail arena, was undertaken to arrive at the parameters for the study. Stakeholder consultations were undertaken in an exploratory factor analysis (EFA) process for principal component extraction. Forty-three seasoned and specialized experts, representing various types of stakeholders, participated in a semi-structured questionnaire to scale the maximum likelihood on each parameter. Empirical data were collected on the eighteen chosen parameters, and significant correlations were extracted for output descriptives and inferential statistics. The study findings reveal the principal components as: institutional governance framework, spatial planning features, legal frameworks, funding sustainability features and fiscal policy measures. In particular, the funding sustainability features highlight the sub-variables 'beneficiaries to pay' and 'use of multiple revenue options' as keys to success in LVC adoption. The researchers recommend incorporating these variables at an early stage in design and project structuring for success in the adoption of LVC, in turn improving the revenue sustainability of a public transport asset and supporting informed transport policy decisions.
Keywords: exploratory factor analysis, land value capture mechanism, financing metro rails, revenue sustainability, transport policy
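Principal component extraction from standardized questionnaire responses can be sketched via eigendecomposition of the correlation matrix. This is only an illustration of the EFA step (a real EFA adds factor rotation such as varimax, plus retention criteria like eigenvalue > 1 and scree inspection); the data here are synthetic:

```python
import numpy as np

def principal_components(responses, n_components=5):
    """Extract principal components from Likert-type survey responses.

    responses: (n_respondents, n_items) array.
    Returns eigenvalues (descending) of the item correlation matrix and
    the (n_items, n_components) loading matrix.
    """
    X = np.asarray(responses, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize items
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]            # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
    return eigvals, loadings
```

Items that load heavily on the same component would then be interpreted together, e.g. as a 'funding sustainability' factor.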
Procedia PDF Downloads 81
6097 Electron Beam Melting Process Parameter Optimization Using Multi Objective Reinforcement Learning
Authors: Michael A. Sprayberry, Vincent C. Paquit
Abstract:
Process parameter optimization in metal powder bed electron beam melting (MPBEBM) is crucial to ensure the technology's repeatability, control, and continued industrial adoption. Despite continued efforts to address the challenges via traditional design of experiments and process mapping techniques, there has been little success in providing an on-the-fly optimization framework that can be adapted to MPBEBM systems. Additionally, data-intensive physics-based modeling and simulation methods are difficult to sustain for every metal AM alloy or system due to cost restrictions. To mitigate the challenge of resource-intensive experiments and models, this paper introduces a Multi-Objective Reinforcement Learning (MORL) methodology, defined as an optimization problem for MPBEBM. An off-policy MORL framework based on policy gradients is proposed to discover optimal sets of beam power (P) and beam velocity (v) combinations that maintain a steady-state melt pool depth and phase transformation. For this, an experimentally validated Eagar-Tsai melt pool model is used to simulate the MPBEBM environment, where the beam acts as the agent across the P-v space to maximize returns in the uncertain powder bed environment, producing a melt pool and phase transformation closer to the optimum. The culmination of the training process yields a set of process parameters {power, speed, hatch spacing, layer depth, and preheat}, where the state (P, v) with the highest returns corresponds to a refined process parameter map. The resulting objectives and the mapping of returns onto the P-v space show convergence with experimental observations. The framework therefore provides a model-free multi-objective approach to discovery without the need for trial-and-error experiments.
Keywords: additive manufacturing, metal powder bed fusion, reinforcement learning, process parameter optimization
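A minimal stand-in for searching the P-v space is an epsilon-greedy bandit over a discrete grid with a scalarized multi-objective reward. This is not the paper's off-policy policy-gradient MORL framework, and the reward function used in the example is a toy assumption, not an Eagar-Tsai model:

```python
import numpy as np

def optimize_pv(reward_fn, powers, speeds, episodes=2000, eps=0.1, seed=0):
    """Epsilon-greedy search over a discrete (P, v) grid.

    reward_fn(P, v) returns a scalarized multi-objective reward, e.g. the
    negative distance of melt-pool depth and phase fraction from their
    targets. Returns the grid indices (i, j) of the best-valued state.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros((len(powers), len(speeds)))     # action-value estimates
    counts = np.zeros_like(q)
    for _ in range(episodes):
        if rng.random() < eps:                   # explore
            i, j = rng.integers(len(powers)), rng.integers(len(speeds))
        else:                                    # exploit current best
            i, j = np.unravel_index(np.argmax(q), q.shape)
        r = reward_fn(powers[i], speeds[j])
        counts[i, j] += 1
        q[i, j] += (r - q[i, j]) / counts[i, j]  # incremental mean update
    return np.unravel_index(np.argmax(q), q.shape)
```

A genuine MORL treatment would keep the objectives as a vector and learn a policy over the Pareto set rather than collapsing them into one scalar reward.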
Procedia PDF Downloads 90
6096 High-Resolution Computed Tomography Imaging Features during Pandemic 'COVID-19'
Authors: Sahar Heidary, Ramin Ghasemi Shayan
Abstract:
Since the emergence of the novel coronavirus (2019-nCoV) pneumonia, chest high-resolution computed tomography (HRCT) has been one of the main investigative tools. To achieve timely and accurate diagnosis, defining the radiological features of the infection is of great value. The purpose of this review was to examine the imaging manifestations of early-stage coronavirus disease 2019 (COVID-19) and to establish an imaging basis for the early detection of suspected cases and stratified intervention. The positive predictive value of HRCT was 85% and sensitivity was 73% for all patients; overall accuracy was 68%. There was no significant difference in these values between symptomatic and asymptomatic persons, and the results were likewise independent of the interval between the CT examination and the onset of symptoms or exposure. We therefore suggest that HRCT is an excellent adjunct for the early identification of COVID-19 pneumonia in both symptomatic and asymptomatic individuals, in addition to its role as a prognostic indicator for COVID-19 pneumonia. Patients underwent non-contrast HRCT chest examinations and images were reconstructed in a thin 1.25 mm lung window. Images were evaluated for the presence of lung lesions, and a CT severity score was assigned separately for each patient based on the number of lung lobes involved.
Keywords: COVID-19, radiology, respiratory diseases, HRCT
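As a reminder of how the quoted figures relate to a confusion matrix, a minimal helper (the counts in the usage example are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard confusion-matrix summaries quoted in the abstract:
    # PPV (positive predictive value), sensitivity, and overall accuracy.
    return {
        "ppv": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts only: 8 true positives, 2 false positives,
# 2 false negatives, 8 true negatives -> all three metrics = 0.8.
m = diagnostic_metrics(tp=8, fp=2, fn=2, tn=8)
```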
Procedia PDF Downloads 142
6095 Winter - Not Spring - Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise
Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry
Abstract:
Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species, like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (a country, 252 sites), temporal (25 years) and biological (25 species) scales. The Constant Effort Site ringing scheme has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 across metropolitan France, thus documenting the annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows for a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of the between-year variation in survival rate was common to all species (74% of between-year variance), whereas only 26% of the temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) do not impact adult survival: only 1% of the between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, birds surviving less during intermediate years than during more extreme years. For migratory birds, we do not detect an effect of winter climatic anomalies (Sahel rainfall).
We will analyze the life history traits (migration, habitat, thermal range) that could explain the differing sensitivities of species to winter climate anomalies. Overall, we conclude that changes in population sizes of passerine birds are unlikely to be the consequence of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, such as fecundity.
Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival
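The headline variance split (74% common vs. 26% species-specific) can be illustrated with a toy decomposition. This is not the paper's hierarchical Bayesian capture-recapture model, only a sketch of partitioning year effects into a component shared by all species and a species-specific residual:

```python
import random
import statistics

def variance_partition(effects):
    # effects[s][y]: survival anomaly of species s in year y.
    # The common component is the yearly mean across species;
    # what remains is species-specific. Returns the common-variance share.
    n_species = len(effects)
    n_years = len(effects[0])
    year_mean = [sum(effects[s][y] for s in range(n_species)) / n_species
                 for y in range(n_years)]
    common_var = statistics.pvariance(year_mean)
    resid = [effects[s][y] - year_mean[y]
             for s in range(n_species) for y in range(n_years)]
    resid_var = statistics.pvariance(resid)
    return common_var / (common_var + resid_var)

# Simulated data: a strong shared year effect plus small species noise,
# so most between-year variance should come out as common.
rng = random.Random(1)
common = [rng.gauss(0, 1) for _ in range(25)]
effects = [[c + rng.gauss(0, 0.3) for c in common] for _ in range(10)]
share = variance_partition(effects)
```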
Procedia PDF Downloads 303
6094 The Model Establishment and Analysis of TRACE/FRAPTRAN for Chinshan Nuclear Power Plant Spent Fuel Pool
Authors: J. R. Wang, H. T. Lin, Y. S. Tseng, W. Y. Li, H. C. Chen, S. W. Chen, C. Shih
Abstract:
TRACE was developed by the U.S. NRC for the safety analysis of nuclear power plants (NPPs). This research focuses on the establishment and application of TRACE/FRAPTRAN/SNAP models for the Chinshan NPP (BWR/4) spent fuel pool. The geometry of the spent fuel pool is 12.17 m × 7.87 m × 11.61 m. Three TRACE/SNAP models were built: a one-channel, a two-channel, and a multi-channel model. A failure of the spent fuel pool cooling system was then simulated and analyzed with each of these models. According to the analysis results, the peak cladding temperature response was more accurate in the multi-channel TRACE/SNAP model. The results show that uncovering of the fuel occurred 2.7 days after the cooling system failed. To estimate the detailed fuel rod performance, the FRAPTRAN code was used. According to the FRAPTRAN results, the highest cladding temperature occurred at node 21 of the fuel rod (of 23 axial nodes) and the cladding burst roughly 3.7 days after the failure.
Keywords: TRACE, FRAPTRAN, BWR, spent fuel pool
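A day-scale uncovering time of this kind is the sort of figure a lumped energy balance approximates. The sketch below assumes a decay heat, initial temperature, and above-fuel water fraction (none are from the paper) and is in no way a substitute for the TRACE thermal-hydraulic model:

```python
def time_to_uncover(volume_m3, decay_heat_mw, t0_c=40.0):
    # Lumped energy balance: heat the whole pool inventory to saturation,
    # then boil off the water sitting above the fuel. All process values
    # (decay heat, t0_c, above-fuel fraction) are illustrative assumptions.
    rho, cp, h_fg = 1000.0, 4186.0, 2.26e6   # kg/m3, J/(kg K), J/kg
    mass = rho * volume_m3
    q = decay_heat_mw * 1e6                  # W
    heatup_s = mass * cp * (100.0 - t0_c) / q
    boiloff_s = 0.5 * mass * h_fg / q        # assume half the water is above the fuel
    return (heatup_s + boiloff_s) / 86400.0  # days

# Pool footprint from the abstract, with an assumed 6 MW decay heat.
days = time_to_uncover(12.17 * 7.87 * 11.61, 6.0)
```

With these assumed inputs the estimate lands on the order of a few days, the same regime as the simulated 2.7 days; it shows the scale of the problem, not the code-level result.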
Procedia PDF Downloads 357
6093 The Development and Future of Hong Kong Typography
Authors: Amic G. Ho
Abstract:
Language usage and typography in Hong Kong are unique, as can be seen clearly on the streets of the city. In contrast to many other parts of the world, where there is only one language, in Hong Kong many signs and billboards display two languages: Chinese and English. The language usage on signage, the fonts and types used, and the designs in magazines and advertisements all demonstrate the unique features of Hong Kong typographic design, which reflect the multicultural nature of Hong Kong society. This study is the first step in investigating the nature and development of Hong Kong typography. The preliminary research explored how the historical development of Hong Kong is reflected in its unique typography. Following a review of historical development, a quantitative study was designed: local Hong Kong participants were invited to provide input on what makes the Hong Kong typographic style unique. Their input was collected and analyzed. This provided information about the characteristic criteria and features of Hong Kong typography, as recognized by local people. The most significant typographic designs in Hong Kong were then investigated and the influence of Chinese and other cultures on Hong Kong typography was assessed. The research results indicate to local designers how they can strengthen local design outcomes and promote the values and culture of their home city.
Keywords: typography, Hong Kong, historical developments, multiple cultures
Procedia PDF Downloads 514
6092 Enhanced Photoelectrochemical Water Splitting Coupled with Pharmaceutical Pollutants Degradation on Zr:BiVO4 Photoanodes by Synergetic Catalytic Activity of NiFeOOH Nanostructures
Authors: Mabrook Saleh Amer, Prabhakarn Arunachalam, Maged N. Shaddad, Abdulhadi Al-Qadi
Abstract:
Global energy crises and water pollution have negatively impacted sustainable development in recent years. Bismuth vanadate (BiVO4) is one of the most promising electrode materials for photoelectrocatalytic (PEC) water oxidation and pollutant degradation. However, BiVO4 anodes suffer from poor charge separation and slow water oxidation kinetics. In this paper, a Zr:BiVO4/NiFeOOH heterojunction was successfully prepared by electrodeposition followed by a photoelectrochemical transformation process. The method resulted in a notable 5-fold improvement in photocurrent (1.27 mA cm⁻² at 1.23 V vs. RHE) and a lower onset potential of 0.6 V vs. RHE. The high photocatalytic activity and photocorrosion resistance of the photoanodes may be attributed to the high conformality and amorphous nature of the coating. In this study, PEC degradation was compared to electrocatalysis (EC), and the effect of bias potential on PEC degradation was examined for tetracycline (TCH), riboflavin, and streptomycin. In PEC mode, TCH was degraded most efficiently (96%) by Zr:BiVO4/NiFeOOH, roughly three times the efficiency of Zr:BiVO4 and well above EC (55%). This study thus offers a potential route to coupling PEC water oxidation with water pollution treatment.
Keywords: photoelectrochemical, water splitting, pharmaceutical pollutants degradation, photoanodes, cocatalyst
Procedia PDF Downloads 54
6091 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in software code auditing is the presence of vulnerabilities in source code. Every year, more software flaws are found, either internally in proprietary code or disclosed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Large bodies of open-source C and C++ code are now available, making it possible to build a large-scale machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions annotated for potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove irrelevant components and shorten dependencies. We preserve semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time were used to measure performance. We made a comparative analysis between results derived from features containing only the minimal text representation and features that also carry semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as features, but require longer execution time, since the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
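A minimal intermediate representation of the kind the abstract describes can be sketched as identifier normalization: user-defined names are canonicalized so the model learns code structure rather than project-specific vocabulary. The FUN/VAR naming scheme and the keyword/call heuristic below are assumptions for illustration, not the paper's exact representation:

```python
import re

C_KEYWORDS = {"int", "char", "if", "else", "for", "while", "return",
              "void", "sizeof", "struct", "unsigned", "long"}

def to_minimal_ir(code):
    # Map each user identifier to a canonical symbol: names followed by
    # '(' become FUN1, FUN2, ...; everything else becomes VAR1, VAR2, ...
    # C keywords pass through unchanged.
    fun_map, var_map, out, pos = {}, {}, [], 0
    for m in re.finditer(r"[A-Za-z_]\w*", code):
        out.append(code[pos:m.start()])
        tok = m.group()
        if tok in C_KEYWORDS:
            out.append(tok)
        elif code[m.end():m.end() + 1] == "(":
            out.append(fun_map.setdefault(tok, f"FUN{len(fun_map) + 1}"))
        else:
            out.append(var_map.setdefault(tok, f"VAR{len(var_map) + 1}"))
        pos = m.end()
    out.append(code[pos:])
    return "".join(out)
```

For example, `to_minimal_ir("strcpy(buf, src);")` yields `"FUN1(VAR1, VAR2);"`, which keeps the call shape a buffer-overflow detector cares about while discarding the surface names.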
Procedia PDF Downloads 89
6090 The Determinants of Country Corruption: Unobserved Heterogeneity and Individual Choice - An Empirical Application with Finite Mixture Models
Authors: Alessandra Marcelletti, Giovanni Trovato
Abstract:
Corruption in public offices is found to reflect country-specific features; however, the exact magnitude and statistical significance of the effects of its determinants have not yet been identified. This paper proposes an estimation method to measure the impact of country fundamentals on corruption, showing that covariates can affect the extent of corruption differently across countries. We exploit a model able to take into account the different factors affecting the incentive to ask for, or to be asked for, a bribe, consistently with the use of the Corruption Perception Index. We assume that the discordant results in the literature may be explained by omitted hidden factors affecting agents' decision processes. Moreover, assuming a homogeneous effect of covariates may lead to unreliable conclusions, since the country-specific environment is not accounted for. We apply a Finite Mixture Model with concomitant variables to 129 countries from 1995 to 2006, accounting for the impact of initial conditions in the socio-economic structure on corruption patterns. Our findings confirm the hypothesis that the decision process of accepting or asking for a bribe varies with specific country fundamentals.
Keywords: corruption, finite mixture models, concomitant variables, country classification
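The core idea of a finite mixture model — unobserved group membership estimated jointly with group-specific parameters — can be sketched with plain EM for a two-component Gaussian mixture. This one-dimensional sketch omits the concomitant variables and panel structure used in the paper:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_component(xs, iters=100):
    # EM for a two-component Gaussian mixture; the latent component plays
    # the role of the unobserved country group in a finite mixture model.
    mu1, mu2 = min(xs), max(xs)
    s1 = s2 = (max(xs) - min(xs)) / 4
    pi = 0.5
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        r = []
        for x in xs:
            p1 = pi * normal_pdf(x, mu1, s1)
            p2 = (1 - pi) * normal_pdf(x, mu2, s2)
            r.append(p1 / (p1 + p2))
        # M-step: weighted updates of the mixing weight, means and sds
        n1 = sum(r)
        n2 = len(xs) - n1
        pi = n1 / len(xs)
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / n2
        s1 = max(math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, xs)) / n1), 1e-3)
        s2 = max(math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, xs)) / n2), 1e-3)
    return pi, (mu1, s1), (mu2, s2)
```

In the full model, the mixing weight pi would itself be a function of the concomitant variables (the initial socio-economic conditions), so that group membership depends on country fundamentals.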
Procedia PDF Downloads 264
6089 Immobilization of Cobalt Ions on f-Multi-Wall Carbon Nanotubes-Chitosan Thin Film: Preparation and Application for Paracetamol Detection
Authors: Shamima Akhter, Samira Bagheri, M. Shalauddin, Wan Jefrey Basirun
Abstract:
In the present study, a nanocomposite of f-MWCNTs-chitosan was prepared by the immobilization of Co(II) transition metal ions through a self-assembly method and used for the voltammetric determination of paracetamol (PA). The composite material was characterized by field emission scanning electron microscopy (FESEM) and energy dispersive X-ray analysis (EDX). The electroactivity of the cobalt-immobilized f-MWCNTs, combined with the highly adsorptive polymer chitosan, was assessed during the electro-oxidation of paracetamol. The resulting f-MWCNTs/CTS-Co-modified glassy carbon electrode (GCE) showed electrocatalytic activity towards the oxidation of PA. The electrochemical performance was investigated using cyclic voltammetry (CV), electrochemical impedance spectroscopy (EIS) and differential pulse voltammetry (DPV). Under favorable experimental conditions, differential pulse voltammetry showed a linear dynamic range for paracetamol of 0.1 to 400 µmol L⁻¹ with a detection limit of 0.01 µmol L⁻¹. The proposed sensor exhibited significant selectivity for paracetamol detection. The proposed method was successfully applied to the determination of paracetamol in commercial tablets and a human serum sample.
Keywords: nanomaterials, paracetamol, electrochemical technique, multi-wall carbon nanotube
Procedia PDF Downloads 201
6088 Radar Fault Diagnosis Strategy Based on Deep Learning
Authors: Bin Feng, Zhulin Zong
Abstract:
Radar systems are critical in modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to learn features and patterns automatically from large amounts of data. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and then classifies faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals containing various types of faults, and the results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems.
In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
Keywords: radar system, fault diagnosis, deep learning, radar fault
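The CNN feature extraction step reduces, at its core, to discrete convolution over the signal. A minimal 1-D sketch (the kernel and signal are illustrative, not radar data or the paper's network):

```python
def conv1d(signal, kernel, stride=1):
    # Single 1-D convolution with 'valid' padding: the basic feature
    # extractor a CNN slides across a raw signal.
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def relu(xs):
    # Standard nonlinearity applied to the resulting feature map.
    return [max(0, x) for x in xs]

# A [1, -1] difference kernel responds at the edges of a pulse:
feature_map = relu(conv1d([0, 0, 1, 1, 1, 0, 0], [1, -1]))
```

A real CNN stacks many such learned kernels with pooling and dense layers; this only shows the primitive operation.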
Procedia PDF Downloads 90
6087 Applications of Multi-Path Futures Analyses for Homeland Security Assessments
Authors: John Hardy
Abstract:
A range of future-oriented intelligence techniques is commonly used by states to assess their national security and develop strategies to detect and manage threats, to develop and sustain capabilities, and to recover from attacks and disasters. Although homeland security organizations use futures intelligence tools to generate scenarios and simulations which inform their planning, there have been relatively few studies of the methods available or their applications for homeland security purposes. This study presents an assessment of one category of strategic intelligence techniques, termed Multi-Path Futures Analyses (MPFA), and how it can be applied to three distinct tasks for the purpose of analyzing homeland security issues. Within this study, MPFA are categorized as a suite of analytic techniques which can include effects-based operations principles, general morphological analysis, multi-path mapping, and multi-criteria decision analysis techniques. These techniques generate multiple pathways to potential futures and thereby provide insight into the relative influence of individual drivers of change, the desirability of particular combinations of pathways, and the kinds of capabilities which may be required to influence or mitigate certain outcomes. The study assessed eighteen uses of MPFA for homeland security purposes and found five key applications that add significant value to analysis. The first application is generating measures of success and associated progress indicators for strategic planning. The second is identifying homeland security vulnerabilities and the relationships between individual drivers of vulnerability which may amplify or dampen their effects. The third is selecting appropriate resources and methods of action to influence individual drivers. The fourth is prioritizing and optimizing path selection preferences and decisions.
The fifth application is informing capability development and procurement decisions to build and sustain homeland security organizations. Each of these applications provides a unique perspective on a homeland security issue by comparing a range of potential future outcomes at a set number of intervals and by contrasting the relative resource requirements, opportunity costs, and effectiveness measures of alternative courses of action. These findings indicate that MPFA enhances analysts' ability to generate tangible measures of success, identify vulnerabilities, select effective courses of action, prioritize future pathway preferences, and contribute to ongoing capability development in homeland security assessments.
Keywords: homeland security, intelligence, national security, operational design, strategic intelligence, strategic planning
Procedia PDF Downloads 139
6086 Learning Chinese Suprasegmentals for a Better Communicative Performance
Authors: Qi Wang
Abstract:
Chinese has become a powerful world language, and millions of learners are studying it all over the world. Chinese is a tone language with unique meaningful characters, which makes it more difficult for foreign learners to master. As with any foreign language, learners of Chinese first study the basic Chinese sound structure (the initials and finals, tones, neutral tone, and tone sandhi). It is quite common that in subsequent studies teachers put great effort into drilling and error correction, in order to help students pronounce correctly, but neglect the training of suprasegmental features (e.g., stress, intonation). This paper analyzes oral data from our graduating students (two-year program) from 2006-2013 and presents the typical intonation pattern of our graduates speaking Chinese as a second language: high and flat, with heavy accents, and without lexical stress, appropriate stop endings, or intonation, which leads to misunderstanding in real contexts of communication and in the official international Chinese tests, e.g., HSK (Chinese Proficiency Test) and HSKK (HSK Speaking Test). This paper also demonstrates how Chinese speakers use suprasegmental features strategically in different functions and moods (declarative, interrogative, imperative, exclamatory and rhetorical intonations), in order to train learners to achieve better communicative performance.
Keywords: second language learning, suprasegmental, communication, HSK (Chinese Proficiency Test)
Procedia PDF Downloads 436
6085 Professional Working Conditions, Mental Health and Mobility in the Hungarian Social Sector: Preliminary Findings from a Multi-Method Study
Authors: Ágnes Győri, Éva Perpék, Zsófia Bauer, Zsuzsanna Elek
Abstract:
The aim of this research (funded by Hungarian national grant NFKI-FK 138315) is to examine the professional mobility, mental health and work environment of social workers with a complex approach. Previous international and Hungarian research has pointed out that those working in the helping professions are strongly exposed to the risk of emotional, mental and physical exhaustion due to stress. Mental and physical strain, as well as a lack of coping, can cause health problems, but their role in career change and high labor turnover has also been proven. Even though satisfaction with the working conditions of those employed in the human service sector has been researched extensively in the context of stress burden, there is a lack of large-sample international and Hungarian studies exploring the effects of profession-specific conditions. Nor has it been examined how the specific features of the social profession and mental health affect the career mobility of the professionals concerned. In our research, these factors and their correlations are analyzed by means of a mixed methodology, utilizing the benefits of netnographic big data analysis and a sector-specific quantitative survey. The netnographic analysis of open web content generated inside and outside the social profession offers a holistic overview of the influencing factors related to the mental health and work environment of social workers. On the one hand, the topics and topoi emerging in the external discourse concerning the sector are examined; on the other hand, we focus on mentions and streams of comments regarding the profession, burnout, stress, coping, as well as labor turnover and career changes among social professionals. The analysis focuses on new trends and changes in discourse that have emerged during and after the pandemic. In addition to the online conversation analysis, a survey of social professionals with a specific focus has been conducted.
The questionnaire is based on input from the first two research phases. The applied approach underlines that the mobility paths of social professionals can only be understood if, apart from general working conditions, the specific features of social work and the effects of certain aspects of mental health (emotional, mental and physical strain, resilience) are taken into account as well. In this paper, the preliminary results of this innovative methodological mix are presented, with the aim of highlighting new opportunities and dimensions in research on social work. We aim to fill a gap in existing research on both a methodological and an empirical level, and the Hungarian findings can create a feasible and relevant framework for further international investigation and cross-cultural comparative analysis. These results can contribute to the foundation of organizational and policy-level interventions and targeted programs whereby the risk of burnout and the rate of career abandonment can be reduced. Exploring different aspects of resilience and mapping personality strengths can be a starting point for stress-management, motivation-building, and personality-development training for social professionals.
Keywords: burnout, mixed methods, netnography, professional mobility, social work
Procedia PDF Downloads 143
6084 A Technique for Image Segmentation Using K-Means Clustering Classification
Authors: Sadia Basar, Naila Habib, Awais Adnan
Abstract:
This paper presents a technique for image segmentation using k-means clustering classification. Earlier algorithms were often task-specific, missed neighboring information, and required high-speed machines to run the segmentation. Clustering is the process of partitioning a group of data points into a small number of clusters. The proposed method is a content-aware feature extraction method that is able to run on low-end machines; it is a simple algorithm, requires only low-quality streaming, is efficient, and can be used for security purposes. It has the capability to highlight boundaries and objects. First, the user enters the data as the input representation. In the next step, the digital image is partitioned into clusters, and the clusters are divided into regions: pixels with the same features are assembled within the same cluster, and pixels with different features are placed in other clusters. Finally, the clusters are combined with respect to similar features and represented in the form of segments. The clustered image gives a clear representation of the digital image, highlighting its regions and boundaries. At last, the final image is presented in the form of segments, with all colors of the image separated into clusters.
Keywords: clustering, image segmentation, K-means function, local and global minimum, region
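A minimal sketch of the k-means step on pixel colors (plain Lloyd's algorithm over RGB tuples; the paper's content-aware feature extraction is not reproduced here, and the sample pixels are illustrative):

```python
import random

def dist2(a, b):
    # Squared Euclidean distance between two color tuples.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    # Lloyd's algorithm: assign each pixel to its nearest center, then
    # move each center to the mean of its cluster. Each final cluster
    # corresponds to one image segment.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[i] = tuple(sum(d) / len(cl) for d in zip(*cl))
    return centers

def segment(points, centers):
    # Label each pixel with the index of its nearest cluster center.
    return [min(range(len(centers)), key=lambda i: dist2(p, centers[i]))
            for p in points]

# Six pixels, two clearly separated color groups (red-ish and blue-ish):
pixels = [(255, 0, 0), (250, 5, 5), (245, 0, 10),
          (0, 0, 255), (5, 5, 250), (10, 0, 245)]
labels = segment(pixels, kmeans(pixels, 2))
```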
Procedia PDF Downloads 376
6083 Heuristic Classification of Hydrophone Recordings
Authors: Daniel M. Wolff, Patricia Gray, Rafael de la Parra Venegas
Abstract:
An unsupervised machine listening system is constructed and applied to a dataset of 17,195 30-second marine hydrophone recordings. The system is then heuristically supplemented with anecdotal listening, contextual recording information, and supervised learning techniques to reduce the number of false positives. Features for classification are assembled by extracting the following data from each of the audio files: the spectral centroid, root-mean-squared values for each frequency band of a 10-octave filter bank, and mel-frequency cepstral coefficients in 5-second frames. In this way both time- and frequency-domain information are contained in the features passed to a clustering algorithm. Classification is performed using the k-means algorithm followed by a k-nearest neighbors search. Different values of k are experimented with, in addition to different combinations of the available feature sets. Hypothesized class labels are 'primarily anthrophony' and 'primarily biophony', where the best class result conforming to the former label has 104 members after heuristic pruning. This demonstrates how a large audio dataset has been made more tractable with machine learning techniques, forming the foundation of a framework designed to acoustically monitor and gauge biological and anthropogenic activity in a marine environment.
Keywords: anthrophony, hydrophone, k-means, machine learning
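The first listed feature, the spectral centroid, is straightforward to compute: it is the magnitude-weighted mean frequency of a frame. The sketch below uses a naive DFT rather than an FFT, which is fine for illustration but not for production feature extraction:

```python
import math

def dft_magnitudes(frame):
    # Naive DFT magnitude spectrum (first half of the bins only).
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectral_centroid(frame, sample_rate):
    # Magnitude-weighted mean frequency of the frame.
    mags = dft_magnitudes(frame)
    total = sum(mags)
    if total == 0:
        return 0.0
    freqs = [k * sample_rate / len(frame) for k in range(len(mags))]
    return sum(f * m for f, m in zip(freqs, mags)) / total
```

A pure 8 Hz sine sampled at 64 Hz, for example, yields a centroid of 8 Hz; broadband vessel noise pushes the centroid in a way that helps separate anthrophony from biophony.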
Procedia PDF Downloads 170
6082 Using Analytic Hierarchy Process as a Decision-Making Tool in Project Portfolio Management
Authors: Darius Danesh, Michael J. Ryan, Alireza Abbasi
Abstract:
Project Portfolio Management (PPM) is an essential component of an organisation's strategic procedures and requires attention to several factors in order to envisage a range of long-term outcomes that support strategic portfolio decisions. To evaluate overall efficiency at the portfolio level, it is essential to identify the functionality of specific projects as well as to aggregate those findings in a mathematically meaningful manner that indicates the strategic significance of the associated projects at a number of levels of abstraction. PPM success is directly associated with the quality of the decisions made, and poor judgment increases portfolio costs. Hence, various Multi-Criteria Decision Making (MCDM) techniques have been designed and employed to support decision-making functions. This paper reviews a possible option for improving decision-making outcomes in organisational portfolio management processes using the Analytic Hierarchy Process (AHP), from both academic and practical perspectives, and examines the usability, certainty and quality of the technique. The results of the study also provide insight into the technical risk associated with the current decision-making model, to underpin initiative tracking and strategic portfolio management.
Keywords: analytic hierarchy process, decision support systems, multi-criteria decision making, project portfolio management
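A compact sketch of the AHP mechanics referenced above: priority weights derived from a pairwise comparison matrix (row geometric mean approximation of the principal eigenvector) and Saaty's consistency ratio. The example matrix is illustrative, not from the paper:

```python
from math import prod

def ahp_priorities(matrix):
    # Row geometric mean approximation to the principal eigenvector,
    # normalized to sum to 1.
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

def consistency_ratio(matrix):
    # Saaty's CR: lambda_max estimated from A.w, CI = (lambda_max - n)/(n - 1),
    # CR = CI / RI. CR < 0.1 is conventionally considered acceptable.
    n = len(matrix)
    w = ahp_priorities(matrix)
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    return ci / ri

# Perfectly consistent example: project A is twice B and four times C.
m = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]
weights = ahp_priorities(m)  # approximately [4/7, 2/7, 1/7]
```

In a PPM setting, each criterion gets such a matrix, and project scores are rolled up through the hierarchy using the resulting weights.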
Procedia PDF Downloads 321