Search results for: hybrid block methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17093

13583 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidate values of the tuning parameter first, and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. Using real and simulated data sets, we show that the value selected by the suggested methods often leads to more stable parameter selection, as well as improved detection of significant genetic variables, compared to traditional cross-validation.
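The two-stage idea, picking several candidate tuning parameters and then averaging them by performance, can be sketched in a few lines. The ridge penalty, 5-fold splitting and inverse-error weighting below are illustrative assumptions, not the authors' exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 200                       # n << p, as in microarray data
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:5] = 2.0   # only 5 truly active genes
y = X @ beta + rng.normal(size=n)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution of the penalized normal equations.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(X, y, lam, folds=5):
    # Plain k-fold cross-validated mean squared error.
    idx = np.arange(len(y))
    errs = []
    for f in range(folds):
        te = idx[f::folds]
        tr = np.setdiff1d(idx, te)
        b = ridge_fit(X[tr], y[tr], lam)
        errs.append(np.mean((y[te] - X[te] @ b) ** 2))
    return float(np.mean(errs))

lambdas = np.logspace(-1, 3, 20)
scores = np.array([cv_error(X, y, lam) for lam in lambdas])

# Traditional CV: pick the single minimiser (unstable for small n).
lam_single = lambdas[np.argmin(scores)]

# Averaging step: keep the k best candidates and weight them by
# inverse CV error (an illustrative weighting, not the paper's exact one).
k = 5
top = np.argsort(scores)[:k]
w = 1.0 / scores[top]
w /= w.sum()
lam_avg = float(np.sum(w * lambdas[top]))
```

Because `lam_avg` pools several near-optimal candidates, a small perturbation of the data typically moves it less than it moves the single argmin.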

Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search

Procedia PDF Downloads 405
13582 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. 
This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
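The genetic-algorithm half of the pipeline can be illustrated on a toy objective. The target thicknesses and the quadratic "selectivity" score below are made-up stand-ins for the electromagnetic solver or random-forest surrogate, so only the GA loop itself (selection, crossover, mutation) reflects the method described:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for emitter selectivity as a function of the four layer
# thicknesses (SiC/W/SiO2/W), in nm.  The real objective would come from
# an EM solver or a trained random-forest surrogate; this target is invented.
target = np.array([310.0, 40.0, 120.0, 60.0])
def selectivity(t):
    return -np.sum((t - target) ** 2)              # higher is better

POP, GENS, BOUNDS = 40, 60, (10.0, 500.0)
pop = rng.uniform(*BOUNDS, size=(POP, 4))

for _ in range(GENS):
    fit = np.array([selectivity(t) for t in pop])
    # Tournament selection: the better of two random parents survives.
    i, j = rng.integers(0, POP, (2, POP))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    # Uniform crossover between consecutive parents.
    mask = rng.random((POP, 4)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation, clipped to the thickness bounds.
    children += rng.normal(0, 5.0, children.shape)
    pop = np.clip(children, *BOUNDS)

best = pop[np.argmax([selectivity(t) for t in pop])]
```

In the actual workflow, `selectivity` would be replaced by a random-forest model trained on simulated emitter spectra, making each GA evaluation cheap.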

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 48
13581 Seed Yield and Quality of Late Planted Rabi Wheat Crop as Influenced by Basal and Foliar Application of Urea

Authors: Omvati Verma, Shyamashrre Roy

Abstract:

A field experiment was conducted with three basal nitrogen levels (90, 120 and 150 kg N/ha) and five foliar urea treatments (absolute control, water spray, and 3% urea spray at anthesis, and at 7 and 14 days after anthesis) at G.B. Pant University of Agriculture & Technology, Pantnagar, U.S. Nagar (Uttarakhand) during the rabi season, in a factorial randomized block design with three replications. Results revealed that nitrogen application at 150 kg/ha produced the highest seed, straw and biological yields; it was significantly superior to 90 kg N/ha and at par with 120 kg N/ha. The number of tillers increased significantly with increasing nitrogen dose up to 150 kg N/ha. Spike length, number of grains per spike, grain weight per spike and thousand seed weight showed significantly higher values with 120 kg N/ha than with 90 kg N/ha and were at par with those at 150 kg N/ha. Plant height showed a similar trend. Leaf area index and chlorophyll content increased significantly with nitrogen level at different stages. Among the foliar spray treatments, urea spray at anthesis gave the highest values for yield and yield attributes. For spike length and thousand seed weight it was at par with urea spray at 7 and 14 days after anthesis, but for the remaining yield attributes it was significantly higher than the other treatments. Among seed quality parameters, protein and sedimentation value increased significantly with nitrogen rate, whereas starch and hectolitre weight showed a decreasing trend. Wet gluten content was not influenced by nitrogen level. Foliar urea spray at anthesis resulted in the highest protein content and hectolitre weight, whereas urea spray at 7 days after anthesis gave the highest sedimentation value and wet gluten content.

Keywords: foliar application, nitrogenous fertilizer, seed quality, yield

Procedia PDF Downloads 267
13580 MapReduce Logistic Regression Algorithms with RHadoop

Authors: Byung Ho Jung, Dong Hoon Lim

Abstract:

Logistic regression is a statistical method for analyzing a dataset in which one or more independent variables determine an outcome. It is used extensively in numerous disciplines, including the medical and social sciences. In this paper, we address the problem of estimating the parameters of a logistic regression within the MapReduce framework using RHadoop, which integrates R with the Hadoop environment and is applicable to large-scale data. We consider three learning algorithms for logistic regression, namely the gradient descent method, the cost minimization method and the Newton-Raphson method. The Newton-Raphson method does not require a learning rate, while the gradient descent and cost minimization methods need a manually chosen learning rate. The experimental results demonstrated that our learning algorithms using RHadoop scale well and efficiently process large data sets on commodity hardware. We also compared the performance of the Newton-Raphson method with the gradient descent and cost minimization methods; the Newton-Raphson method proved the most robust across all data tested.
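The learning-rate-free Newton-Raphson update can be sketched in plain NumPy. Each step's gradient and Hessian are sums over records, which is what makes the method amenable to MapReduce; this single-machine sketch on synthetic data omits the RHadoop distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Design matrix: intercept column plus two standard-normal features.
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, 2))])
true_b = np.array([-0.5, 2.0, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_b))).astype(float)

def newton_logistic(X, y, iters=25):
    """Newton-Raphson (IRLS) for logistic regression -- no learning rate."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))          # predicted probabilities
        W = p * (1 - p)                       # IRLS weights
        grad = X.T @ (y - p)                  # score: a sum over records
        hess = X.T @ (X * W[:, None])         # observed information: also a sum
        b = b + np.linalg.solve(hess, grad)   # full Newton step
    return b

b_hat = newton_logistic(X, y)
```

In a MapReduce setting, mappers would emit per-block contributions to `grad` and `hess`, and a reducer would sum them before the solve, one job per Newton iteration.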

Keywords: big data, logistic regression, MapReduce, RHadoop

Procedia PDF Downloads 260
13579 Risk Management in Islamic Banks: A Case Study of the Faisal Islamic Bank of Egypt

Authors: Mohamed Saad Ahmed Hussien

Abstract:

This paper discusses risk management in Islamic banks. It aims to determine how the practices and methods of risk management in those banks differ from those of conventional banks, and presents a case study of the biggest Islamic bank in Egypt (Faisal Islamic Bank of Egypt) to identify the most important financial risks faced and how they are managed. It was found that Islamic banks face two types of risk. The first type is similar to the risks in conventional banks; the second type comprises the additional risks faced only by Islamic banks as a result of some Islamic modes of financing. With regard to risk management, Islamic banks, like conventional banks, apply the regulatory rules issued by the central banks and the Basel Committee; they also apply the instructions and procedures issued by the Islamic Financial Services Board (IFSB). Islamic banks likewise resemble conventional banks in the practices and methods they use to manage risks. Some factors may affect risk management in Islamic banks, such as the size of the bank and the efficiency of its administration and staff.

Keywords: conventional banks, Faisal Islamic Bank of Egypt, Islamic banks, risk management

Procedia PDF Downloads 446
13578 Scoping Review of Biological Age Measurement Composed of Biomarkers

Authors: Diego Alejandro Espíndola-Fernández, Ana María Posada-Cano, Dagnóvar Aristizábal-Ocampo, Jaime Alberto Gallo-Villegas

Abstract:

Background: With the increase in life expectancy, aging has become a subject of frequent research, and multiple strategies have been proposed to quantify the passage of the years based on the known physiology of human senescence. For several decades, attempts have been made to characterize these changes through the concept of biological age, which aims to integrate, into a measure of time, the structural or functional variation captured by biomarkers, in comparison with simple chronological age. The objective of this scoping review is to deepen the updated concept of biological age measurement composed of biomarkers in the general population and to summarize recent evidence in order to identify gaps and priorities for future research. Methods: A scoping review was conducted according to the five-phase methodology developed by Arksey and O'Malley, through a search of five bibliographic databases up to February 2021. Original articles were included, with no time or language limit, that described a biological age composed of at least two biomarkers in people over 18 years of age. Results: 674 articles were identified, of which 105 were assessed for eligibility and 65 were included with information on the measurement of biological age composed of biomarkers. Articles dating from 1974 and of 15 nationalities were found, most of them observational studies, in which clinical or paraclinical biomarkers were used; 11 different methods for calculating the composite biological age were described. The outcomes reported were the relationship with the measured biomarkers themselves, specified risk factors, comorbidities, physical or cognitive functionality, and mortality. Conclusions: The concept of biological age composed of biomarkers has evolved since the 1970s, and multiple methods for its quantification have been described through the combination of different clinical and paraclinical variables from observational studies. Future research should consider population characteristics and the choice of biomarkers against the proposed outcomes, to improve the understanding of aging variables and to direct effective strategies for a proper approach.

Keywords: biological age, biological aging, aging, senescence, biomarker

Procedia PDF Downloads 174
13577 Regulation, Evaluation and Incentives: An Analysis of Management Characteristics of Nonprofit Organizations in China

Authors: Wuqi Yang, Sufeng Li, Linda Zhai, Zhizhong Yuan, Shengli Wang

Abstract:

How to assess and evaluate a not-for-profit (NFP) organisation's performance should concern all stakeholders because, amongst other things, without a correct evaluation of its performance an NFP might not be able to continue to meet its service objectives. Given the growing importance of this sector in China, more and more existing and potential donors, governments and others are taking an increased interest in the financial condition and performance of NFPs. However, when these various groups look for methods to assess the performance of NFPs, they find that relatively little research has been conducted into such methods in China. The focus of this paper is to investigate how the Chinese government manages and evaluates not-for-profit organisations' performance in China. By examining and evaluating NFPs in China from different aspects, such as business development, mission fulfilment, financial position and other status, this paper identifies some institutional constraints currently facing NFPs in China. At the end of the paper, a new regulatory framework is proposed for regulators' consideration. The research methods combine a literature review; the use of the Balanced Scorecard to assess NFPs in China; and a case study analysing a charity foundation's performance in Hebei Province, with proposed solutions to the current issues and challenges facing NFPs. These solutions include formulating laws and regulations on NFPs, simplifying management procedures, introducing tax incentives, and providing financial support and other incentives to support the development of non-profit organizations in China. This study provides a first step towards a greater understanding of NFP performance evaluation in China. It is expected that the findings and solutions from this study will be useful to anyone involved with the Chinese NFP sector, particularly CEOs, managers, bankers, independent auditors and government agencies.

Keywords: Chinese non-profit organizations, evaluation, management, supervision

Procedia PDF Downloads 166
13576 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a big challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, a large number of features results in high computational complexity, while too few features compromise performance. In this paper, a novel idea for the selection of an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the autoencoder deep neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem, and with normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated against two other methods recently reported in the literature, and the comparison reveals that the proposed method is far better in terms of classification accuracy.
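A minimal sketch of the idea, assuming a single hidden encoding layer and a (1+1) evolution strategy as the meta-heuristic (the paper uses 4 hidden layers and its own search algorithm), with random data standing in for extracted EEG features:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 32))               # 200 samples, 32 extracted features
X = (X - X.mean(0)) / X.std(0)               # normalised feature matrix

D_IN, D_CODE = 32, 8                          # compress 32 features to 8 codes

def reconstruct(params, X):
    W1, W2 = params
    H = np.tanh(X @ W1)                       # encoder: features -> codes
    return H @ W2                             # linear decoder: codes -> features

def mse(params, X):
    return float(np.mean((X - reconstruct(params, X)) ** 2))

# (1+1) evolution strategy: a minimal gradient-free meta-heuristic --
# perturb the weights and keep the perturbation only if the MSE drops.
best = [rng.normal(0, 0.1, (D_IN, D_CODE)), rng.normal(0, 0.1, (D_CODE, D_IN))]
best_err = mse(best, X)
init_err = best_err
for _ in range(2000):
    cand = [W + rng.normal(0, 0.02, W.shape) for W in best]
    err = mse(cand, X)
    if err < best_err:
        best, best_err = cand, err

codes = np.tanh(X @ best[0])                  # the reduced feature set
```

Because the search only ever accepts improvements, the reconstruction error is non-increasing, and no gradients are ever backpropagated, which is the point of replacing gradient descent with a heuristic here.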

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 103
13575 Carbon-Based Electrochemical Detection of Pharmaceuticals from Water

Authors: M. Ardelean, F. Manea, A. Pop, J. Schoonman

Abstract:

The presence of pharmaceuticals in the environment, and especially in water, has gained increasing attention. They belong to an emerging class of pollutants, and for most of them legal limits have not been set, because their impact on human health and the ecosystem has not been determined and/or advanced analytical methods for their quantification are lacking. In this context, the development of advanced analytical methods for the quantification of pharmaceuticals in water is required. Electrochemical methods are known to hold great potential for high-performance analysis, but their performance depends directly on the electrode material and the operating technique. In this study, two types of carbon-based electrode materials, i.e., boron-doped diamond (BDD) and carbon nanofiber (CNF)-epoxy composite electrodes, were investigated through voltammetric techniques for the detection of naproxen in water. The comparative electrochemical behavior of naproxen (NPX) on both BDD and CNF electrodes was studied by cyclic voltammetry, and a well-defined peak corresponding to NPX oxidation was found for each electrode. NPX oxidation occurred on the BDD electrode at a potential of about +1.4 V/SCE (saturated calomel electrode) and at about +1.2 V/SCE on the CNF electrode. The sensitivities for NPX detection were similar for both carbon-based electrodes; thus, the CNF electrode exhibited superiority with respect to the detection potential. Differential-pulse voltammetry (DPV) and square-wave voltammetry (SWV) techniques were exploited to improve the electroanalytical performance for NPX detection, and the best result, a sensitivity of 9.959 µA·µM⁻¹, was achieved using DPV. In addition, the simultaneous detection of NPX and fluoxetine, a very common antidepressant also present in water, was studied using the CNF electrode, with very good results. The detection potentials allowed a good separation of the detection signals and, together with the good sensitivities, were appropriate for the simultaneous detection of both tested pharmaceuticals. These results recommend the CNF electrode as a valuable tool for the individual/simultaneous detection of pharmaceuticals in water.
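The reported sensitivity is the slope of a current-versus-concentration calibration line. A sketch with invented calibration points (not the study's data), including the common 3-sigma detection-limit estimate:

```python
import numpy as np

# Hypothetical calibration data: NPX concentration (uM) vs DPV peak
# current (uA).  Sensitivity is the slope of the least-squares line,
# in uA per uM (cf. the 9.959 uA/uM figure reported in the abstract).
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
peak = np.array([10.1, 20.0, 39.7, 60.2, 79.6, 99.8])

slope, intercept = np.polyfit(conc, peak, 1)     # sensitivity = slope
residuals = peak - (slope * conc + intercept)
sd = residuals.std(ddof=2)                       # residual standard deviation
lod = 3 * sd / slope                             # 3-sigma limit of detection
```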

Keywords: boron-doped diamond electrode, carbon nanofiber-epoxy composite electrode, emerging pollutants, pharmaceuticals

Procedia PDF Downloads 269
13574 Evaluation of Shock Sensitivity of Nano-Scaled 1,3,5-Trinitro-1,3,5-Triazacyclohexane Using Small Scale Gap Test

Authors: Kang-In Lee, Woo-Jin Lee, Keun-Deuk Lee, Ju-Seung Chae

Abstract:

In this study, a small scale gap test (SSGT) was performed to measure the shock sensitivity of nano-scaled 1,3,5-trinitro-1,3,5-triazacyclohexane (RDX) samples. The shock sensitivity of energetic materials is usually evaluated by the large-scale gap test (LSGT), which is more reliable than other methods but has the disadvantage of high cost and long test time, since it uses a large amount of explosive. In this experiment, nano-scaled RDX samples were prepared by spray crystallization with two different drying methods. In addition, a 30 µm RDX sample produced by precipitation crystallization and a 5 µm RDX sample produced by a fluid energy mill process were tested for comparison of shock sensitivity. The shock sensitivities measured by the small scale gap test show that smaller RDX particles are more insensitive. Comparison with the literature suggests that the SSGT method is a reliable measurement of the shock sensitivity of energetic materials.

Keywords: nano-scaled RDX, SSGT (small scale gap test), shock sensitivity, RDX

Procedia PDF Downloads 240
13573 A Study of Using Different Printed Circuit Board Design Methods on Ethernet Signals

Authors: Bahattin Kanal, Nursel Akçam

Abstract:

Data transmission sizes and frequencies are increasing rapidly in electronic communication protocols, and increasing data transmission speeds have made the design of printed circuit boards much more important. It is important to carefully examine the requirements and perform analyses before and after the design of a digital electronic circuit board. This paper delves into impedance matching techniques, signal trace routing considerations, and the impact of layer stack-up on signal performance. It extensively explores techniques for minimizing crosstalk and interference, presenting a holistic perspective on design strategies for optimizing the quality of high-speed signals. Through a comprehensive review of these design methodologies, this study aims to provide insights into achieving reliable, high-performance printed circuit board layouts for such signals. In this study, the effect of different design methods on Ethernet signals was examined in terms of S-parameters. Siemens' HyperLynx software tool was used for the analyses.
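One quantity such an S-parameter comparison produces is insertion loss, derived from the transmission coefficient S21. A sketch with placeholder S21 values (invented for illustration, not HyperLynx output):

```python
import numpy as np

# Complex S21 (transmission coefficient) at a few frequencies for a
# hypothetical PCB trace; |S21| shrinking with frequency models the
# increasing loss that routing and stack-up choices try to control.
freq_ghz = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
s21 = np.array([0.98, 0.95, 0.90, 0.80, 0.55]) * np.exp(1j * np.pi / 6)

# Insertion loss in dB: -20*log10(|S21|); larger value = more loss.
insertion_loss_db = -20 * np.log10(np.abs(s21))
worst_freq = freq_ghz[np.argmax(insertion_loss_db)]
```

Comparing two routing strategies then amounts to comparing these curves over the Ethernet signal's bandwidth.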

Keywords: HyperLynx, printed circuit board, S-parameters, Ethernet

Procedia PDF Downloads 15
13572 Tourism Area Development Optimization Based on Solar-Generated Renewable Energy Technology at Karimunjawa, Central Java Province, Indonesia

Authors: Yanuar Tri Wahyu Saputra, Ramadhani Pamapta Putra

Abstract:

Karimunjawa is one of the Indonesian islands lacking an electricity supply. Despite this, Karimunjawa is an important tourism destination in Indonesia's Central Java Province. A solar power plant is a potential technology to apply in Karimunjawa in order to meet the island's electricity needs and to improve daily life and the quality of tourism for visitors and the local population. This optimization modeling of Karimunjawa uses the HOMER software program. The data used include wind speed data for Karimunjawa from BMKG (the Indonesian Agency for Meteorology, Climatology and Geophysics), annual weather data for Karimunjawa from NASA, and electricity requirements estimated from the number of houses and business premises in Karimunjawa. The modeling compares three system categories to find which offers the highest financial benefit, i.e., the lowest total net present cost (NPC). The first category uses only PV, with 8,000 kW of electrical power and an NPC of $6,830,701. The second category uses a hybrid system combining 1,000 kW of PV with a 100 kW generator, for a total NPC of $6,865,590. The last category uses only a 750 kW generator, for a total NPC of $16,368,197, the highest of the three categories. Based on this analysis, we conclude that the optimal way to meet the electricity needs of Karimunjawa is to use 8,000 kW of PV, which also has the lower maintenance cost.
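The NPC ranking can be reproduced in spirit with a simple discounted-cost calculation: net present cost is the capital outlay plus the discounted stream of annual operating costs. The capital and operating figures below are illustrative, not the HOMER inputs from the study:

```python
# Net present cost of a generation option: capital cost plus the sum of
# annual operating costs discounted back to the present.
def npc(capital, annual_cost, rate=0.06, years=25):
    discount = sum(1 / (1 + rate) ** t for t in range(1, years + 1))
    return capital + annual_cost * discount

# Illustrative figures: PV is capital-heavy but cheap to run, while a
# diesel generator is cheap to buy but incurs heavy fuel costs.
pv_only = npc(capital=5_000_000, annual_cost=150_000)
gen_only = npc(capital=500_000, annual_cost=1_200_000)
```

This is why the generator-only option ends up with the highest NPC in the study despite its low upfront cost: the recurring fuel expense dominates once discounted over the project lifetime.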

Keywords: Karimunjawa, renewable energy, solar power plant, HOMER

Procedia PDF Downloads 456
13571 An Exploration of Architecture Design Methods in the Urban Fringe Belt Based on Typo-Morphological Research: A Case Study of the Expansion Project of the Second Middle School in Xuancheng, China

Authors: Dong Yinan, Zhou Zijie

Abstract:

The urban fringe belt is an important topic in urban morphology research. Unlike the relatively fixed central district of a city, the position of the fringe belt changes. In the process of urban expansion, the original fringe belt is likely to be absorbed by the newly built city, and may even become a new public center. During this change, we face a dialectic between restoring the organic quality of the old urban form and creating a new urban image. There is abundant relevant research at the urban scale, but at the building scale few design methods have been proposed, so some new individual buildings fail to match the overall planning intent. The expansion project of the second middle school in Xuancheng faces this situation. The existing campus is located in the southern fringe belt of Xuancheng, Anhui province, China, adjacent to farmland and ponds. Under the Xuancheng urban plan, the farmland and ponds will be transformed into a large lake, around which a new public center will be built; the expansion of the school becomes an important part of the boundary of this new public center. The expansion project therefore faces challenges at both the urban and the building scale. At the urban scale, we analyze and summarize the characteristics of the fringe belt through a reading of the existing and future urban organism, in order to determine the form of the expansion. Meanwhile, at the building scale, we study different types of school buildings and select an appropriate type that satisfies both the urban form and the school's functions. This research investigates design methods based on a project under construction in Xuancheng, a historic city in southeast China. It also aims to bridge the gap between urban design and individual building design through typo-morphological research.

Keywords: design methods, urban fringe belt, typo-morphological research, middle school

Procedia PDF Downloads 487
13570 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in the townscape, in order to achieve more favourable outcomes for users. Here, visual consistency refers to the degree of similarity between adjacent components of the townscape. The paper investigates parameters that relate to visual consistency, explores the relationships between them and highlights their significance. It uses arithmetic methods from outside the domain of urban design to establish an objective approach to assessment that also considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of the townscape and focuses on its two components, geometry and colour, in evaluating visual consistency. Accordingly, four measurements are proposed. The first quantifies the number of vertices: points in three-dimensional space that are connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the locations of their vertices. The last two measurements calculate the visual similarity in both vertices and colour in the townscape by computing their variation, using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool which can alter the current understanding of architectural form and its application in urban space. This tool is currently under development. The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which adds to creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
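Two of the proposed measurements can be sketched directly. The vertex counts and RGB facade colours below are invented, and plain Euclidean distance in RGB space stands in for whatever colour-difference formula the tool adopts (a perceptual space such as CIELAB would be more faithful):

```python
import numpy as np

# Geometric consistency: variation in vertex counts across adjacent
# elements, measured by the standard deviation.
vertex_counts = np.array([120, 130, 125, 400, 118])      # one outlier element
vertex_spread = vertex_counts.std()

# Colour consistency: mean pairwise colour distance between elements;
# a higher value indicates less consistent colouring.
facade_rgb = np.array([[200, 180, 150],
                       [205, 178, 148],
                       [60, 60, 200],                    # visually inconsistent
                       [198, 182, 152]], dtype=float)
diffs = facade_rgb[:, None, :] - facade_rgb[None, :, :]
pair_dist = np.sqrt((diffs ** 2).sum(-1))                # Euclidean RGB distance
mean_colour_dist = pair_dist[np.triu_indices(len(facade_rgb), 1)].mean()
```

A townscape assessment would then weight such scores by users' stated preferences, per the paper's proposal.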

Keywords: townscape, urban design, visual assessment, visual consistency

Procedia PDF Downloads 303
13569 Vertically Coupled III-V/Silicon Single Mode Laser with a Hybrid Grating Structure

Authors: Zekun Lin, Xun Li

Abstract:

Silicon photonics has gained much interest and extensive research as a promising platform for fabricating compact, high-speed and low-cost photonic devices compatible with the complementary metal-oxide-semiconductor (CMOS) process. Despite the remarkable progress made in silicon photonics, high-performance, cost-effective and reliable silicon laser sources are still missing. In this work, we present a 1550 nm III-V/silicon laser design with stable single-mode lasing and robust, high-efficiency vertical coupling. The InP cavity consists of two uniform Bragg grating sections at the sides for mode selection and feedback, as well as a central second-order grating for surface emission. A grating coupler is etched on the SOI waveguide, by which light is coupled between the parallel III-V and SOI layers vertically rather than by evanescent-wave coupling. The laser characteristics are simulated and optimized by a traveling-wave model (TWM) and a Green's function analysis, together with a 2D finite-difference time-domain (FDTD) method for the coupling process. The simulation results show that single-mode lasing with an SMSR better than 48 dB is achievable, with a threshold current of less than 15 mA and a slope efficiency of around 0.13 W/A. The coupling efficiency is larger than 42% and is highly tolerant, with less than 10% reduction for a 10 µm horizontal or 15 µm vertical dislocation. The design can be realized by standard flip-chip bonding techniques without co-fabrication of III-V and silicon or precise alignment.

Keywords: III-V/silicon integration, silicon photonics, single mode laser, vertical coupling

Procedia PDF Downloads 138
13568 Linearly Polarized Single Photon Emission from Nonpolar, Semipolar and Polar Quantum Dots in GaN/InGaN Nanowires

Authors: Snezana Lazic, Zarko Gacevic, Mark Holmes, Ekaterina Chernysheva, Marcus Müller, Peter Veit, Frank Bertram, Juergen Christen, Yasuhiko Arakawa, Enrique Calleja

Abstract:

The study reports how the pencil-like morphology of a homoepitaxially grown GaN nanowire can be exploited for the fabrication of a thin conformal InGaN nanoshell, hosting nonpolar, semipolar and polar single photon sources (SPSs). All three SPS types exhibit narrow emission lines (FWHM ~0.35 - 2 meV) and high degrees of linear optical polarization (P > 70%) in the low-temperature micro-photoluminescence (µ-PL) experiments and are characterized by a pronounced antibunching in the photon correlation measurements (corrected g(2)(0) < 0.3). The quantum-dot-like exciton localization centers induced by compositional fluctuations within the InGaN nanoshell are identified as the driving mechanism for the single photon emission. As confirmed by the low-temperature transmission electron microscopy combined with cathodoluminescence (TEM-CL) study, the crystal region (i.e. non-polar m-, semi-polar r- and polar c-facets) hosting the single photon emitters strongly affects their emission wavelength, which ranges from ultraviolet for the non-polar to visible for the polar SPSs. The photon emission lifetime is also found to be facet-dependent and varies from sub-nanosecond time scales for the non- and semi-polar SPSs to a few nanoseconds for the polar ones. These differences are mainly attributed to facet-dependent indium content and electric field distribution across the hosting InGaN nanoshell. The hereby reported pencil-like InGaN nanoshell is the first single nanostructure able to host all three types of single photon emitters and is thus a promising building block for tunable quantum light devices integrated into future photonic and optoelectronic circuits.

Keywords: GaN nanowire, InGaN nanoshell, linear polarization, nonpolar, semipolar, polar quantum dots, single-photon sources

Procedia PDF Downloads 379
13567 Review of Microstructure, Mechanical and Corrosion Behavior of Aluminum Matrix Composite Reinforced with Agro/Industrial Waste Fabricated by Stir Casting Process

Authors: Mehari Kahsay, Krishna Murthy Kyathegowda, Temesgen Berhanu

Abstract:

Aluminum matrix composites have attracted research and industrial attention for the last few decades, especially for applications that do not involve extreme loading or thermal conditions. Their relatively low cost, simple processing and attractive properties explain the widespread use of aluminum matrix composites in the manufacturing of automobile, aircraft, military, and sports goods. In this article, the microstructure, mechanical, and corrosion behaviors of aluminum matrix composites are reviewed, focusing on the stir casting fabrication process and the use of agro/industrial waste reinforcement particles. The reviewed results show that mechanical properties such as tensile strength, ultimate tensile strength, hardness, percentage elongation, impact strength, and fracture toughness depend strongly on the amount, kind, and size of the reinforcing particles. Additionally, uniform distribution and wettability of the reinforcement particles, as well as the porosity level of the resulting composite, also affect the mechanical and corrosion behaviors of aluminum matrix composites. A two-step stir-casting process, with proper handling of process parameters, resulted in better wetting characteristics, a lower porosity level, and a uniform distribution of particles. On the other hand, the inconsistent and contradictory results on the corrosion behavior of monolithic and hybrid aluminum matrix composites call for further study.

Keywords: microstructure, mechanical behavior, corrosion, aluminum matrix composite

Procedia PDF Downloads 56
13566 A Recommender System for Job Seekers to Show up Companies Based on Their Psychometric Preferences and Company Sentiment Scores

Authors: A. Ashraff

Abstract:

The increasing importance of the web as a medium for electronic and business transactions has served as a driving force for the introduction and implementation of recommender systems. Recommender systems play a major role in processing and analyzing thousands of data rows or reviews and help humans make purchase decisions about a product or service. They can also predict whether a particular user would rate a product or service, based on the user’s behavioral profile. At present, recommender systems are used extensively in almost every domain and are said to be ubiquitous; in the field of recruitment, however, they are not yet widely utilized. Recent statistics show an increase in staff turnover, which has negatively impacted both organizations and employees, with the main reasons being company culture, working flexibility (work-from-home opportunities), lack of learning advancement, and pay scale. Further investigation revealed a lack of guidance or support to help a job seeker find the company that suits them best; although information about companies is available, job seekers cannot read all the reviews themselves and reach an analytical decision. In this paper, we propose an approach that scores the available review data on IT companies based on user review sentiments, gathers information on job seekers, including their psychometric evaluations, and then presents the job seeker with an output indicating which company is most suitable for them. The theoretical approach, the algorithmic approach, and the importance of such a system are discussed in this paper.
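As a rough, purely illustrative sketch of the pipeline the abstract describes (the lexicon, reviews, psychometric-fit scores, and weights below are all invented; the paper defers the concrete scoring model to its algorithmic section), companies could be ranked by combining average review sentiment with a psychometric-fit score:

```python
# Hypothetical sentiment lexicon; a real system would use a trained model.
POSITIVE = {"great", "flexible", "supportive", "learning"}
NEGATIVE = {"toxic", "overtime", "rigid"}

def sentiment(review: str) -> float:
    """Score one review in [-1, 1] by counting lexicon words per token."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

# Fabricated example data: company reviews and psychometric match scores.
reviews = {
    "AcmeSoft": ["great learning culture", "flexible hours"],
    "GridCorp": ["toxic overtime culture", "rigid management"],
}
fit = {"AcmeSoft": 0.7, "GridCorp": 0.9}  # job seeker's psychometric fit

def rank(companies, w_sent=0.6, w_fit=0.4):
    """Rank companies by a weighted blend of review sentiment and fit."""
    def combined(c):
        avg = sum(sentiment(r) for r in reviews[c]) / len(reviews[c])
        return w_sent * avg + w_fit * fit[c]
    return sorted(companies, key=combined, reverse=True)

best = rank(list(reviews))[0]
```

Here the strongly negative reviews outweigh GridCorp's better psychometric fit, so AcmeSoft ranks first; the weighting between the two signals is a design choice the paper leaves open.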

Keywords: psychometric tests, recommender systems, sentiment analysis, hybrid recommender systems

Procedia PDF Downloads 96
13565 A Machine Learning Approach for Performance Prediction Based on User Behavioral Factors in E-Learning Environments

Authors: Naduni Ranasinghe

Abstract:

E-learning environments have become more popular than ever due to the impact of COVID-19. Even though e-learning is one of the best solutions for the teaching-learning process, it is not without major challenges. Nowadays, machine learning approaches are utilized to analyze how behavioral factors lead to better adoption and how they relate to better student performance in e-learning environments. During the pandemic, the academic process in e-learning revealed a major issue, especially regarding the performance of students. Therefore, an approach that investigates student behaviors in e-learning environments using a data-intensive machine learning approach is warranted. A hybrid approach was used to understand how the aforementioned variables relate to one another, and a more quantitative approach, informed by the literature, was used to understand the weight of each factor for adoption and performance. The data set was collected from previous research to support the training and testing process. Special attention was paid to incorporating different dimensionalities of the data to understand the dependency levels of each variable. Five of twelve independent variables were chosen based on their impact on the dependent variable and on the descriptive statistics; of the three models developed (random forest classifier, SVM, and decision tree classifier), the random forest classifier gave the highest accuracy (0.8542). Overall, this work met its goal of improving student performance by identifying students at risk of dropout, emphasizing the necessity of using both static and dynamic data.
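As a rough illustration of the three-model comparison the abstract reports (not the study's actual data, features, or hyperparameters; the dataset here is synthetic), the evaluation could be sketched with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the behavioral data: five predictors, as in the
# abstract's final feature set, and a binary performance label.
X, y = make_classification(n_samples=500, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(random_state=0),
    "decision_tree": DecisionTreeClassifier(random_state=0),
}
# Fit each model and score its held-out accuracy.
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
best = max(scores, key=scores.get)
```

On the study's real data this comparison favored the random forest (accuracy 0.8542); on other datasets any of the three could win, which is why all are evaluated.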

Keywords: academic performance prediction, e learning, learning analytics, machine learning, predictive model

Procedia PDF Downloads 139
13564 Study on Seismic Performance of Reinforced Soil Walls in Order to Offer Modified Pseudo Static Method

Authors: Majid Yazdandoust

Abstract:

This study suggests a displacement-based design method, using finite difference numerical modeling, for soil retaining walls reinforced with steel strips. Dynamic loading characteristics such as duration, frequency and peak ground acceleration, the geometrical characteristics of the reinforced soil structure, and the site type are considered in order to correct the pseudo static method and, finally, to introduce the pseudo static coefficient as a function of seismic performance level and peak ground acceleration. For this purpose, the influence of dynamic loading characteristics, reinforcement length, height of the reinforced system and site type on the seismic behavior of steel-strip-reinforced soil retaining walls is investigated. Numerical results illustrate that the seismic response of this type of wall is highly dependent on cumulative absolute velocity, maximum acceleration, height and reinforcement length, such that the reinforcement length can be introduced as the main factor in the shape of failure. Considering the loading parameters, the mechanically stabilized earth wall parameters and the site type, the method used in this study leads to more efficient designs than the methods generally suggested in codes, which are usually based on the limit-equilibrium concept. The outputs show the over-estimation of equilibrium design methods in comparison with the displacement-based method proposed here.

Keywords: pseudo static coefficient, seismic performance design, numerical modeling, steel strip reinforcement, retaining walls, cumulative absolute velocity, failure shape

Procedia PDF Downloads 477
13563 The Effects of Ellagic Acid on Rat Heart Induced Tobacco Smoke

Authors: Nalan Kaya, D. Ozlem Dabak, Gonca Ozan, Elif Erdem, Enver Ozan

Abstract:

One of the common causes of cardiovascular disease (CVD) is smoking. Tobacco smoke decreases the amount of oxygen that the blood can carry and increases the tendency for blood clots. Ellagic acid is a powerful antioxidant found especially in red fruits and has been shown to block the atherosclerotic process by suppressing oxidative stress and inflammation. The aim of this study was to examine the protective effects of ellagic acid against oxidative damage in the heart tissues of rats exposed to tobacco smoke. Twenty-four adult male (8 weeks old) Sprague-Dawley rats were divided randomly into 4 equal groups: group I (control), group II (tobacco smoke), group III (tobacco smoke + corn oil) and group IV (tobacco smoke + ellagic acid). The rats in groups II, III and IV were exposed to tobacco smoke for 1 hour twice a day for 12 weeks. In addition to tobacco smoke exposure, 12 mg/kg ellagic acid (dissolved in corn oil) was administered to the rats in group IV by oral gavage; an equal amount of corn oil was administered by oral gavage to the rats in group III. At the end of the experimental period, the rats were decapitated, heart tissues and blood samples were taken, and histological and biochemical analyses were performed. Vascular congestion, hyperemic areas, inflammatory cell infiltration and increased connective tissue in the perivascular area were observed in the tobacco smoke and tobacco smoke + corn oil groups. Increased connective tissue in the perivascular area, hemorrhage and inflammatory cell infiltration were reduced in the tobacco smoke + EA group. In group II, the GSH level was not significantly changed, while CAT, SOD and GPx activities were significantly higher than in group I. Compared to group II, in group IV the GSH level and the SOD, CAT and GPx activities were increased, and the MDA level was significantly decreased. Group II and group III levels were similar. The results indicate that ellagic acid could protect heart tissue from the harmful effects of tobacco smoke.

Keywords: ellagic acid, heart, rat, tobacco smoke

Procedia PDF Downloads 212
13562 Use of Regression Analysis in Determining the Length of Plastic Hinge in Reinforced Concrete Columns

Authors: Mehmet Alpaslan Köroğlu, Musa Hakan Arslan, Muslu Kazım Körez

Abstract:

The basic objective of this study is to create a regression analysis method that can estimate the length of the plastic hinge, an important design parameter, by making use of the outcomes (lateral load–lateral displacement hysteretic curves) of experimental studies conducted on square reinforced concrete columns. To this end, the results of 170 different square reinforced concrete column tests have been collected from the existing literature. The parameters thought to affect the plastic hinge length, such as cross-section properties, material properties, axial loading level, confinement of the column and longitudinal reinforcement bars, have been extracted from these 170 tests. Using the experimental test results, regression analyses for determining the plastic hinge length have been separately tested and compared with each other. In addition, the outcome of the mentioned methods for determining the plastic hinge length of reinforced concrete columns has been compared to other methods available in the literature.
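As a minimal sketch of the kind of regression fit the abstract describes (the predictors, coefficients, and data below are entirely synthetic placeholders, not the study's 170 real column tests or its actual parameter set), a least-squares fit of hinge length on a few column parameters could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 170  # same count as the collected tests, data itself is fabricated
h = rng.uniform(300, 600, n)      # section depth (mm), assumed predictor
nu = rng.uniform(0.05, 0.5, n)    # axial load ratio, assumed predictor
rho = rng.uniform(0.01, 0.04, n)  # longitudinal reinforcement ratio

# Hypothetical "true" relation plus experimental scatter, to generate
# a response variable standing in for measured plastic hinge lengths.
Lp = 0.5 * h + 200 * nu + 1500 * rho + rng.normal(0, 10, n)

# Ordinary least squares via the normal equations (intercept + 3 slopes).
X = np.column_stack([np.ones(n), h, nu, rho])
coef, *_ = np.linalg.lstsq(X, Lp, rcond=None)
pred = X @ coef
r2 = 1 - ((Lp - pred) ** 2).sum() / ((Lp - Lp.mean()) ** 2).sum()
```

The fitted coefficients and R² are then what alternative regression variants would be compared on, as the abstract describes for the real test database.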

Keywords: columns, plastic hinge length, regression analysis, reinforced concrete

Procedia PDF Downloads 460
13561 Design, Development and Testing of Polymer-Glass Microfluidic Chips for Electrophoretic Analysis of Biological Sample

Authors: Yana Posmitnaya, Galina Rudnitskaya, Tatyana Lukashenko, Anton Bukatin, Anatoly Evstrapov

Abstract:

An important area of biological and medical research is the study of genetic mutations and polymorphisms that can alter gene function and cause inherited and other diseases. The following methods are used to analyse DNA fragments: capillary electrophoresis, electrophoresis on a microfluidic chip (MFC), mass spectrometry combined with electrophoresis on an MFC, and hybridization assays on microarrays. Electrophoresis on an MFC allows small sample volumes to be analysed with high speed and throughput. Soft lithography in polydimethylsiloxane (PDMS) was chosen for the rapid fabrication of MFCs. A master form of silicon and photoresist SU-8 2025 (MicroChem Corp.) was created for the formation of micro-sized structures in PDMS. A universal topology combining a T-injector and a simple cross was selected for the electrophoretic separation of the sample. K8 glass and PDMS Sylgard® 184 (Dow Corning Corp.) were used for the fabrication of the MFCs. Electroosmotic flow (EOF) plays an important role in the electrophoretic separation of the sample; therefore, estimating the magnitude of EOF and the ways of regulating it are of interest for the development of new methods of electrophoretic separation of biomolecules. The following methods of surface modification were chosen to change the EOF: high-frequency (13.56 MHz) plasma treatment in oxygen and argon at low pressure (1 mbar); a 1% aqueous solution of polyvinyl alcohol; and a 3% aqueous solution of Kolliphor® P 188 (Sigma-Aldrich Corp.). The electroosmotic mobility was evaluated by the method of Huang X. et al., using a borate buffer. The influence of the physical and chemical treatment methods on the wetting properties of the PDMS surface was monitored by the sessile drop method. The most effective way of modifying the MFC surface, from the standpoint of obtaining both the smallest contact angle and the smallest EOF, was treatment with the aqueous solution of Kolliphor® P 188.
This method of modification was selected for the treatment of the channels of MFCs used to separate a mixture of fluorescently labeled oligonucleotides with chain lengths of 10, 20, 30, 40 and 50 nucleotides. Electrophoresis was performed on the MFAS-01 device (IAI RAS, Russia) at a separation voltage of 1500 V. A 6% solution of polydimethylacrylamide with the addition of 7M carbamide was used as the separation medium. The separation time of the components of the mixture was determined from electropherograms: ~275 s for the untreated MFC and ~220 s for the MFC treated with the Kolliphor® P 188 solution. The study of physical-chemical methods of surface modification of MFCs allowed the most effective way of reducing EOF to be chosen: modification with an aqueous solution of Kolliphor® P 188. In this case, the separation time of the oligonucleotide mixture decreased by about 20%. Further optimization of the channel modification method will decrease the separation time of the sample and increase the throughput of the analysis.

Keywords: electrophoresis, microfluidic chip, modification, nucleic acid, polydimethylsiloxane, soft lithography

Procedia PDF Downloads 398
13560 Trace Elements in Yerba Mate from Brazil and Argentina by Inductively Coupled Plasma Mass Spectrometry

Authors: F. V. Matta, C. M. Donnelly, M. B. Jaafar, N. I. Ward

Abstract:

‘Yerba mate’ (Ilex paraguariensis) is a plant native to South America, with Argentina and Brazil as the main producers. ‘Mate’ is widely consumed in Argentina, Brazil, Uruguay and Paraguay, most popularly as an infusion of dried leaves drunk from a traditional cup, as roasted material in tea bags, or as iced tea infusions. There are many alleged health benefits resulting from mate consumption, even though conclusive research published in the international literature is lacking. The main objective of this study was to develop and evaluate the sample preparation and instrumental analysis stages involved in the determination of trace elements in yerba mate using inductively coupled plasma mass spectrometry (ICP-MS). Specific details will be presented on the methods of sample digestion and the validation of the ICP-MS analysis, especially polyatomic ion correction and the matrix effects associated with the complex medium of mate. More importantly, mate produced in Brazil and Argentina is subject to different soil conditions and methods of cultivation and production, especially for loose leaves and tea bags. The highest concentrations for loose mate leaf were (mg/kg, dry weight): aluminium (253.6–506.9 for Brazil (Bra); 230.0–541.8 for Argentina (Arg)), manganese (378.3–762.6 Bra; 440.8–879.9 Arg), iron (32.5–85.7 Bra; 28.2–132.9 Arg), zinc (28.2–91.1 Bra; 39.1–92.3 Arg), nickel (2.2–4.3 Bra; 2.9–10.8 Arg) and copper (4.8–9.1 Bra; 4.3–9.2 Arg), with lower levels of chromium, cobalt, selenium, molybdenum, cadmium, lead and arsenic. Elemental levels in mate leaf consumed in tea bags were found to be higher, mainly because only leaf material is used (as opposed to leaf and twig for the loose-packed product). Further implications of the way yerba mate is consumed will be presented, including the different infusion methods used in Brazil and Argentina.
This research provides, for the first time, an extensive evaluation of mate products from both countries and the possible implications of specific trace elements, especially Mn, Fe, Se, Cu and Zn, for the various health claims associated with consuming yerba mate.

Keywords: beverage analysis, ICP-MS, trace elements, yerba mate

Procedia PDF Downloads 216
13559 New Bioactive Compounds from Two Chrysanthemum Saharian Species (Asteraceae) Growing in Algeria

Authors: Zahia Kabouche, Ouissem Gherboudj, Naima Boutaghane, Ahmed Kabouche, Laurence Voutquenne-Nazabadioko

Abstract:

Chrysanthemum herbs (Asteraceae) are extensively used as food additives and in folk medicine. Anti-cancer, anti-human immunodeficiency virus type 1 (HIV-1), anti-inflammatory, antinociceptive and antiproliferative activities, as well as antioxidant effects, have been reported for Chrysanthemum species. We report the isolation and identification of flavonoids and of new and known terpenoids from the endemic species C. macrocarpum and C. deserticolum “guertoufa”, used in the Algerian Sahara as tea drinks and in “couscous” and “chorba” soups. The structures of the isolated compounds were established by 1-D and 2-D homo- and hetero-nuclear NMR (1H, 13C, COSY, HSQC, HMBC, and NOESY), mass spectrometry, UV spectroscopy and comparison with literature data. C. deserticolum extracts were tested for antioxidant activity by four methods, namely ABTS•+ and DPPH• scavenging, CUPRAC, and ferrous-ion chelating activity. Anti-inflammatory, antinociceptive, antiproliferative and antioxidant activities of C. macrocarpum extracts and the isolated compounds are also reported here.

Keywords: Chrysanthemum macrocarpum, C. deserticolum, flavonoids, terpenoids, antioxidant, anti-inflammatory, anti-proliferative

Procedia PDF Downloads 324
13558 A Mixed Methods Study: Evaluation of Experiential Learning Techniques throughout a Nursing Curriculum to Promote Empathy

Authors: Joan Esper Kuhnly, Jess Holden, Lynn Shelley, Nicole Kuhnly

Abstract:

Empathy serves as a foundational nursing principle inherent in the nurse’s ability to form the relationships from which to care for patients. Evidence supports including empathy in nursing and healthcare education, but there are limited data on which methods are effective for doing so. A growing body of evidence supports experiential and interactive learning methods as effective ways for students to gain insight and perspective from a personalized experience. The purpose of this project is to evaluate learning activities designed to promote the attainment of empathic behaviors across five levels of a nursing curriculum. Quantitative analysis will be conducted on data from pre- and post-learning activities using the Toronto Empathy Questionnaire. The main hypothesis, that simulation learning activities will increase empathy, will be examined using a repeated measures analysis of variance (ANOVA) on pre- and post-test Toronto Empathy Questionnaire scores for three simulation activities (stroke, poverty, dementia). Pearson product-moment correlations will be conducted to examine the relationships between continuous demographic variables, such as age, credits earned, and years practicing, and the dependent variable of interest, post-test Toronto Empathy Questionnaire scores. Krippendorff’s method of content analysis will be conducted to quantify the incidence of empathic responses. The researchers will use Colaizzi’s descriptive phenomenological method to describe the students’ simulation experience and understand its impact on caring and empathy behaviors, employing bracketing to maintain objectivity. The results will be presented in answer to multiple research questions, and the discussion will relate the results to educational pedagogy in the nursing curriculum regarding the attainment of empathic behaviors.

Keywords: curriculum, empathy, nursing, simulation

Procedia PDF Downloads 104
13557 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing the larger crystals required by X-ray crystallography is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based, fiber-coupled camera brings additional challenges: the noise inherently introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor results in a poor signal-to-noise ratio and requires relatively high, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem.
The K3 is an electron counting detector optimized for low-dose applications (such as structural biology cryo-EM), while Stela is a counting electron detector optimized for diffraction applications, with high speed and high dynamic range. Lastly, the data collection workflow, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, is another challenge of the 3DED technique. Traditionally, this has all been done manually, or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow acquisitions to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Å to better than 1 Å) for protein and small molecule structure determination. We will also show how Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, DigitalMicrograph, proteins, small molecules

Procedia PDF Downloads 90
13556 A Clustering-Based Approach for Weblog Data Cleaning

Authors: Amine Ganibardi, Cherif Arab Ali

Abstract:

This paper addresses the data cleaning issue as part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users’ clicks and the underlying user-agents’ hits. As Web Usage Mining is interested in end-users’ behavior, user-agents’ hits are regarded as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: (i) a server records requests interlaced in sequential order regardless of their source or type, and (ii) website resources may be set up as requestable interchangeably by end-users and user-agents. Current methods are content-centric, based on heuristics that filter relevant/irrelevant items in terms of cleaning attributes such as the filetype extensions of website resources, the resources pointed to by hyperlinks/URIs, HTTP methods, user-agents, etc. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: (i) resources may be set up as clickable by end-users regardless of their type, (ii) website resources may be indexed by frame names without filetype extensions, and (iii) web contents are generated and cancelled differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes. It is insensitive to logging content and does not need extra-weblog data. The statistical property used captures the structure of the logging feature generated by webpage requests in terms of clicks and hits.
Since a webpage consists of a single URI and several components, this feature results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters based on the application of an appropriate distance to the frequency matrix at the requested and referring resources levels. As the clicks-to-hits ratio is single to multiple, the clicks cluster is the smaller one in terms of number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, selected on the basis of average linkage and maximum inter-cluster inertia, always results in two clusters. The evaluation of the smaller cluster, referred to as the clicks cluster, in terms of confusion matrix indicators results in a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, resulted in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web designs without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine the dependent analysis.
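A toy sketch of this cleaning idea: cluster log entries into two groups by average-linkage agglomerative clustering and treat the smaller cluster as end-user clicks. Here a Manhattan (city-block) distance stands in for the Gower distance the paper uses, and the small frequency matrix is fabricated for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows = resources seen in the log; columns = two illustrative frequency
# features (times requested, times appearing as referrer). Page URIs
# (clicks) are requested rarely but referred to often; embedded components
# (hits) show the opposite pattern.
freq = np.array([
    [1, 9], [1, 8],            # two page URIs: few requests, many referrals
    [7, 0], [8, 1], [9, 0],    # embedded resources: many requests
    [8, 0], [7, 1],
])

# Average-linkage hierarchical agglomerative clustering, cut at 2 clusters.
Z = linkage(pdist(freq, metric="cityblock"), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# The clicks cluster is identified as the one with fewer members.
sizes = {lab: int((labels == lab).sum()) for lab in set(labels)}
clicks_label = min(sizes, key=sizes.get)
clicks_rows = np.where(labels == clicks_label)[0]
```

With real logfiles the matrix would span all requested and referring resources, and a mixed-type distance such as Gower would replace the city-block metric.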

Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data

Procedia PDF Downloads 164
13555 Preliminary Performance of a Liquid Oxygen-Liquid Methane Pintle Injector for Thrust Variations

Authors: Brunno Vasques

Abstract:

Due to their non-toxic nature and high performance in terms of vacuum specific impulse and density specific impulse, the combination of liquid oxygen and liquid methane has been identified as a promising option for future space vehicle systems. Applications requiring throttling capability include specific missions such as rendezvous, planetary landing and de-orbit, as well as weapon systems. One key challenge in throttling liquid rocket engines is maintaining an adequate pressure drop across the injection elements, which is necessary to provide good propellant atomization and mixing as well as system stability. The potential scalability of pintle injectors, their great suitability to throttling, and their inherent combustion stability characteristics have led to investigations using a variety of propellant combinations, including liquid oxygen/hydrogen and fluorine-oxygen/methane. Presented here are the preliminary performance and heat transfer data obtained during hot-fire testing of a pintle injector running on liquid oxygen and liquid methane propellants. The injector design selected for this purpose is a multi-configuration building-block version with replaceable injection elements, providing the flexibility to accommodate hardware modifications with minimum difficulty. On the basis of single-point runs and the use of a copper/nickel segmented calorimetric combustion chamber with associated transient temperature measurements, the characteristic velocity efficiency, injector footprint and heat fluxes could be established for the first proposed pintle configuration as a function of injection velocity and momentum ratios. A description of the test bench is presented, as well as a discussion of irregularities encountered during testing, such as excessive heat flux into the pintle tip under certain operating conditions.

Keywords: green propellants, hot-fire performance, rocket engine throttling, pintle injector

Procedia PDF Downloads 321
13554 Safety of Ports, Harbours, Marine Terminals: Application of Quantitative Risk Assessment

Authors: Dipak Sonawane, Sudarshan Daga, Somesh Gupta

Abstract:

Quantitative risk assessment (QRA) is a precise and consistent approach to defining the likelihood, consequence and severity of a major incident/accident. A variety of hazardous cargoes in bulk, such as hydrocarbons and flammable/toxic chemicals, are handled at ports. It is well known that most of these operations are hazardous, having the potential to damage property, cause injury or loss of life and, in some cases, threaten environmental damage. In order to ensure adequate safety for life, the environment and property, the application of scientific methods such as QRA is inevitable. By means of these methods, comprehensive hazard identification, risk assessment and appropriate implementation of risk control measures can be carried out. In this paper, the authors, based on their extensive experience in risk analysis for ports and harbours, demonstrate how QRA can be used in practice to minimize and contain risk at tolerable levels. A specific case involving the unloading of hydrocarbon at a port is presented. The exercise provides confidence that the QRA method, as proposed by the authors, can be used appropriately for the identification of hazards and the risk assessment of ports and terminals.
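A hedged sketch of one standard QRA building block mentioned in the keywords, individual risk: the sum over accident scenarios of event frequency times conditional probability of fatality. The scenarios, frequencies, fatality probabilities, and tolerability criterion below are invented for illustration, not taken from the paper's case study:

```python
# Hypothetical scenario list for a hydrocarbon unloading operation.
scenarios = [
    {"name": "hose rupture, pool fire",    "freq_per_yr": 1e-4, "p_fatal": 0.3},
    {"name": "tank overfill, flash fire",  "freq_per_yr": 5e-6, "p_fatal": 0.6},
]

# Individual risk at a fixed location: sum of frequency x P(fatality | event).
individual_risk = sum(s["freq_per_yr"] * s["p_fatal"] for s in scenarios)

# Compare against an assumed tolerability criterion (value is illustrative;
# actual criteria vary by regulator and exposed population).
criterion = 1e-4  # fatalities per year
tolerable = individual_risk < criterion
```

Societal risk would extend this by pairing each scenario's frequency with the number of people affected, yielding the F-N curve used to judge multi-fatality events.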

Keywords: quantitative risk assessment, hazard assessment, consequence analysis, individual risk, societal risk

Procedia PDF Downloads 69