Search results for: combinatorial testing

2471 Speech Recognition Performance by Adults: A Proposal for a Battery for Marathi

Authors: S. B. Rathna Kumar, Pranjali A Ujwane, Panchanan Mohanty

Abstract:

The present study aimed to develop a battery for assessing speech recognition performance by adults in Marathi. A total of four word lists were developed by considering word frequency, word familiarity, words in common use, and phonemic balance. Each word list consists of 25 words (15 monosyllabic words in CVC structure and 10 monosyllabic words in CVCV structure). Equivalence analysis and performance-intensity function testing were carried out using the four word lists on a total of 150 native speakers of Marathi belonging to different regions of Maharashtra (Vidarbha, Marathwada, Khandesh and Northern Maharashtra, Pune, and Konkan). The subjects were further equally divided into five groups based on the above-mentioned regions. It was found that there was no significant difference (p > 0.05) in speech recognition performance between groups for each word list and between word lists for each group. Hence, the four word lists developed were equally difficult for all the groups and can be used interchangeably. The performance-intensity (PI) function curve showed a semi-linear function, and the mean slopes of the linear portions of the curve indicated an average increase in word recognition score of 4.64%, 4.73%, 4.68%, and 4.85% per dB for list 1, list 2, list 3, and list 4 respectively. Although there are no data available on speech recognition tests for adults in Marathi, most of the findings of the study are in line with the findings of research reports on other languages. The four word lists, thus developed, were found to have sufficient reliability and validity in assessing speech recognition performance by adults in Marathi.

Keywords: speech recognition performance, phonemic balance, equivalence analysis, performance-intensity function testing, reliability, validity

Procedia PDF Downloads 346
2470 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel

Abstract:

Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Soil testing is necessary before construction can commence, requiring construction companies and contractors to conduct it periodically. The focus of this study is on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical (geo-)database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and redundancy of the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m²). Various interpolation techniques were cross-validated to ensure information accuracy. This GIS map enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This study highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers.
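
A minimal sketch of the spatial-interpolation step described above, using inverse-distance weighting as a simple stand-in for the Topographic to Raster tool (the borehole coordinates, SPT N-values, and grid extent are illustrative assumptions, not the study's data):

```python
import numpy as np

# Hypothetical borehole locations (x, y in metres) and measured SPT N-values
boreholes = np.array([[100.0, 200.0], [450.0, 380.0], [300.0, 120.0], [520.0, 90.0]])
spt_n = np.array([12.0, 28.0, 19.0, 34.0])

def idw(points, values, grid_xy, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each grid node."""
    d = np.linalg.norm(grid_xy[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)  # nearer boreholes carry more weight
    return (w * values).sum(axis=1) / w.sum(axis=1)

# Regular grid covering the study area; the result can then be contoured in GIS
xs, ys = np.meshgrid(np.linspace(0, 600, 61), np.linspace(0, 400, 41))
grid = np.column_stack([xs.ravel(), ys.ravel()])
estimated_n = idw(boreholes, spt_n, grid).reshape(xs.shape)
print(estimated_n.round(1))
```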

Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-Value, bearing capacity

Procedia PDF Downloads 63
2469 Study of Ageing in the Marine Environment of Bonded Composite Structures by Ultrasonic Guided Waves. Comparison of the Case of a Conventional Carbon-epoxy Composite and a Recyclable Resin-Based Composite

Authors: Hamza Hafidi Alaoui, Damien Leduc, Mounsif Ech Cherif El Kettani

Abstract:

This study is dedicated to the evaluation of the ageing of turbine blades in sea conditions, based on ultrasonic Non-Destructive Testing (NDT) methods. It is being developed within the framework of the European Interreg TIGER project. The Tidal Stream Industry Energiser project, known as TIGER, is the biggest ever Interreg project, driving collaboration and cost reduction through tidal turbine installations in the UK and France. The TIGER project will drive the growth of tidal stream energy to become a greater part of the energy mix, with significant benefits for coastal communities. In the bay of Paimpol-Bréhat (Brittany), different samples of composite material and bonded composite/composite structures have been immersed at the same time near a turbine. The studied samples are either conventional carbon-epoxy composite samples or composite samples based on a recyclable resin (called Recyclamine). One of the objectives of the study is to compare the ageing of the two types of structure. A sample of each structure is picked up every 3 to 6 months, analyzed using ultrasonic guided waves and bulk waves, and compared to reference samples. In order to classify the damage level as a function of time spent under the sea, the measurements have been compared to a rheological model based on the Finite Element Method (FEM). Ageing of the composite material, as well as that of the adhesive, is identified. The aim is to improve the quality of the turbine blade structure in terms of longevity and reduced maintenance needs.

Keywords: non-destructive testing, ultrasound, composites, guided waves

Procedia PDF Downloads 210
2468 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, or certain affinities or interactions) between two vertices. Processing and analyzing these types of data is a major challenge for both the image processing and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been an increasing interest in the investigation of one of the major mathematical tools for signal and image analysis, namely Partial Differential Equation (PDE) and variational methods on graphs. The normalized p-Laplacian operator has been recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator, and the traditional Laplacian operator, which has been extensively used to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations for both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
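
For orientation, the following shows the standard p-harmonious mean-value property from the tug-of-war-with-noise literature (Manfredi, Parviainen and Rossi), together with a plausible graph analogue; the graph form is a sketch assuming an unweighted neighborhood N(v), not necessarily the paper's exact definition:

```latex
% p-harmonious mean-value property in R^n (tug-of-war with noise):
u(x) = \frac{\alpha}{2}\Big(\max_{\overline{B}_\varepsilon(x)} u
       + \min_{\overline{B}_\varepsilon(x)} u\Big)
     + \frac{\beta}{|B_\varepsilon(x)|}\int_{B_\varepsilon(x)} u(y)\,dy
     + o(\varepsilon^2),
\qquad \alpha = \frac{p-2}{p+n},\quad \beta = \frac{n+2}{p+n}.

% Sketch of a graph analogue on the neighborhood N(v) of a vertex v
% (assumed unweighted, for illustration only):
(\Delta_p^{N} u)(v) \approx \frac{\alpha}{2}\Big(\max_{w \sim v} u(w)
       + \min_{w \sim v} u(w)\Big)
     + \frac{\beta}{|N(v)|}\sum_{w \sim v} u(w) \;-\; u(v).
```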

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 497
2467 Influence of Magnetized Water on the Split Tensile Strength of Concrete

Authors: Justine Cyril E. Nunag, Nestor B. Sabado Jr., Jienne Chester M. Tolosa

Abstract:

Concrete has high compressive strength but low tensile strength. The low tensile strength of concrete is regarded as its primary weakness, which is why it is typically reinforced with steel, a material that is resistant to tension. Even with steel, however, cracking can occur. In strengthening concrete, only a few researchers have modified the water to be used in a concrete mix. This study aims to compare the split tensile strength of normal structural concrete to that of concrete prepared with magnetic water and a quick-setting admixture. In this context, magnetic water is defined as tap water that has undergone a magnetic process to become magnetized water. To test the hypothesis that magnetized concrete leads to higher split tensile strength, twenty concrete specimens were made. There were four groups, each with five samples, differentiated by the number of magnetization cycles (0, 50, 100, and 150). The split tensile strength data from the Universal Testing Machine were then analyzed using various statistical models and tests to determine the significant effect of magnetized water. The result showed a moderate (+0.579) but still significant degree of correlation. The researchers also discovered that using magnetic water for 50 cycles did not result in a significant increase in the concrete's split tensile strength, which influenced the analysis of variance. These results suggest that a concrete mix containing magnetic water and a quick-setting admixture alters the typical split tensile strength of normal concrete. Magnetic water has a significant impact on concrete tensile strength, and its hardness property influenced the split tensile strength of the concrete. In addition, a higher number of cycles results in stronger water magnetism. The laboratory test results show that a higher cycle count translates to higher tensile strength.

Keywords: hardness property, magnetic water, quick-setting admixture, split tensile strength, universal testing machine

Procedia PDF Downloads 137
2466 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves

Authors: Dmytro Zubov, Francesco Volponi

Abstract:

In this paper, the D-Wave quantum computing Ising model is used for forecasting positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predict heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (ratio of successful predictions to total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant as compared to other methods. However, the number of identified heat waves is small (only one out of nineteen in this case). The other models, with 2, 4, and 5 qubits, have 20%, 3.8%, and 3.8% accuracy respectively. The presented three-qubit forecast model is applied for prediction of heat waves at five other locations: Aurel Vlaicu, Romania – accuracy is 28.6%; Bratislava, Slovakia – accuracy is 21.7%; Brussels, Belgium – accuracy is 33.3%; Sofia, Bulgaria – accuracy is 50%; Akhisar, Turkey – accuracy is 21.4%. These predictions are not ideal, but not zero. They can be used independently or together with predictions generated by other methods. The loss of human life, as well as environmental, economic, and material damage, from extreme air temperatures could be reduced if some of the heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
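
For reference, the general form of the Ising energy that a D-Wave annealer minimizes is shown below; how the paper maps daily temperatures onto the biases h_i and couplings J_ij is its fitted parameterization and is not reproduced here:

```latex
% Ising energy over spins s_i \in \{-1, +1\}; each qubit corresponds
% to one day of historical data in the forecast models above.
E(\mathbf{s}) = \sum_{i} h_i\, s_i + \sum_{i<j} J_{ij}\, s_i\, s_j .
```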

Keywords: heat wave, D-wave, forecast, Ising model, quantum computing

Procedia PDF Downloads 486
2465 Solid State Drive End to End Reliability Prediction, Characterization and Control

Authors: Mohd Azman Abdul Latif, Erwan Basiron

Abstract:

A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustainable in mass production. The paper will discuss a comprehensive development framework, covering the SSD end to end from design to assembly, in-line inspection, and in-line testing, which can predict and validate product reliability at the early stage of new product development. During the design stage, the SSD will go through an intense reliability margin investigation with focus on assembly process attributes, process equipment control, in-process metrology, and also a forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up a reliability prediction model. Next, for the design validation process, the reliability prediction tool, specifically a solder joint simulator, will be established. The SSDs will be stratified into non-operating and operating tests, with focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs will be subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results will be fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven working, it will move to the monitor phase, whereby Design for Assembly (DFA) rules will be updated. At this stage, the design changes and the process and equipment parameters are in control. Predictable product reliability early in product development will enable on-time sample qualification delivery to customers, optimize product development validation and development resources, and avoid forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier will allow focus on increasing the product margin, which will increase customer confidence in product reliability.

Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control

Procedia PDF Downloads 161
2464 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes on a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model forms with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform TM based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows in a data set. Clusters can be ranked using importance metrics, such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations were plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated, for example, with sociodemographics, habits, and activities. Some are intentionally included to get predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset including 4,090 observations. Results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a part of the observations, such as bootstrap resampling with an appropriate sample size.
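
A minimal sketch of the co-occurrence-graph idea described above, assuming hypothetical basket items and using plain Python plus networkx (the actual platform's internals are not public here):

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Hypothetical observations: each row is a "basket" of co-occurring items
rows = [
    {"smoker", "high_bmi", "low_activity"},
    {"smoker", "high_bmi", "metabolic_syndrome"},
    {"high_bmi", "low_activity", "metabolic_syndrome"},
]

G = nx.Graph()
pair_counts = Counter()
for basket in rows:
    # items in a basket are fully connected, as in the co-occurrence graph
    for a, b in combinations(sorted(basket), 2):
        pair_counts[(a, b)] += 1
        G.add_edge(a, b)

# rank item pairs by co-occurrence frequency (one of the importance metrics)
for (a, b), freq in pair_counts.most_common(3):
    print(f"{a} -- {b}: {freq}")
```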

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 261
2463 Evaluation of the Grammar Questions at the Undergraduate Level

Authors: Preeti Gacche

Abstract:

A considerable part of undergraduate-level English examination papers is devoted to grammar. Hence, the grammar questions in the question papers are evaluated, and the opinions of both students and teachers about them are obtained and analyzed. A grammar test of 100 marks was administered to 43 students to check their performance. The question papers were evaluated by 10 different teachers and their scores compared. The analysis of 38 university question papers reveals that, on average, 20 percent of marks are allotted to grammar. Almost all the grammar topics are tested. Abundant use of grammatical terminology is observed in the questions. Decontextualization, repetition, the possibility of multiple correct answers, and grammatical errors in framing the questions have been observed. Opinions of teachers and students about grammar questions vary in many respects. The students' responses are analyzed medium-wise and sex-wise. The medium of instruction at the school level and the sex of the students are found to play no role as far as interest in the study of grammar is concerned. English-medium students solve grammar questions intuitively, whereas non-English-medium students are required to recollect the rules of grammar. Prepositions, verbs, articles, and modal auxiliaries are found to be easy topics for most students, whereas the use of conjunctions is the most difficult topic. Out-of-context items of grammar are difficult to answer in comparison with contextualized items; hence, contextualized texts to test grammar items are desirable. No formal training in setting questions is imparted to teachers by competent authorities like the university; they need to be trained in testing. Statistically, there is no significant change in score with a change in rater in the testing of grammar items. There is scope for future improvement: the question papers need to be evaluated, and feedback from students and teachers obtained, toward that end.

Keywords: context, evaluation, grammar, tests

Procedia PDF Downloads 340
2462 Predicting Foreign Direct Investment of IC Design Firms from Taiwan to East and South China Using Lotka-Volterra Model

Authors: Bi-Huei Tsai

Abstract:

This work explores the inter-region investment behaviors of the integrated circuit (IC) design industry from Taiwan to China using the amount of foreign direct investment (FDI). Given the mutual dependence among different IC design industrial locations, the Lotka-Volterra model is utilized to explore the FDI interactions between South and East China. Effects of inter-regional collaborations on FDI flows into China are considered. Evolutions of FDIs into South China for the IC design industry significantly inspire the subsequent FDIs into East China, while FDIs into East China for Taiwan's IC design industry significantly hinder the subsequent FDIs into South China. The supply chain along the IC industry includes IC design, manufacturing, packaging, and testing enterprises. The IC manufacturing, packaging, and testing industries depend on the IC design industry to gain advanced business benefits. The FDI amount from Taiwan's IC design industry into East China is the greatest among the four regions: North, East, Mid-West, and South China. The FDI amount from Taiwan's IC design industry into South China is the second largest. If IC design houses buy more equipment and bring more capital into South China, those in East China will be under pressure to undertake more FDIs into East China to maintain the leading-position advantages of the supply chain in East China. On the other hand, as the FDIs in East China rise, the FDIs in South China will successively decline, since capital has concentrated in East China. The Lotka-Volterra model's prediction of FDI trends is accurate because the industrial interactions between the two regions are included. Finally, this work confirms that the FDI flows cannot reach a stable equilibrium point, so the FDI inflows into East and South China will expand in the future.
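
For orientation, a competitive Lotka-Volterra system of the kind invoked above can be written as follows; the variable assignment (x for East China FDI, y for South China FDI) and the signs of the interaction coefficients are an illustrative reading of the reported asymmetry, not the paper's fitted model:

```latex
% Two-species Lotka-Volterra interaction between regional FDI stocks:
\frac{dx}{dt} = x\,\big(a_1 + b_1 x + c_1 y\big), \qquad
\frac{dy}{dt} = y\,\big(a_2 + b_2 y + c_2 x\big).
% c_1 > 0: South China inflows stimulate East China;
% c_2 < 0: East China inflows hinder South China.
```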

Keywords: Lotka-Volterra model, foreign direct investment, competition, equilibrium analysis

Procedia PDF Downloads 347
2461 In vitro Skin Model for Enhanced Testing of Antimicrobial Textiles

Authors: Steven Arcidiacono, Robert Stote, Erin Anderson, Molly Richards

Abstract:

There are numerous standard test methods for antimicrobial textiles that measure activity against specific microorganisms. However, these results often do not translate to the performance of treated textiles when worn by individuals. Standard test methods apply a single target organism, grown under optimal conditions, to a textile, then recover the organism to quantitate and determine activity; this does not reflect the actual performance environment, which consists of polymicrobial communities in less-than-optimal conditions, or the interaction of the textile with the skin substrate. Here we propose the development of an in vitro skin model method to bridge the gap between lab testing and wear studies. The model will consist of a defined polymicrobial community of 5-7 commensal microbes simulating the skin microbiome, seeded onto a solid tissue platform to represent the skin. The protocol would entail adding a non-commensal test organism of interest to the defined community and applying a textile sample to the solid substrate. Following incubation, the textile would be removed and the organisms recovered, which would then be quantitated to determine antimicrobial activity. Important parameters to consider include identification and assembly of the defined polymicrobial community, growth conditions that allow the establishment of a stable community, and the choice of skin surrogate. This model could answer the following questions: 1) Is the treated textile effective against the target organism? 2) How is the defined community affected? 3) Does the textile cause unwanted effects toward the skin simulant? The proposed model would determine activity under conditions comparable to the intended application and provide expanded knowledge relative to current test methods.

Keywords: antimicrobial textiles, defined polymicrobial community, in vitro skin model, skin microbiome

Procedia PDF Downloads 125
2460 Qualitative Analysis of Current Child Custody Evaluation Practices

Authors: Carolyn J. Ortega, Stephen E. Berger

Abstract:

The role of the custody evaluator is perhaps one of the most controversial and risky endeavors in clinical practice. Complaints filed with licensing boards regarding a child-custody evaluation constitute the second most common reason for such filings. Although the evaluator is expected to answer for the family-law court what is in the "best interest of the child," there is a lack of clarity on how to establish this in any empirically validated manner. Hence, practitioners must contend with a nebulous framework in formulating their methodological procedures, which inherently places them at risk in an already litigious context. This study sought to qualitatively investigate patterns of practice among doctoral practitioners conducting child custody evaluations in the area of Southern California. Ten psychologists were interviewed who devoted between 25% and 100% of their California private practice to custody work. All held Ph.D. degrees, with a range of eight to 36 years of experience in custody work. Semi-structured interviews were used to investigate assessment practices, adherence to guidelines, risk management, and qualities of evaluators. Forty-three Specific Themes were identified using Interpretive Phenomenological Analysis (IPA). Seven Higher Order Themes clustered on salient factors such as use of Ethics, Law, and Guidelines; Parent Variables; Child Variables; Psychologist Variables; Testing; Literature; and Trends. Evaluators were aware of the ever-present reality of a licensure complaint and thus presented idiosyncratic descriptions of risk management considerations. Ambiguity about quantifying and validly tapping parenting abilities was also reviewed. Findings from this study suggested a high reliance on unstructured and observational methods in child custody practices.

Keywords: forensic psychology, psychological testing, assessment methodology, child custody

Procedia PDF Downloads 272
2459 Modeling of Thermally Induced Acoustic Emission Memory Effects in Heterogeneous Rocks with Consideration for Fracture Development

Authors: Vladimir A. Vinnikov

Abstract:

The paper proposes a model of an inhomogeneous rock mass with an initially random distribution of microcracks on mineral grain boundaries. It describes the behavior of cracks in a medium under the effect of a thermal field, with the medium heated instantaneously to a predetermined temperature. Crack growth occurs according to the concepts of fracture mechanics, provided that the stress intensity factor K exceeds the critical value Kc. The modeling of thermally induced acoustic emission memory effects is based on the assumption that every event of crack nucleation or crack growth caused by heating is accompanied by a single acoustic emission event. Parameters of the thermally induced acoustic emission memory effect produced by cyclic heating and cooling (with the temperature amplitude increasing from cycle to cycle) were calculated for several rock texture types (massive, banded, and disseminated). The study substantiates the adaptation of the proposed model to humidity interference with the thermally induced acoustic emission memory effect. The influence of humidity on the thermally induced acoustic emission memory effect in quasi-homogeneous and banded rocks is estimated. It is shown that such modeling allows the structure and texture of rocks to be taken into account, and the influence of interference factors on the distinctness of the thermally induced acoustic emission memory effect to be estimated. The numerical modeling can be used to obtain information about past thermal impacts on rocks and to determine the degree of rock disturbance by means of non-destructive testing.

Keywords: degree of rock disturbance, non-destructive testing, thermally induced acoustic emission memory effects, structure and texture of rocks

Procedia PDF Downloads 256
2458 A Retrospective Cross-Sectional Study on the Prevalence and Factors Associated with Virological Non-Suppression among HIV-Positive Adult Patients on Antiretroviral Therapy in Woliso Town, Oromia, Ethiopia

Authors: Teka Haile, Behailu Hawulte, Solomon Alemayehu

Abstract:

Background: HIV virological failure still remains a problem in HIV/AIDS treatment and care. This study aimed to describe the prevalence of, and identify the factors associated with, viral non-suppression among HIV-positive adult patients on antiretroviral therapy in Woliso Town, Oromia, Ethiopia. Methods: A retrospective cross-sectional study was conducted among 424 HIV-positive patients attending antiretroviral therapy (ART) in Woliso Town during the period from August 25, 2020 to August 30, 2020. Data collected from patient medical records were entered into Epi Info version 2.3.2.1 and exported to SPSS version 21.0 for analysis. Logistic regression analysis was done to identify factors associated with viral load non-suppression, and the statistical significance of odds ratios was declared using a 95% confidence interval and p-value < 0.05. Results: A total of 424 patients were included in this study. The mean age (± SD) of the study participants was 39.88 (± 9.995) years. The prevalence of HIV viral load non-suppression was 55 (13.0%), with 95% CI (9.9-16.5). A second-line ART treatment regimen (Adjusted Odds Ratio (AOR) = 8.98, 95% Confidence Interval (CI): 2.64, 30.58) and routine viral load testing (AOR = 0.01, 95% CI: 0.001, 0.02) were significantly associated with virological non-suppression. Conclusion: Virological non-suppression was high, which hinders the achievement of the third global 95 target. The second-line regimen and routine viral load testing were significantly associated with virological non-suppression. This suggests the need to assess the effectiveness of antiretroviral drugs for epidemic control. It also clearly shows the need to decentralize third-line ART treatment for those patients in need.

Keywords: virological non-suppression, HIV-positive, ART, Woliso town, Ethiopia

Procedia PDF Downloads 129
2457 Energy Retrofitting Application Research to Achieve Energy Efficiency in Hot-Arid Climates in Residential Buildings: A Case Study of Saudi Arabia

Authors: A. Felimban, A. Prieto, U. Knaack, T. Klein

Abstract:

This study aims to present an overview of recent research in building energy-retrofitting strategy applications and to analyze it within the context of hot-arid climate regions, represented in this case study by the Kingdom of Saudi Arabia. The main goal of this research is to carry out an analytical study of recent research approaches, show where the primary gap in knowledge exists, and outline which possible strategies are available that can be applied in future research. The paper focuses on energy retrofitting strategies at the building envelope level and is limited to specific measures within the hot-arid climate region. Scientific articles were carefully chosen when they met the search criteria, such as retrofitting, energy-retrofitting, hot-arid, energy efficiency, and residential buildings, which helped narrow the research scope. The papers were then explored through descriptive analysis, and the results were justified within the Saudi context, in order to draw an overview of future opportunities in the field of study over the last two decades. The analysis of recent research confirmed a shortage of work investigating the actual application and testing of newly introduced energy efficiency measures, a lack of energy-cost feasibility studies, and a lack of public awareness. In terms of research methods, it was found that simulation software was the major instrument used in energy retrofitting application research. The main knowledge gaps identified included the need for research on actual application testing, on the feasibility of applying energy retrofitting strategies, on which strategies should be applied first, and on user acceptance of the developed scenarios.

Keywords: energy efficiency, energy retrofitting, hot arid, Saudi Arabia

Procedia PDF Downloads 115
2456 A Handheld Light Meter Device for Methamphetamine Detection in Oral Fluid

Authors: Anindita Sen

Abstract:

Oral fluid is a promising diagnostic matrix for drugs of abuse compared to urine and serum. Detection of methamphetamine in oral fluid would pave the way for the easy evaluation of impairment in drivers during roadside drug testing, as well as ensure safe working environments by facilitating the evaluation of impairment in employees at workplaces. A membrane-based, point-of-care (POC) friendly pre-treatment technique has been developed, which aided the elimination of interferences caused by salivary proteins and facilitated the demonstration of methamphetamine detection in saliva using a gold-nanoparticle-based colorimetric aptasensor platform. It was found that the colorimetric response in saliva was always suppressed owing to matrix effects. By navigating these challenging interference issues, we were able to detect methamphetamine at nanomolar levels in saliva, offering immense promise for the translation of these platforms to on-site diagnostic systems. This subsequently motivated the development of a handheld portable light meter device that can reliably transduce the aptasensor's colorimetric response into absorbance, facilitating quantitative detection of analyte concentrations on-site. This is crucial given the prevalent unreliability and sensitivity problems of conventional drug testing kits. The fabricated light meter device's response was validated against a standard UV-Vis spectrometer to confirm reliability. The portable and cost-effective handheld detector features sensitivity comparable to the well-established UV-Vis benchtop instrument, and the easy-to-use device could potentially serve as a prototype for a commercial device in the future.
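
For context, converting a measured light intensity into absorbance presumably follows the standard Beer-Lambert relation (the symbol assignment here is generic, not the device's documented calibration):

```latex
% Transmitted intensity I relative to a blank I_0 gives absorbance A,
% proportional to chromophore concentration c for path length l and
% molar absorptivity \varepsilon:
A = -\log_{10}\!\left(\frac{I}{I_0}\right) = \varepsilon\, l\, c .
```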

Keywords: aptasensors, colorimetric gold nanoparticle assay, point-of-care, oral fluid

Procedia PDF Downloads 30
2455 Enhancing the Interpretation of Group-Level Diagnostic Results from Cognitive Diagnostic Assessment: Application of Quantile Regression and Cluster Analysis

Authors: Wenbo Du, Xiaomei Ma

Abstract:

With the empowerment of Cognitive Diagnostic Assessment (CDA), various domains of language testing and assessment have been investigated to dig out more diagnostic information. What is noticeable is that most of the extant empirical CDA-based research puts much emphasis on individual-level diagnosis, with very few studies concerned about learners' group-level performance. Even though personalized diagnostic feedback is the unique feature that differentiates CDA from other assessment tools, group-level diagnostic information cannot be overlooked, in that it might be more practical in a classroom setting. Additionally, the group-level diagnostic information obtained via current CDA often results in a "flat pattern", that is, mastery or non-mastery of all tested skills accounts for the two highest proportions. In that case, the outcome does not offer much more benefit than the original total score. To address these issues, the present study applies cluster analysis for group classification and quantile regression analysis to pinpoint learners' performance at different proficiency levels (beginner, intermediate, and advanced), and thus to enhance the interpretation of the CDA results extracted from a group of EFL learners' reading performance on a diagnostic reading test designed by the PELDiaG research team from a key university in China. The results show that the EM method in cluster analysis yields more appropriate classification results than CDA, and that quantile regression analysis pictures more insightful characteristics of learners with different reading proficiencies. The findings are helpful and practical for instructors refining an EFL reading curriculum and instructional plans tailored to the group classification results and quantile regression analysis. Meanwhile, these statistical methods could also make up for the deficiencies of CDA and push forward the development of language testing and assessment in the future.
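
A minimal sketch of the two statistical steps named above, using scikit-learn's GaussianMixture (an EM-based clusterer) and QuantileRegressor on synthetic data (the predictor, score distribution, and group count are illustrative assumptions, not the study's dataset):

```python
import numpy as np
from sklearn.mixture import GaussianMixture        # EM-based clustering
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
# Synthetic data: one diagnosed subskill measure x and a reading score y
x = rng.uniform(0, 1, (300, 1))
y = 40 + 30 * x[:, 0] + rng.normal(0, 8 + 10 * x[:, 0])

# EM clustering into three proficiency groups
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(
    np.column_stack([x[:, 0], y]))
print("group sizes:", np.bincount(labels))

# Quantile regression at the 25th/50th/75th percentiles to profile
# beginner / intermediate / advanced performance bands
for q in (0.25, 0.50, 0.75):
    model = QuantileRegressor(quantile=q, alpha=0.0).fit(x, y)
    print(f"q={q}: intercept={model.intercept_:.2f}, slope={model.coef_[0]:.2f}")
```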

Keywords: cognitive diagnostic assessment, diagnostic feedback, EFL reading, quantile regression

Procedia PDF Downloads 138
2454 A Computational Framework for Decoding Hierarchical Interlocking Structures with SL Blocks

Authors: Yuxi Liu, Boris Belousov, Mehrzad Esmaeili Charkhab, Oliver Tessmann

Abstract:

This paper presents a computational solution for designing reconfigurable interlocking structures that are fully assembled with SL blocks. Formed from S-shaped and L-shaped tetracubes, the SL block is a specific type of interlocking puzzle. Analogous to molecular self-assembly, the aggregation of SL blocks builds a reversible, hierarchical, and discrete system in which a single module can be replicated numerous times to compose semi-interlocking components that further align, wrap, and braid around each other to form complex high-order aggregations. These aggregations can be disassembled and reassembled, responding dynamically to design inputs and changes with a unique capacity for reconfiguration. To use these aggregations as architectural structures, we developed computational tools that automate the configuration of SL blocks based on architectural design objectives. There are three critical phases in our work. First, we revisit the hierarchy of the SL block system and devise a top-down design strategy. From this, we pose two key questions: 1) How can 3D polyominoes be translated into an SL block assembly? 2) How can desired voxelized shapes be decomposed into a set of 3D polyominoes with interlocking joints? These two questions can be considered as the Hamiltonian path problem and the 3D polyomino tiling problem, respectively. We then derive our solution to each of them based on two methods. The first method is to construct the optimal closed path from an undirected graph built from the voxelized shape and translate the node sequence of the resulting path into the assembly sequence of SL blocks. The second approach describes the interlocking relationships of 3D polyominoes as a joint connection graph. Lastly, we formulate the desired shapes and leverage our methods to achieve their reconfiguration at different levels. We show that our computational strategy facilitates the efficient design of hierarchical interlocking structures with a self-replicating geometric module.
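
A minimal sketch of the first method's closed-path step, assuming a face-adjacency voxel graph and a plain backtracking search (the toy shape and the networkx representation are illustrative; the paper's own solver and optimality criterion are not reproduced):

```python
import networkx as nx

def hamiltonian_cycle(G):
    """Backtracking search for a closed path visiting every node once
    (exponential worst case; fine for small voxel graphs)."""
    nodes = list(G.nodes)
    start = nodes[0]

    def extend(path, visited):
        if len(path) == len(nodes):
            return path if G.has_edge(path[-1], start) else None
        for nxt in G.neighbors(path[-1]):
            if nxt not in visited:
                result = extend(path + [nxt], visited | {nxt})
                if result:
                    return result
        return None

    return extend([start], {start})

# Toy voxelized shape: a 2x2x1 block; voxels are adjacent if they share a face
G = nx.Graph()
voxels = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
for a in voxels:
    for b in voxels:
        if sum(abs(i - j) for i, j in zip(a, b)) == 1:
            G.add_edge(a, b)

print(hamiltonian_cycle(G))  # node sequence -> SL-block assembly order
```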

Keywords: computational design, SL-blocks, 3D polyomino puzzle, combinatorial problem

Procedia PDF Downloads 118
2453 Mechanism of in Vitro Inhibition of Alpha-Amylase, Alpha-Glucosidase by Ethanolic Extracts of Polyalthia Longifolia, Its in Vitro Cytotoxicity on L6, Vero Cell-Lines and Influence of Glucose Uptake by Rat Hemi-Diaphragm

Authors: P. Gayathri, G. P. Jeyanthi

Abstract:

The bark of Polyalthia longifolia is used in the Ayurvedic system of medicine for the management of various ailments, including diabetes mellitus. The bark of P. longifolia was extracted using various polar and non-polar solvents, and the extracts were tested for inhibition of alpha-amylase and alpha-glucosidase, among which the ethanolic extracts were found to be the most potent. The ethanolic extracts of the bark were tested for in vitro inhibition of alpha-amylase using starch as substrate, and of alpha-glucosidase using p-nitrophenyl alpha-D-glucopyranoside as substrate, to establish their in vitro antidiabetic effect. The mechanism of inhibition was determined by the Dixon plot and the Cornish-Bowden plot. The cytotoxic effect of the extract was tested on L6 and Vero cell lines. The extract was partially purified by TLC. The individual effects of the ethanolic extract and the TLC fractions, and their combinatorial effects with insulin and glibenclamide, on glucose uptake by the rat hemi-diaphragm were studied. Results revealed that the ethanolic extracts of Polyalthia longifolia bark exhibited competitive inhibition of alpha-amylase and alpha-glucosidase. The extracts were also found not to be cytotoxic at the highest dose of 1 mg/mL. The glucose uptake study revealed that the extract alone, and when combined with insulin, decreased glucose uptake compared to the insulin control; however, the purified TLC fractions exhibited significantly higher (p < 0.05) glucose uptake by the rat hemi-diaphragm compared to insulin. The study shows various possible mechanisms of the in vitro antidiabetic effect of P. longifolia bark.

Keywords: alpha-amylase, alpha-glucosidase, Dixon plot, Cornish-Bowden plot, L6 and Vero cell lines, glucose uptake, Polyalthia longifolia bark, ethanolic extract, TLC fractions

Procedia PDF Downloads 463
2452 Pros and Cons of Nanoparticles on Health

Authors: Amber Shahi, Ayesha Tazeen, Abdus Samad, Shama Parveen

Abstract:

Nanoparticles (NPs) are tiny particles; according to the International Organization for Standardization, their size falls in the nanometer range (1-100 nm). They show distinct properties that are not shown by larger particles of the same material. NPs are currently being used in different fields due to their unique physicochemical nature, and they are a boon for the medical sciences, environmental sciences, electronics, and textile industries. However, there is growing concern about their potential adverse effects on human health. This poster presents a comprehensive review of the current literature on the pros and cons of NPs for human health, and discusses the various types of interactions of NPs with biological systems. There are a number of beneficial uses of NPs in health and environmental welfare. NPs are very useful in disease diagnosis, antimicrobial action, and the treatment of diseases like Alzheimer's. They can also cross the blood-brain barrier, making them capable of treating brain diseases. Additionally, NPs can target specific tumors and be used for cancer treatment. For environmental health, NPs also act as catalytic converters to reduce pollution in the environment. On the other hand, NPs also have some negative impacts on the human body, such as being cytotoxic and genotoxic. They can also affect the reproductive system, such as the testis and ovary, and sexual behavior. The poster further discusses the routes of exposure to NPs and concludes with a discussion of the current regulations and guidelines on the use of NPs in various applications. It highlights the need for further research and the development of standardized toxicity-testing methods to ensure the safe use of NPs. When using NPs in diagnosis and treatment, their safe concentration in the body should also be taken into consideration. Overall, this poster aims to provide a comprehensive overview of the pros and cons of NPs for human health and to promote awareness and understanding of the potential risks and benefits associated with their use.

Keywords: disease diagnosis, human health, nanoparticles, toxicity testing

Procedia PDF Downloads 70
2451 Hearing Conservation Program for Vector Control Workers: Short-Term Outcomes from a Cluster-Randomized Controlled Trial

Authors: Rama Krishna Supramanian, Marzuki Isahak, Noran Naqiah Hairi

Abstract:

Noise-induced hearing loss (NIHL) is one of the most commonly recorded occupational diseases, despite being preventable. A Hearing Conservation Program (HCP) is designed to protect workers' hearing and prevent them from developing hearing impairment due to occupational noise exposure. However, there is still a lack of evidence regarding the effectiveness of this program. The purpose of this study was to determine the effectiveness of an HCP in preventing or reducing audiometric threshold changes among vector control workers. The study adopts a cluster-randomized controlled trial design, with district health offices as the unit of randomization. Nine district health offices were randomly selected, and 183 vector control workers were randomized to the intervention or control group. The intervention included a safety and health policy, noise exposure assessment, noise control, distribution of appropriate hearing protection devices, a training and education program, and audiometric testing. The control group only underwent audiometric testing. Audiometric threshold changes observed in the intervention group showed improvement in the hearing threshold level for all frequencies except 500 Hz and 8000 Hz for the left ear. The hearing threshold changes ranged from 1.4 dB to 5.2 dB, with the largest improvement at the higher frequencies, mainly 4000 Hz and 6000 Hz. Meanwhile, for the right ear, the mean hearing threshold level remained similar at 4000 Hz and 6000 Hz after 3 months of intervention. The Hearing Conservation Program (HCP) is effective in preserving the hearing of vector control workers involved in fogging activity, as well as in increasing their knowledge of, attitude toward, and practice regarding noise-induced hearing loss (NIHL).

Keywords: adult, hearing conservation program, noise-induced hearing loss, vector control worker

Procedia PDF Downloads 148
2450 The Contribution of Corpora to the Investigation of Cross-Linguistic Equivalence in Phraseology: A Contrastive Analysis of Russian and Italian Idioms

Authors: Federica Floridi

Abstract:

The long tradition of contrastive idiom research has essentially focused on three domains: the comparison of structural types of idioms (e.g., verbal idioms, idioms with noun-phrase structure, etc.), the description of idioms belonging to the same thematic groups (Sachgruppen), and the identification of different types of cross-linguistic equivalents (i.e., full equivalents, partial equivalents, phraseological parallels, non-equivalents). The diastratic, diachronic, and diatopic aspects of the compared idioms, as well as their syntactic, pragmatic, and semantic properties, have been rather ignored. Corpora (both monolingual and parallel) give the opportunity to investigate the actual use of correlating idioms in authentic texts of L1 and L2. Adopting the corpus-based approach, it is possible to draw attention to the frequency of occurrence of idioms, their syntactic embedding, their potential syntactic transformations (e.g., nominalization, passivization, relativization, etc.), their combinatorial possibilities, the variations of their lexical structure, and their connotations in terms of stylistic markedness or register. This paper aims to present the results of a contrastive analysis of Russian and Italian idioms referring to the concepts of 'beginning' and 'end', carried out using the Russian National Corpus and the 'La Repubblica' corpus. Beyond the digital corpora, bilingual dictionaries, like Skvorcova-Majzel', Dobrovol'skaja, Kovalev, and Čerdanceva, as well as monolingual resources, have been consulted. The study has shown that many of the idioms traditionally indicated as cross-linguistic equivalents in bilingual dictionaries cannot be considered correspondents. The findings demonstrate that even those idioms that are formally identical in Russian and Italian, and are presumably derived from the same source (e.g., conceptual metaphor, the Bible, classical mythology, world literature), exhibit differences regarding usage. The ultimate purpose of this article is to highlight that it is necessary to review and improve the existing bilingual dictionaries in the light of the empirical data collected in corpora. The materials gathered in this research can contribute to this end.

Keywords: corpora, cross-linguistic equivalence, idioms, Italian, Russian

Procedia PDF Downloads 134
2449 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic

Authors: Michael Lousis

Abstract:

The most conspicuous and significant errors made by learners during three years of testing of their progress in learning Arithmetic, throughout the development of the Kassel Project in England and Greece, were systematically identified. How retentive these errors were over three years of the officially provided school instruction in Arithmetic in these countries has also been shown. The learners' errors in Arithmetic stemmed from a sample comprising two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students' participation in each testing session in the development of the three-year project, in both domains, Arithmetic and Algebra, simultaneously. Specific teaching practices have been devised and are presented in this study for subverting those learners' errors that were found to be retentive at the level of the nationally provided mathematical education of each country. The invention and development of these proposed teaching practices were founded on the rationale of the theoretical accounts concerning the explanation, prediction, and control of the errors, on conceptual metaphor, and on an analysis that tried to identify the required cognitive components and skills of the specific tasks, in terms of Psychology and Cognitive Science as applied to information processing. The aim of implementing these instructional practices is not only the subversion of these errors but the achievement of mathematical competence, defined as constituted of three elements: appropriate representations, appropriate meaning, and appropriately developed schemata. However, praxis is of paramount importance, because there is no 'real truth' independent of science, and because praxis serves as quality control when it takes the form of a cognitive method.

Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors

Procedia PDF Downloads 306
2448 A 3D Cell-Based Biosensor for Real-Time and Non-Invasive Monitoring of 3D Cell Viability and Drug Screening

Authors: Yuxiang Pan, Yong Qiu, Chenlei Gu, Ping Wang

Abstract:

In the past decade, three-dimensional (3D) tumor cell models have attracted increasing interest in the field of drug screening due to their great advantages in more accurately simulating heterogeneous tumor behavior in vivo. Drug sensitivity testing based on 3D tumor cell models can provide more reliable prediction of in vivo efficacy. The gold-standard fluorescence staining can hardly achieve real-time and label-free monitoring of the viability of 3D tumor cell models. In this study, a micro-groove impedance sensor (MGIS) was specially developed for dynamic and non-invasive monitoring of 3D cell viability. 3D tumor cells were trapped in micro-grooves with opposite gold electrodes for in-situ impedance measurement. A change in the number of live cells causes an inversely proportional change in the impedance magnitude of the entire cell/Matrigel construct, reflecting the proliferation and apoptosis of the 3D cells. It was confirmed that 3D cell viability detected by the MGIS platform is highly consistent with standard live/dead staining. Furthermore, the accuracy of the MGIS platform was demonstrated quantitatively using a 3D lung cancer model and sophisticated drug sensitivity testing. In addition, the parameters of the micro-groove impedance chip processing and measurement experiments were optimized in detail. The results demonstrated that the MGIS-based 3D cell biosensor would be a promising platform to improve the efficiency and accuracy of cell-based anti-cancer drug screening in vitro.

Keywords: micro-groove impedance sensor, 3D cell-based biosensors, 3D cell viability, micro-electromechanical systems

Procedia PDF Downloads 121
2447 A Radiomics Approach to Predict the Evolution of Prostate Imaging Reporting and Data System Score 3/5 Prostate Areas in Multiparametric Magnetic Resonance

Authors: Natascha C. D'Amico, Enzo Grossi, Giovanni Valbusa, Ala Malasevschi, Gianpiero Cardone, Sergio Papa

Abstract:

Purpose: To characterize, through a radiomic approach, the nature of areas classified as PI-RADS (Prostate Imaging Reporting and Data System) 3/5, recognized in multiparametric prostate magnetic resonance with T2-weighted (T2w), diffusion, and perfusion sequences with paramagnetic contrast. Methods and Materials: 24 cases undergoing multiparametric prostate MR and biopsy were admitted to this pilot study. The clinical outcome of the PI-RADS 3/5 areas was established through biopsy, which found 8 malignant tumours. The analysed images were acquired with a Philips Achieva 1.5T machine with a CE-T2-weighted sequence in the axial plane. Semi-automatic tumour segmentation was carried out on the MR images using the 3DSlicer image analysis software. 45 shape-based, intensity-based, and texture-based features were extracted and represented the input for preprocessing. An evolutionary algorithm (a TWIST system based on the KNN algorithm) was used to subdivide the dataset into training and testing sets and to select the features yielding the maximal amount of information. After this preprocessing, 20 input variables were selected, and different machine learning systems were used to develop a predictive model based on a training-testing crossover procedure. Results: The best machine learning system (a three-layer feed-forward neural network) obtained a global accuracy of 90% (80% sensitivity and 100% specificity) with an ROC of 0.82. Conclusion: Machine learning systems coupled with radiomics show promising potential in distinguishing benign from malignant tumours in PI-RADS 3/5 areas.
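
A minimal sketch of the classification step described above, using scikit-learn's MLPClassifier as a stand-in for the three-layer feed-forward network (the synthetic feature matrix, hidden-layer width, and train/test split are illustrative assumptions; the TWIST feature selection is not reproduced):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical stand-in for the 20 selected radiomic features of 24 lesions
X = rng.normal(size=(24, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0: benign, 1: malignant

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)
scaler = StandardScaler().fit(X_tr)

# Three-layer network (input, one hidden layer, output); width assumed
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", clf.score(scaler.transform(X_te), y_te))
```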

Keywords: machine learning, MR prostate, PI-RADS 3, radiomics

Procedia PDF Downloads 176
2446 Friction and Wear, Including Mechanisms, Modeling, Characterization, Measurement and Testing (Bangladesh Case)

Authors: Gor Muradyan

Abstract:

The paper is about friction and wear, including mechanisms, modeling, characterization, measurement, and testing, in a Bangladeshi case study. Bangladesh is a developing country with a large population of approximately 145 million living in a very small territory; buildings are therefore very close to each other. Because the pipelines are very old and the supplied water is often dirty, a lot of projects are ongoing under the ADB. In those projects the contractors use HDD (Horizontal Directional Drilling) machines and Grundoburst machines, which work underground. As the ground in Bangladesh is very sludgy, the machines cannot work properly because of the high friction in the soil. When the drilling works are finished, the machine pulls the pipe underground. Very often the pulling of the pipes becomes very complicated because of friction, so long sections of pipe laying cannot be done; in that case additional problems arise and additional work must be done. Since long sections cannot be laid because of the high friction in the soil, contractors must make more joints and carry out more pressure tests, which is always connected with additional expenditure and lost time. The HDD machine can pull in pipes of 75 mm to 500 mm, depending on the soil condition. Lengths of up to 500 m are possible, depending on how much friction acts on the puller: the less the friction, the more it can pull. The other machine, the Grundoburst, does not work in this soil condition at all. It works with an air compressor and is used for smaller-diameter pipes, 20 mm to 63 mm; in most cases these machines are used for installing house connection pipes and making service connections. To reduce friction, contractors use a pulling head bigger than the pipe, which brings the friction down, but the problem remains that the machine cannot work in sludge. For the reasons mentioned, friction has a major significance in this kind of work. There are many ways to reduce the friction, and in this paper we introduce the ways that we have researched during our practice in Bangladesh.

Keywords: Bangladesh, friction and wear, HDD machines, reducing friction

Procedia PDF Downloads 299
2445 Analyzing the Effect of Multilingualism, Language 1, and Language 2 on Reading Comprehension

Authors: Judith Hanke

Abstract:

Due to the increasing number of students with reading difficulties, a digital reading support with diagnostics was developed to foster individual students' reading comprehension. The digital reading support focuses on the reading comprehension of elementary school students; the digital reading packages consist of literary texts with aligned reading exercises. The number of students with German as a second language is growing in Germany, and students with multilingualism, language 1, and language 2 learn German together in school. The research focuses on determining whether and to what extent multilingualism, language 1, and language 2 affect reading comprehension. For the methodology, an ABA design was selected for the intervention study examining the reading support. The study was conducted from April 2023 until July 2023 and collected quantitative data on individuals, groups, and classes. It comprised a survey group (N = 58) and a control group (N = 53); the quantitative data were collected from 3 classes taught by 3 teachers, with 47 students participating at all three test times. To show differences between the groups, a standardized reading comprehension test was administered at the three test times: pretest, posttest, and follow-up. The standardized test consists of three subtests covering word comprehension, sentence comprehension, and text comprehension. The main findings include that students who spoke German as their first language had the best test scores. Interestingly, students with a different first language had better test scores than students with German as the first language plus one or more other languages. Also, the students with another language outperformed the native speakers in one of the subtests of the post-testing. The variables of language spoken at home and German as a second language were also examined and correlated with the test results. One significant correlation was found between the language spoken at home and the text comprehension subtest of the pretest. Additionally, the variable German as a second language showed multiple significant correlations in the pretest, posttest, and follow-up. The study's significance lies in understanding the influence of multilingualism, language 1, and language 2 on reading comprehension.
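
The reported correlation analysis can be illustrated with a minimal sketch. The table layout, column names, and values below are invented for demonstration and do not reflect the study's actual data.

```python
# Illustrative correlation of language-background variables against a
# reading-test score, assuming a tidy per-student table (dummy data).
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "german_as_L2":       [0, 1, 1, 0, 1, 0, 0, 1],  # 1 = German is L2
    "other_lang_at_home": [0, 1, 0, 0, 1, 1, 0, 1],  # 1 = other language at home
    "pre_text_comp":      [14, 9, 11, 15, 8, 10, 13, 7],  # pretest text comprehension
})

for var in ("german_as_L2", "other_lang_at_home"):
    r, p = pearsonr(df[var], df["pre_text_comp"])
    print(f"{var} vs. pretest text comprehension: r={r:.2f}, p={p:.3f}")
```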

Keywords: multilingualism, language 1, language 2, reading comprehension, second language

Procedia PDF Downloads 12
2444 Testing of Complicated Bus Bar Protection Using Smart Testing Methodology

Authors: K. N. Dinesh Babu

Abstract:

In this paper, the protection of a complicated bus arrangement with a dual bus coupler and bus sectionalizer, using low-impedance differential protection applicable to very high voltages such as 220 kV and 400 kV, is discussed. In many power generation stations, several operational procedures are implemented to utilize the transfer bus as the main bus and to facilitate the maintenance of circuit breakers and current transformers (in each section) without shutting down the bay(s). Owing to this fact, the complications in operational philosophy pose challenges for the implementation of bus bar protection. Many bus topologies allow any one of the main buses available in the station to be used as an auxiliary bus. In such a system, pre-defined precautions and procedures serve as guidelines to be followed before assigning any bus as an auxiliary bus. The procedure involves shifting links, changing rotary switches, inserting test blocks, and so on, which makes operation unreliable. This kind of unreliable operation or inadvertent procedural lapse may result in the isolation of the bus bar from the grid due to unpredictable operation of the bus bar protection relay, a commonly occurring phenomenon caused by manual mistakes. With the sophisticated configuration and logic implementation available in modern intelligent electronic devices, the operator is free to select the transfer arrangement without sacrificing the protection required by a bus differential system for reliable operation, and labor-intensive processes are completely eliminated. This paper presents the procedure for testing the security logic of such special scenarios using a Megger-make SMRT relay test set on the bus bar protection relay, to assure system stability and eliminate all the specific operational precautions/procedures.
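
For orientation, the operating principle of low-impedance bus differential protection reduces to a percentage-restrained comparison of the vector sum of all bay currents (operating quantity) against their scalar sum (restraint quantity). The sketch below is a minimal illustration under assumed settings, using signed real numbers as simplified phasors; real relays add CT saturation detection, isolator-replica zone selection, and breaker-failure logic.

```python
# Minimal sketch of the percentage-restrained bus differential check.
# Settings (slope, pickup) are illustrative assumptions, not relay defaults.
def bus_diff_trip(currents_A, slope=0.5, pickup_A=500.0):
    """currents_A: currents of every bay in the protected zone,
    signed with a common polarity convention (toward the bus)."""
    i_diff = abs(sum(currents_A))              # operating quantity |sum I|
    i_bias = sum(abs(i) for i in currents_A)   # restraint quantity sum |I|
    return i_diff > pickup_A and i_diff > slope * i_bias

# Healthy through-load: bay currents sum to ~zero, relay stays secure
print(bus_diff_trip([1200.0, -800.0, -400.0]))   # False
# Internal bus fault: all bays feed in, large differential -> trip
print(bus_diff_trip([3000.0, 2500.0, 1800.0]))   # True
```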

Keywords: bus bar protection, by-pass isolator, blind spot, breaker failure, intelligent electronic device, end fault, bus unification, directional principle, zones of protection, breaker re-trip, under voltage security, smart megger relay tester

Procedia PDF Downloads 57
2443 Proton Irradiation Testing on Commercial Enhancement Mode GaN Power Transistor

Authors: L. Boyaci

Abstract:

Two basic units of the electrical power subsystem of space satellites are the Power Conditioning Unit (PCU) and the Power Distribution Unit (PDU). Today, the main switching element used in satellite power equipment is the silicon (Si) based radiation-hardened MOSFET. GaNFETs have superior performance over MOSFETs in terms of conduction and switching characteristics, and the GaNFET has started to take the MOSFET's place in many industrial applications, especially by virtue of its switching performance. If GaNFETs can also be used in equipment for space applications, this would be a great revolution for future space power subsystem designs. In this study, the effect of proton irradiation on gallium nitride based power transistors was investigated. Four commercial enhancement-mode GaN power transistors from Efficient Power Conversion Corporation (EPC) were irradiated with 30 MeV protons while the devices were switching. A flux of 8.2x10⁹ protons/cm²/s was applied for 12.5 seconds to reach an ultimate fluence of 10¹¹ protons/cm². Vgs-Ids characteristics were measured and recorded for each device before, during, and after irradiation to detect any destructive events. No proton-induced permanent damage was observed; all the devices remained healthy and continued to operate. For two of these devices, further irradiation was applied at the same flux for 30 minutes, up to a total fluence level of 1.476x10¹³ protons/cm². We observed that the GaNFETs remained fully functional under this high level of radiation, and no destructive events or irreversible failures took place. The results reveal that the GaNFETs irradiated in this experiment are tolerant to proton irradiation and are very important candidates as future power switching elements in space.
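
The quoted fluence figures follow directly from fluence = flux × time; a two-line check reproduces them (the 1.476x10¹³ figure corresponds to the 30-minute extended run at the stated flux).

```python
# Check of the reported fluence arithmetic: fluence = flux * time.
flux = 8.2e9                 # protons/cm^2/s
print(flux * 12.5)           # ~1.025e11 -> the quoted 1e11 protons/cm^2
print(flux * 30 * 60)        # 1.476e13 protons/cm^2, the extended 30-min run
```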

Keywords: enhancement mode GaN power transistors, proton irradiation effects, radiation tolerance

Procedia PDF Downloads 140
2442 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics

Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo

Abstract:

The Smoothed Particle Hydrodynamics (SPH) method is a meshless, Lagrangian numerical technique that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, so that it can complement physical model testing capabilities and support the research needed for the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved modeling a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study provides a thorough comparison between the model-scale measurements and the predictions of the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were mostly within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions among the wave, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input files, particularly in defining the mass properties of complex geometries, along with the computational requirements and the post-processing of the outcomes.
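
The core SPH idea, approximating field quantities as kernel-weighted sums over neighbouring particles, can be shown in a few lines. The 1-D density summation below is a minimal illustration with an assumed cubic-spline kernel, not a reproduction of the solver used in this work.

```python
# Minimal 1-D SPH density summation: rho(x_i) = sum_j m_j * W(x_i - x_j, h).
# Production solvers use 3-D kernels, momentum equations, and boundary
# handling; this sketch only demonstrates the interpolation principle.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic spline kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)   # 1-D normalization constant
    return sigma * np.where(q < 1, 1 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2, 0.25 * (2 - q)**3, 0.0))

x = np.linspace(0.0, 1.0, 51)      # uniformly spaced particle positions
m = 1.0 / len(x)                   # particle mass (unit total mass)
h = 0.04                           # smoothing length (~2x particle spacing)
rho = np.array([np.sum(m * cubic_spline_kernel(xi - x, h)) for xi in x])
print(rho[25])                     # ~1.0 in the interior (unit density)
```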

Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing

Procedia PDF Downloads 123