Search results for: two-dimensional conduction heat transfer analysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30637

24157 Unraveling Language Contact through Syntactic Dynamics of ‘Also’ in Hong Kong and Britain English

Authors: Xu Zhang

Abstract:

This article unveils an indicator of language contact between English and Cantonese in one of the Outer Circle Englishes, Hong Kong (HK) English, through an empirical investigation of 1000 tokens from the Global Web-based English (GloWbE) corpus, employing frequency analysis and logistic regression analysis. Cantonese, and Chinese more generally, is perceived to be contextually marked by an integral underlying thinking pattern: Chinese speakers rely on semantic context over syntactic rules and lexical forms. This linguistic trait carries over to their use of English, affording greater flexibility to formal elements in constructing English sentences. The study focuses on the syntactic positioning of the focusing subjunct 'also', a linguistic element used to add new or contrasting prominence to specific sentence constituents. English generally allows flexibility in the relative position of 'also', although there is a preference for close marking relationships. This article shifts attention to Hong Kong, where Cantonese and English converge, and 'also' finds counterparts in Cantonese 'jaa' and Mandarin 'ye'. Employing a corpus-based, data-driven method, we investigate the syntactic position of 'also' in both HK and GB English. The study aims to ascertain whether HK English exhibits greater 'syntactic freedom', allowing a more distant marking relationship with 'also' than GB English. The analysis involves a random extraction of 500 samples each from HK and GB English in the GloWbE corpus, forming a dataset of N = 1000. Exclusions are made for cases where 'also' functions as an additive conjunct or serves as a copulative adverb, as well as for sentences lacking sufficient indication that 'also' functions as a focusing particle. The final dataset comprises 820 tokens, 416 for GB and 404 for HK, annotated according to the focused constituent and the relative position of 'also'. Frequency analysis reveals significant differences in the relative position of 'also' and in marking relationships between HK and GB English. Regression analysis indicates a preference in HK English for a distant marking relationship between 'also' and its focused constituent. Notably, the subject and other constituents emerge as significant predictors of a distant position for 'also'. Together, these findings underscore the nuanced linguistic dynamics in HK English and contribute to our understanding of language contact. They suggest that future pedagogical practice should incorporate syntactic variation within English varieties, facilitating learners' effective communication in diverse English-speaking environments and enhancing their intercultural communication competence.
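
A minimal sketch (not the authors' code) of the kind of logistic regression described above: the binary outcome is whether 'also' takes a distant (1) or close (0) position relative to its focused constituent, with the English variety and the focused constituent type as categorical predictors. The column names and synthetic data below are illustrative assumptions only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 820  # size of the final annotated dataset reported in the abstract
df = pd.DataFrame({
    "variety": rng.choice(["GB", "HK"], size=n),
    "constituent": rng.choice(["subject", "object", "adverbial", "other"], size=n),
})
# Synthetic outcome: HK tokens and subject foci get a higher chance of a distant position.
p = 0.25 + 0.2 * (df["variety"] == "HK") + 0.15 * (df["constituent"] == "subject")
df["distant"] = rng.binomial(1, p)

model = smf.logit("distant ~ C(variety) + C(constituent)", data=df).fit(disp=False)
print(model.summary())        # coefficients and p-values
print(np.exp(model.params))   # odds ratios for easier interpretation
```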

Keywords: also, Cantonese, English, focus marker, frequency analysis, language contact, logistic regression analysis

Procedia PDF Downloads 40
24156 Positive Energy Districts in the Swedish Energy System

Authors: Vartan Ahrens Kayayan, Mattias Gustafsson, Erik Dotzauer

Abstract:

The European Union is introducing the positive energy district concept, which has the goal of reducing overall carbon dioxide emissions. Other studies have already mapped the make-up of such districts and reviewed their definitions and where they are positioned. The Swedish energy system is unique compared to others in Europe due to the implementation of low-carbon electricity and heat energy sources and a high uptake of district heating. The goal of this paper is to start the discussion about how the concept of positive energy districts can best be applied to the Swedish context and meet its mitigation goals. To explore how these differences impact the formation of positive energy districts, two cases were analyzed for their methods and how they integrate into the Swedish energy system: a district in Uppsala with a focus on energy and another in Helsingborg with a focus on climate. The case in Uppsala uses primary energy calculations, which can be criticised but which take a virtual border that allows the surrounding system to be considered. The district in Helsingborg has a complex methodology for considering the life cycle emissions of the neighborhood. It is successful in considering the energy balance on a monthly basis, but it can be problematized in terms of creating sub-optimized systems due to setting tight geographical constraints. The discussion of shaping the definitions and methodologies for positive energy districts is taking place in Europe and Sweden. We identify three pitfalls that must be avoided so that positive energy districts meet their mitigation goals in the Swedish context: the goal of pushing out fossil fuels is not relevant in the current energy system, the mismatch between summer electricity production and winter energy demands should be addressed, and further implementations should consider collaboration with the established district heating grid.

Keywords: positive energy districts, energy system, renewable energy, European Union

Procedia PDF Downloads 67
24155 Customer Satisfaction on Reliability Dimension of Service Quality in Indian Higher Education

Authors: Rajasekhar Mamilla, G. Janardhana, G. Anjan Babu

Abstract:

The present research analyses students' satisfaction with university performance regarding the reliability dimension, that is, the ability of professors and staff to perform the promised services with quality, for students in the post-graduate courses offered by Sri Venkateswara University in India. The research is done with the notion that the student compares the perceived performance with prior expectations, and customer satisfaction is seen as the outcome of this comparison. The schedule was administered to sample respondents selected using the stratified random sampling technique. Statistical techniques such as factor analysis, t-test and correlation analysis were used to accomplish the respective objectives of the study.

Keywords: satisfaction, reliability, service quality, customer

Procedia PDF Downloads 536
24154 National Image in the Age of Mass Self-Communication: An Analysis of Internet Users' Perception of Portugal

Authors: L. Godinho, N. Teixeira

Abstract:

Nowadays, the massification of Internet access represents one of the major challenges to the traditional powers of the State, among which is the power to control its external image. The virtual world has also sparked the interest of the social sciences, which consider it a new field of study, an immense open text where sense is expressed. In this paper, that immense text has been accessed so as to understand the perception Internet users from all over the world have of Portugal. Ours is a quantitative and qualitative approach, as we have resorted to buzz, thematic and category analysis. The results confirm the predominance of the sea stereotype in others' vision of the Portuguese people, and show that the national image has adapted to network communication through processes of individuation and paganization.

Keywords: national image, internet, self-communication, perception

Procedia PDF Downloads 245
24153 Exploratory Factor Analysis of Natural Disaster Preparedness Awareness of Thai Citizens

Authors: Chaiyaset Promsri

Abstract:

Based on a synthesis of the related literature, this research identified thirteen dimensions involved in the development of natural disaster preparedness awareness, including hazard knowledge, hazard attitude, training for disaster preparedness, rehearsal and practice for disaster preparedness, cultural development for preparedness, public relations and communication, storytelling, disaster awareness games, simulation, past experience of natural disasters, information sharing with family members, and commitment to the community (time of living). A 40-item natural disaster preparedness awareness questionnaire was developed based on these dimensions. Data were collected from 595 participants in the Bangkok metropolitan area and its vicinity. Cronbach's alpha was used to examine the internal consistency of this instrument; the reliability coefficient was .97, which is highly acceptable. Exploratory factor analysis (EFA) with principal axis factoring was employed. The Kaiser-Meyer-Olkin index of sampling adequacy was .973, indicating that the data represented a homogeneous collection of variables suitable for factor analysis. Bartlett's test of sphericity was significant for the sample (Chi-Square = 23168.657, df = 780, p < .0001), which indicated that the set of correlations in the correlation matrix was significantly different from an identity matrix and thus acceptable for EFA. Factor extraction was done to determine the number of factors by using principal component analysis and varimax rotation. The result revealed that four factors had eigenvalues greater than 1 with more than 60% cumulative variance. Factor 1 had an eigenvalue of 22.270 and factor loadings ranging from 0.626 to 0.760; this factor was named "Knowledge and Attitude of Natural Disaster Preparedness". Factor 2 had an eigenvalue of 2.491 and factor loadings ranging from 0.596 to 0.696; this factor was named "Training and Development". Factor 3 had an eigenvalue of 1.821 and factor loadings ranging from 0.643 to 0.777; this factor was named "Building Experiences about Disaster Preparedness". Factor 4 had an eigenvalue of 1.365 and factor loadings ranging from 0.657 to 0.760; this factor was named "Family and Community". The results of this study provide support for the reliability and construct validity of the natural disaster preparedness awareness instrument for use with populations similar to the sample employed.
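
An illustrative sketch (not the study's code) of the factor-retention and rotation steps described above: eigenvalues of the item correlation matrix are screened with the Kaiser criterion (eigenvalue > 1) and a varimax-rotated factor solution is then extracted. The 40 synthetic items stand in for the real questionnaire data, and the varimax rotation assumes a recent scikit-learn release.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items, n_latent = 595, 40, 4
latent = rng.normal(size=(n_respondents, n_latent))      # four underlying factors
loadings = np.zeros((n_latent, n_items))
for f in range(n_latent):                                 # each factor loads on a block of 10 items
    loadings[f, f * 10:(f + 1) * 10] = rng.uniform(0.5, 0.9, size=10)
X = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

corr = np.corrcoef(X, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigenvalues > 1).sum())                  # Kaiser criterion
print("Eigenvalues > 1:", np.round(eigenvalues[:n_factors], 3))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(X)
print("Rotated loadings shape:", fa.components_.shape)    # (n_factors, n_items)
```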

Keywords: natural disaster, disaster preparedness, disaster awareness, Thai citizens

Procedia PDF Downloads 362
24152 Improvement of Microscopic Detection of Acid-Fast Bacilli for Tuberculosis by Artificial Intelligence-Assisted Microscopic Platform and Medical Image Recognition System

Authors: Hsiao-Chuan Huang, King-Lung Kuo, Mei-Hsin Lo, Hsiao-Yun Chou, Yusen Lin

Abstract:

The most robust and economical method for laboratory diagnosis of TB is to identify acid-fast bacilli (AFB) under acid-fast staining, despite its disadvantages of low sensitivity and labor intensiveness. Though digital pathology has become popular in medicine, an automated microscopic system for microbiology is still not available. A new AI-assisted automated microscopic system, consisting of a microscopic scanner and a recognition program powered by big data and deep learning, may significantly increase the sensitivity of TB smear microscopy. Thus, the objective is to evaluate such an automatic system for the identification of AFB. A total of 5,930 smears were enrolled in this study. An intelligent microscope system (TB-Scan, Wellgen Medical, Taiwan) was used for microscopic image scanning and AFB detection. 272 AFB smears were used for transfer learning to increase the accuracy. Referee medical technicians served as the gold standard in cases of result discrepancy. Results showed that, for a total of 1,726 AFB smears, the automated system's accuracy, sensitivity and specificity were 95.6% (1,650/1,726), 87.7% (57/65), and 95.9% (1,593/1,661), respectively. Compared to culture, the sensitivity for human technicians was only 33.8% (38/142); however, the automated system achieved 74.6% (106/142), which is significantly higher, and this is the first controlled trial of such an automated microscope system for TB smear testing. This automated system could achieve higher TB smear sensitivity and laboratory efficiency and may complement molecular methods (e.g., GeneXpert) to reduce the total cost of TB control. Furthermore, such an automated system is capable of remote access via the internet and can be deployed in areas with limited medical resources.
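
The reported figures can be recomputed from the counts quoted in the abstract; the 2x2 confusion-matrix layout below is inferred from those counts and is an assumption, not taken from the paper itself.

```python
# Reconstructing the reported accuracy, sensitivity and specificity.
tp, fn = 57, 65 - 57          # positive smears detected / missed by the system
tn, fp = 1593, 1661 - 1593    # negative smears correctly / incorrectly flagged

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"accuracy={accuracy:.3f}, sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
# Expected: roughly 0.956, 0.877, 0.959, matching the quoted 95.6%, 87.7%, 95.9%.
```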

Keywords: TB smears, automated microscope, artificial intelligence, medical imaging

Procedia PDF Downloads 205
24151 Determination of Optimum Torque of an Internal Combustion Engine by Exergy Analysis

Authors: Veena Chaudhary, Rakesh P. Gakkhar

Abstract:

In this study, energy and exergy analyses are applied to experimental data from an internal combustion engine operating on the conventional diesel cycle. The experimental data are collected using an engine unit which enables accurate measurements of fuel flow rate, combustion air flow rate, engine load, engine speed and all relevant temperatures. First and second law efficiencies are calculated for different engine speeds and compared. Results indicate that the first law (energy) efficiency is maximum at 1700 rpm, whereas the exergy efficiency is maximum and the exergy destruction is minimum at 1900 rpm.
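
For reference, commonly used first-law and second-law (exergy) efficiency definitions consistent with the analysis described above are sketched below; the symbols are assumptions and are not taken from the paper.

```latex
% \dot{W}_b = brake power, \dot{m}_f = fuel mass flow rate,
% LHV = lower heating value, e_f^{ch} = specific chemical exergy of the fuel.
\eta_I = \frac{\dot{W}_b}{\dot{m}_f \, \mathrm{LHV}}
\qquad
\eta_{II} = \frac{\dot{W}_b}{\dot{m}_f \, e_f^{ch}}
\qquad
\dot{E}x_{dest} = \dot{E}x_{fuel} - \dot{E}x_{work} - \dot{E}x_{exhaust} - \dot{E}x_{heat}
```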

Keywords: diesel engine, exergy destruction, exergy efficiency, second law of thermodynamics

Procedia PDF Downloads 313
24150 The Possibility of Solving a 3x3 Rubik’s Cube under 3 Seconds

Authors: Chung To Kong, Siu Ming Yiu

Abstract:

The Rubik's cube was invented in 1974. Since then, speedcubers all over the world have tried their best to break the world record again and again; the newest record is 3.47 seconds. Many factors affect the timing, including turns per second (TPS), the algorithm, finger tricks, and the hardware of the cube. In this paper, the lower bound of the cube-solving time is discussed using convex optimization. Extended analysis of the world records is used to understand how to improve the timing. With an understanding of each part of the solving step, the paper suggests a list of speed improvement techniques. Based on the analysis of the world record, there is a high possibility that the 3-second mark will be broken soon.

Keywords: Rubik's Cube, speed, finger trick, optimization

Procedia PDF Downloads 191
24149 Fabrication of Nanoengineered Radiation Shielding Multifunctional Polymeric Sandwich Composites

Authors: Nasim Abuali Galehdari, Venkat Mani, Ajit D. Kelkar

Abstract:

Space radiation has become one of the major factors in successful long-duration space exploration. Exposure to space radiation not only can affect the health of astronauts but also can disrupt or damage materials and electronics. Hazards to materials include degradation of properties such as modulus, strength, or glass transition temperature. Electronics may experience single event effects, gate rupture, burnout of field effect transistors, and noise. Presently, aluminum is the major component in most space structures due to its light weight and good structural properties. However, aluminum is ineffective at blocking space radiation. Therefore, most of the past research involved studying polymers which contain large amounts of hydrogen. Again, these are not structural materials and would be required in large amounts to achieve the structural properties needed. One class of materials that can alleviate this problem is polymeric composites, which have good structural properties and use polymers that contain large amounts of hydrogen. This paper presents the steps involved in the fabrication of multi-functional hybrid sandwich panels that can provide beneficial radiation shielding as well as structural strength. Multifunctional hybrid sandwich panels were manufactured using the vacuum assisted resin transfer molding process and were subjected to radiation treatment. The study indicates that various nanoparticles, including boron nanopowder, boron carbide and gadolinium nanoparticles, can be successfully used to block space radiation without sacrificing structural integrity.

Keywords: multi-functional, polymer composites, radiation shielding, sandwich composites

Procedia PDF Downloads 266
24148 Understanding the Complex Relationship Between Economic Independency and Intimate Partner Violence by Applying a Socio-Ecological Analysis Framework

Authors: Suzanne Bouma

Abstract:

In the Netherlands, the assumed causal relationship between employment, economic independence and individual freedom of choice has been extended to the approach to intimate partner violence (IPV). In the interest of combating IPV, it is crucial to investigate this relationship further. Based on a literature review, this article shows that the relationship between economic independence and IPV is highly complex. To unravel this complex relationship, a socio-ecological analysis framework has been applied. First, it is a layered relation, in which employment does not necessarily lead to economic independence, which can be explained by social inequalities. Second, the relation is bidirectional: women do not by definition have access to their own financial resources due to tactics of financial control by the intimate partner. This reveals the coexistence of IPV and economic abuse and the extent to which an intimate relationship affects the scope for individual choice. Third, there is a paradoxical relationship in which employment is both a protective and a risk factor for IPV. This, in turn, cannot be separated from traditional norms about masculinity and femininity, where men occupy a position of power and derive status from being the breadwinner. These findings imply that not only the approach to IPV but also labor market policy requires a gender-sensitive approach.

Keywords: intimate partner violence, economic independence, literature review, socio-ecological analysis framework

Procedia PDF Downloads 215
24147 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive testing of the material. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experiment's environment. In the present work, spectrum profiles of medicine samples were obtained via LIBS. The medicine datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits, 70% training – 30% test and 80% training – 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample size is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all of these supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-NN (k-Nearest Neighbor), Ensemble Learning, and Neural Network algorithms) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
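
A minimal sketch (not the authors' pipeline) of the preprocessing, PCA, classification and cross-validation workflow described above, applied to synthetic "spectra"; the SVM stands in for any of the classifiers listed, and the data shapes are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_spectra, n_wavelengths = 120, 2048
y = rng.integers(0, 4, size=n_spectra)                   # four sample classes (two medicines x two concentrations)
X = rng.normal(size=(n_spectra, n_wavelengths)) + y[:, None] * 0.05

pipe = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)  # 70/30 split
print("CV accuracy:", cross_val_score(pipe, X_tr, y_tr, cv=5).mean())
print("Hold-out accuracy:", pipe.fit(X_tr, y_tr).score(X_te, y_te))
```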

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 73
24146 Bio-Hub Ecosystems: Profitability through Circularity for Sustainable Forestry, Energy, Agriculture and Aquaculture

Authors: Kimberly Samaha

Abstract:

The Bio-Hub Ecosystem model was developed to address a critical area of concern within the global energy market regarding biomass as a feedstock for power plants: the lack of an economically viable business model for bioenergy facilities has resulted in the continuation of idled and decommissioned plants. This study analyzed data and submittals to the Born Global Maine Innovation Challenge, a global innovation challenge to identify process innovations that could address a 'whole-tree' approach of maximizing the products, byproducts, energy value and process slip-streams into a circular zero-waste design. Participating companies were at various stages of developing bioproducts, including biofuels, lignin-based products, carbon capture platforms and biochar used both as a filtration medium and as a soil amendment product. This case study shows the QCA (Qualitative Comparative Analysis) methodology of the prequalification process and the resulting techno-economic model that was developed for maximizing the profitability of the Bio-Hub Ecosystem through continuous expansion of system waste streams into valuable process inputs for co-hosts. A full site plan was developed for the integration of co-hosts (a biorefinery, land-based shrimp and salmon aquaculture farms, a tomato greenhouse and a hops farm) at an operating forestry-based biomass-to-energy plant in West Enfield, Maine, USA. This model and process for evaluating profitability not only propose models for the integration of forestry, aquaculture and agriculture in cradle-to-cradle linkages of what have typically been linear systems, but also allow for the early measurement of circularity, the impact of resource use, and investment risk mitigation for these systems. In this particular study, profitability is assessed at two levels: CAPEX (capital expenditures) and OPEX (operating expenditures). Given that these projects start with repurposing facilities where the industrial-level infrastructure is already built, permitted and interconnected to the grid, the addition of co-hosts first realizes a dramatic reduction in permitting, development times and costs. In addition, using the biomass energy plant's waste streams, such as heat, hot water, CO₂ and fly ash, as valuable inputs to co-host operations brings a significant decrease in OPEX costs, increasing the overall profitability of each co-host's bottom line. This case study utilizes a proprietary techno-economic model to demonstrate how utilizing the waste streams of a biomass energy plant and/or biorefinery results in a significant reduction in OPEX for both the biomass plant and the agriculture and aquaculture co-hosts. Economically viable Bio-Hubs with favorable environmental and community impacts may prove critical in garnering local and federal government support for pilot programs and more wide-scale adoption, especially for those living in severely economically depressed rural areas where aging industrial sites have been shuttered and local economies devastated.

Keywords: bio-economy, biomass energy, financing, zero-waste

Procedia PDF Downloads 118
24145 Analytical Study of Some Physical and Mechanical Variables for Wrist Joint Injury

Authors: Nabeel Abdulkadhim Athab

Abstract:

The purpose of this research is to conduct a comparative, programmed analysis of some physical and mechanical variables related to wrist joint injury. Through this research it is possible to distinguish the amount of variation in the work of the joint after the sample underwent a rehabilitation program intended to improve the effectiveness of the joint and naturally restore its function. The researcher hypothesized that there are statistically significant differences between the pre- and post-test results of the research sample members as a result of subjecting the sample to the rehabilitation program, which developed the activity of the muscles acting on the wrist joint and thus produced the observed differences between the pre- and post-test results. The researcher used the descriptive method. The research sample included 6 players with wrist joint injuries, with an average age of 21.68 years (standard deviation 1.13) and an average height of 178 cm (standard deviation 2.08), and the sample was shown to be homogeneous. The collected data were entered into a program for statistical processing to reach the most important conclusions and recommendations, namely: 1. The sample's commitment to the rehabilitation program changed the studied variables, reflecting the development of the activity and effectiveness of the wrist joint for the injured players. 2. The programmed analysis measured the research variables with high accuracy, which made it possible to discriminate differences in motor ability between the intact and injured wrist joints. The recommendations include: 1. The use of computer systems in scientific research to obtain accurate research results. 2. Programming rehabilitation exercises according to an expert system so that they may be used by patients without reference to a personal therapist.

Keywords: analysis of joint wrist injury, physical and mechanical variables, wrist joint, wrist injury

Procedia PDF Downloads 420
24144 Qualitative and Quantitative Analysis of Motivation Letters to Model Turnover in Non-Governmental Organization

Authors: A. Porshnev, A. Zaporozhtchuk

Abstract:

Motivation, regarded as a key factor in labor turnover, is especially important for volunteers working on an altruistic basis in NGOs. Despite the motivation letter, candidate selection depends on the impression of the selection committee, which can be subject to human bias. We expect that the structured and unstructured information provided in motivation letters could be used to improve candidate selection procedures. In our paper, we perform qualitative and quantitative analysis of 2280 motivation letters, create a logistic regression, and build a decision tree to improve selection procedures. Our analysis showed that motivation factors are significant and enable the human resources department to forecast labor turnover, providing extra information beyond demographic, professional and timing questions. In spite of the model's average level of accuracy, it demonstrates that the selection procedures of the company under consideration can be improved. We also discuss the interrelation between answers to open and closed motivation questions, recommend changes in motivation letter templates to ensure more relevant information about applicants, and outline further steps to create a more accurate model.
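
An illustrative sketch (not the authors' model) of fitting a logistic regression and a decision tree to predict volunteer turnover from features extracted from motivation letters; the feature names and synthetic data are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 2280  # number of motivation letters analysed in the study
X = pd.DataFrame({
    "letter_length":      rng.integers(50, 800, size=n),
    "altruism_mentions":  rng.poisson(2, size=n),
    "career_mentions":    rng.poisson(1, size=n),
    "prior_volunteering": rng.integers(0, 2, size=n),
})
y = rng.binomial(1, 0.3 + 0.2 * X["prior_volunteering"])   # 1 = stayed, 0 = left (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=4)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "accuracy:", round(model.score(X_te, y_te), 3))
```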

Keywords: decision trees, logistic regression, model, motivational letter, non-governmental organization, retention, turnover

Procedia PDF Downloads 161
24143 Absorption and Carrier Transport Properties of Doped Hematite

Authors: Adebisi Moruf Ademola

Abstract:

Hematite (Fe2O3), commonly known as 'rust', usually forms on metal surfaces exposed to climatic conditions. It has emerged as a promising candidate for photoelectrochemical (PEC) water splitting due to its favorable physicochemical properties: a narrow band gap (2.1–2.2 eV), chemical stability, nontoxicity, abundance, and low cost. However, inherent limitations such as a short hole diffusion length (2–4 nm), a high charge recombination rate, and slow oxygen evolution reaction kinetics inhibit the PEC performance of a-Fe2O3 photoanodes. As such, given the narrow bandgap enabling excellent optical absorption, increased charge carrier density and accelerated surface oxidation reaction kinetics become the key points for improved photoelectrochemical performance of a-Fe2O3 photoanodes, and metal ion doping is an effective way to promote charge transfer by increasing donor density and improving the electronic conductivity of a-Fe2O3. Hematite has attracted enormous effort with a number of metal ions (Ti, Zr, Sn, Pt, etc.) as dopants. A facile deposition-annealing process showed greatly enhanced PEC performance due to the increased donor density and reduced electron-hole recombination at time scales beyond a few picoseconds. Zr doping was also found to enhance the PEC performance of a-Fe2O3 nanorod arrays by reducing the rate of electron-hole recombination. Slow water oxidation reaction kinetics, another main factor limiting the PEC water splitting efficiency of a-Fe2O3 photoanodes, was previously found to be effectively improved by surface treatment.

Keywords: deposition-annealing, hematite, metal ion doping, nanorod

Procedia PDF Downloads 209
24142 Microstructure Evolution and Pre-transformation Microstructure Reconstruction in Ti-6Al-4V Alloy

Authors: Shreyash Hadke, Manendra Singh Parihar, Rajesh Khatirkar

Abstract:

In the present investigation, the variation in the microstructure with changes in heat treatment conditions, i.e., temperature and time, was observed. Ti-6Al-4V alloy was subjected to solution annealing treatments in the β (1066 °C) and α+β phase fields (930 °C and 850 °C) followed by quenching, air cooling and furnace cooling to room temperature, respectively. The effect of solution annealing and cooling on the microstructure was studied by using optical microscopy (OM), scanning electron microscopy (SEM), electron backscattered diffraction (EBSD) and x-ray diffraction (XRD). The chemical composition of the β phase for the different conditions was determined with the help of an energy dispersive spectrometer (EDS) attached to the SEM. Furnace cooling resulted in the development of a coarser (α+β) structure, while air cooling resulted in a much finer structure with Widmanstätten morphology of α at the grain boundaries. Quenching from the solution annealing temperature formed α' martensite, its proportion being dependent on the temperature in the β phase field. It is well known that the transformation of β to α follows the Burgers orientation relationship (OR). In order to reconstruct the microstructure of the parent β phase, a MATLAB code was written using the neighbor-to-neighbor, triplet and Tari's methods. The code was tested on the annealed samples (1066 °C solution annealing temperature followed by furnace cooling to room temperature). The parent phase data thus generated were then plotted using the TSL-OIM software. The reconstruction results of the above methods were compared and analyzed. Tari's approach (the clustering approach) gave better results than the neighbor-to-neighbor and triplet methods, but the time taken by the triplet method was the least of the three.

Keywords: Ti-6Al-4V alloy, microstructure, electron backscattered diffraction, parent phase reconstruction

Procedia PDF Downloads 435
24141 Review of Sulfur Unit Capacity Expansion Options

Authors: Avinashkumar Karre

Abstract:

The sulfur recovery unit, most commonly called the Claus process, is a very significant gas desulfurization process unit in the refining and gas industries. Exploration of new natural gas fields, refining of high-sulfur crude oils, and recent crude expansion projects are driving Claus unit capacity expansions for many companies around the world. In refineries, the sulfur recovery units take acid gas from amine regeneration units and sour water strippers, converting hydrogen sulfide to elemental sulfur using the Claus process. The Claus process is hydraulically limited by mass flow rate. Reducing the pressure drop across control valves, flow meters, lines, knock-out drums, and packing improves the capacity. Oxygen enrichment helps improve the capacity by removing nitrogen; this is commonly done on capacity expansion projects. Typical upgrades required due to oxygen enrichment are new burners, new refractory in the thermal reactor, resizing of the first condenser, instrumentation changes, and steam/condensate heat integration. Some other capacity expansion options typically considered are a tail gas compressor, replacing the air blower with one of higher head, hydrocarbon minimization in the feed, water removal, and ammonia removal. Increased capacity in the sulfur recovery unit also requires changes in the tail gas treatment unit; typical changes include improvement to the quench tower duty, packing area upgrades in the quench and absorber towers, and increased amine circulation flow rates.

Keywords: Claus process, oxygen enrichment, sulfur recovery unit, tail gas treatment unit

Procedia PDF Downloads 109
24140 Design and Analysis of Active Rocket Control Systems

Authors: Piotr Jerzy Rugor, Julia Wajoras

Abstract:

The presented work concerns a single-stage, aerodynamically controlled, solid-propulsion rocket. Steering a rocket to fly along a predetermined trajectory can be beneficial for minimizing aerodynamic losses and can be achieved by implementing an active control system on board. In this particular case, a canard configuration has been chosen, although other methods of control, including non-aerodynamic ones, have been considered and preemptively analyzed. The objective of this work is to create a system capable of guiding the rocket, focusing on roll stabilization. The paper describes the initial analysis of the problem, covers the main challenges of missile guidance and presents data acquired during the experimental study.

Keywords: active canard control system, rocket design, numerical simulations, flight optimization

Procedia PDF Downloads 181
24139 Application Difference between Cox and Logistic Regression Models

Authors: Idrissa Kayijuka

Abstract:

The logistic regression and Cox regression (proportional hazards) models are at present being employed in the analysis of prospective epidemiologic research into risk factors for chronic diseases, and a theoretical relationship between the two models has been studied. By definition, the Cox regression model, also called the Cox proportional hazards model, is a procedure used to model data on the time leading up to an event where censored cases exist, whereas the logistic regression model is mostly applicable where the independent variables consist of numerical as well as nominal values and the resultant variable is binary (dichotomous). The arguments and findings of many researchers have focused on overviews of the Cox and logistic regression models and their different applications in different areas. In this work, the analysis is done on secondary data whose source is SPSS exercise data on breast cancer, with a sample size of 1121 women, where the main objective is to show the application difference between the Cox regression model and the logistic regression model based on factors that cause women to die of breast cancer. Some analysis, e.g., of lymph node status, was done manually, and SPSS software helped to analyze the remaining data. This study found that there is an application difference between the Cox and logistic regression models: the Cox regression model is used if one wishes to analyze data which also include the follow-up time, whereas the logistic regression model analyzes data without follow-up time. They also have different measures of association: the hazard ratio for the Cox model and the odds ratio for the logistic regression model. A similarity between the two models is that they are both applicable in the prediction of the outcome of a categorical variable, i.e., a variable that can accommodate only a restricted number of categories. In conclusion, the Cox regression model differs from logistic regression by assessing a rate instead of a proportion. The two models can be applied in many other research settings since they are suitable methods for analyzing data, but the more recommended is the Cox regression model.
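
A minimal contrast (not the paper's analysis) between the two models on the same synthetic breast-cancer-style data: the Cox model uses the follow-up time plus an event indicator and reports hazard ratios, while the logistic regression uses only the binary outcome and reports odds ratios. The sketch assumes the `lifelines` package is installed, and all column names and values are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1121  # sample size quoted in the abstract
df = pd.DataFrame({
    "ln_pos": rng.integers(0, 2, size=n),              # lymph-node status (0/1)
    "age":    rng.integers(30, 80, size=n),
})
df["time"] = rng.exponential(60 / (1 + df["ln_pos"]))   # months of follow-up
df["event"] = rng.binomial(1, 0.4 + 0.2 * df["ln_pos"])  # 1 = died, 0 = censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")      # uses follow-up time and censoring
print(cph.summary[["coef", "exp(coef)"]])                # exp(coef) = hazard ratios

logit = LogisticRegression(max_iter=1000).fit(df[["ln_pos", "age"]], df["event"])
print("Odds ratios:", np.exp(logit.coef_))               # no follow-up time used
```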

Keywords: logistic regression model, Cox regression model, survival analysis, hazard ratio

Procedia PDF Downloads 438
24138 Shooting Gas Cylinders to Prevent Their Explosion in Fire

Authors: Jerzy Ejsmont, Beata Świeczko-Żurek, Grzegorz Ronowski

Abstract:

Gas cylinders in general, and particularly cylinders containing acetylene, constitute a great potential danger for fire and rescue services involved in salvage operations. Experiments show that gas cylinders with acetylene, oxygen, hydrogen, CNG, LPG or CO2 may explode after short exposure to heat, with very destructive effect, as fragments of a burst cylinder may fly several hundred meters. In the case of acetylene, the explosion may occur even several hours after the cylinder has cooled down. One of the possible neutralization procedures that in many cases may be used to prevent explosions is shooting dangerous cylinders with rifle bullets. This technique is used to neutralize acetylene cylinders in a few European countries with great success. In Poland, the research project 'BLOW' was launched in 2014 with the aim of investigating phenomena related to the influence of fire on industrial and domestic cylinders and evaluating the usefulness of the shooting technique. Altogether, over 100 gas cylinders with different gases were experimentally tested at military blasting grounds and in shelters. During the experiments, cylinder temperature and pressure were recorded. In the case of acetylene, which undergoes thermal decomposition, the concentration of hydrogen was also monitored. Some of the cylinders were allowed to explode and others were shot by snipers. It was observed that shooting hot cylinders never created more dangerous situations than letting the cylinders explode spontaneously. In the great majority of cases, cylinders that were punctured by bullets released gas in a more or less violent but relatively safe way. The paper presents detailed information about the experiments and the particularities of the behavior of cylinders containing different gases. Extensive research was also done in order to select bullets that may be safely and efficiently used to puncture different cylinders. The paper also shows the results of those experiments and gives practical information related to the techniques that should be used during shooting.

Keywords: fire, gas cylinders, neutralization, shooting

Procedia PDF Downloads 249
24137 Diversity and Equality in Four Finnish and Italian Energy Companies' Open Access Material

Authors: Elisa Bertagna

Abstract:

A frame analysis of the work done by various multinational energy companies concerning diversity issues and gender equality is presented. Documents of four multinational companies, two from Finland and two from Italy, have been studied. The array of company documents includes data from their websites, policies and so on. The Finnish and Italian contexts have been chosen as a sample of Northern and Southern Europe, of the 'advanced' and the 'less advanced'. The aim of the analysis is to understand if and how human resource and diversity management in Finnish and Italian multinational energy companies communicate their activities towards employees. Attention is given to how employees react in their roles and to the consequences of their social positioning. The findings of this essay are crucially important: they show how the companies in question tend to focus their HR and DM positive actions on female employees' struggles, since the industry is characterized by multinationals with male-dominated workforces. In this way, other categories that are also depicted as sensitive, such as young and elderly people or foreigners, do not receive the same amount of attention. Consequently, power hierarchies can be found: 'women' as a social category are given more importance and space in the companies' data than others. The present analysis therefore reflects on possible struggles that such companies might be facing concerning gender biases and further diversity issues.

Keywords: energy, diversity, gender, multinationals, power hierarchies

Procedia PDF Downloads 130
24136 Discrimination of Bio-Analytes by Using Two-Dimensional Nano Sensor Array

Authors: P. Behera, K. K. Singh, D. K. Saini, M. De

Abstract:

The implementation of 2D materials in the detection of bio-analytes is highly advantageous in the field of sensing because of their high surface-to-volume ratio. We have designed our sensor array with differently functionalized cationic two-dimensional MoS₂, where surface modification was achieved by cationic thiol ligands with different functionalities. Green fluorescent protein (GFP) was chosen as the signal transducer for its biocompatibility and anionic nature, which allows it to bind easily to the cationic MoS₂ surface, followed by fluorescence quenching. The addition of a bio-analyte to the sensor can decomplex the cationic MoS₂-GFP conjugates, followed by the regeneration of GFP fluorescence. The fluorescence response patterns for the various analytes were collected and subjected to linear discriminant analysis (LDA) for classification. At first, 15 different proteins with a wide range of molecular weights and isoelectric points were successfully discriminated at 50 nM with a detection limit of 1 nM. The sensor system was also tested in biofluids such as serum, where 10 different proteins at 2.5 μM were well separated. After the successful discrimination of protein analytes, the sensor array was applied to bacteria sensing. Six different bacteria were successfully classified at OD = 0.05 with a detection limit corresponding to OD = 0.005. The optimized sensor array was able to classify uropathogens from non-uropathogens in urine medium. Further, the technique was applied to the discrimination of bacteria possessing resistance to different types and amounts of drugs. Optical and electrodynamic studies showed that the interaction between the bacteria and the sensor system was mainly due to electrostatic forces, while the separation of native bacteria from their drug-resistant variants was due to van der Waals forces. Bacteria can be detected in two ways, i.e., through bacterial cells or lysates. Bacterial lysates contain intracellular information and are also safer to analyze, as they do not contain live cells. Lysates of different drug-resistant bacteria were patterned effectively from the native strain. From analysis of unknown samples, we found that discrimination of bacterial cells is more sensitive than that of lysates, but analysts may prefer bacterial lysates over live cells for safer analysis.
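
An illustrative sketch (not the authors' pipeline) of classifying analytes from a sensor-array response matrix with linear discriminant analysis: rows are replicate measurements, columns are the fluorescence-recovery responses of the differently functionalised receptors. The response data below are synthetic placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_analytes, n_replicates, n_receptors = 15, 6, 3
centers = rng.normal(size=(n_analytes, n_receptors))      # each analyte has a characteristic fingerprint
X = np.vstack([c + 0.1 * rng.normal(size=(n_replicates, n_receptors)) for c in centers])
y = np.repeat(np.arange(n_analytes), n_replicates)

lda = LinearDiscriminantAnalysis()
print("Cross-validated classification accuracy:", cross_val_score(lda, X, y, cv=3).mean())
scores = lda.fit(X, y).transform(X)    # canonical scores used for the 2D discrimination plot
print("Canonical score matrix shape:", scores.shape)
```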

Keywords: array-based sensing, drug resistant bacteria, linear discriminant analysis, two-dimensional MoS₂

Procedia PDF Downloads 129
24135 Correlation Mapping for Measuring Platelet Adhesion

Authors: Eunseop Yeom

Abstract:

Platelets can be activated by the surrounding blood flow where a blood vessel is narrowed as a result of atherosclerosis. Numerous studies have been conducted to identify the relation between platelet activation and thrombus formation. To measure platelet adhesion, this study proposes an image analysis technique. Blood samples are delivered into the microfluidic channel, and platelets are then activated by a stenotic micro-channel with 90% severity. By applying the proposed correlation mapping, which visualizes the decorrelation of the streaming blood flow, the area of adhered platelets (APlatelet) was estimated without labeling platelets. In order to evaluate the performance of correlation mapping in the detection of platelet adhesion, the effect of tile size was investigated by calculating 2D correlation coefficients between binary images obtained by manual labeling and by the correlation mapping method with different sizes of the square tile, ranging from 3 to 50 pixels. The maximum 2D correlation coefficient is observed at the optimum tile size of 5×5 pixels. As the area of platelet adhesion increases, the platelets plug the channel and there is only a small amount of blood flow. This image analysis could provide new insights for a better understanding of the interactions between platelet aggregation and blood flow under various physiological conditions.
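
A conceptual sketch (an assumption, not the author's implementation) of correlation mapping: each 5×5-pixel tile of two consecutive frames is compared, and tiles whose temporal correlation stays high (static, adhered platelets) are counted towards the adhesion area, while flowing blood decorrelates.

```python
import numpy as np

def correlation_map(frame_a, frame_b, tile=5):
    """Tile-wise Pearson correlation between two consecutive image frames."""
    h, w = frame_a.shape
    cmap = np.zeros((h // tile, w // tile))
    for i in range(h // tile):
        for j in range(w // tile):
            a = frame_a[i*tile:(i+1)*tile, j*tile:(j+1)*tile].ravel()
            b = frame_b[i*tile:(i+1)*tile, j*tile:(j+1)*tile].ravel()
            cmap[i, j] = np.corrcoef(a, b)[0, 1]
    return cmap

rng = np.random.default_rng(6)
frame1 = rng.random((100, 100))                      # moving blood decorrelates between frames
frame2 = rng.random((100, 100))
frame2[40:60, 40:60] = frame1[40:60, 40:60]          # a static (adhered) patch stays correlated
cmap = correlation_map(frame1, frame2)
adhered_area_px = (cmap > 0.8).sum() * 5 * 5         # tiles above threshold -> area in pixels
print("Estimated adhesion area (pixels):", adhered_area_px)
```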

Keywords: platelet activation, correlation coefficient, image analysis, shear rate

Procedia PDF Downloads 323
24134 Window Analysis and Malmquist Index for Assessing Efficiency and Productivity Growth in a Pharmaceutical Industry

Authors: Abbas Al-Refaie, Ruba Najdawi, Nour Bata, Mohammad D. AL-Tahat

Abstract:

The pharmaceutical industry is an important component of health care systems throughout the world. Measurement of a production unit's performance is crucial in determining whether it has achieved its objectives or not. This paper applies data envelopment analysis (DEA) window analysis to assess the efficiencies of two packaging lines, Allfill (new) and DP6, in the penicillin plant of a Jordanian medical company in 2010. The CCR and BCC models are used to estimate the technical efficiency, pure technical efficiency, and scale efficiency. Further, the Malmquist productivity index is computed to assess productivity growth relative to a reference technology. Two primary issues are addressed in the computation of Malmquist indices of productivity growth. The first is the measurement of productivity change over the period, while the second is to decompose changes in productivity into what are generally referred to as a 'catching-up' effect (efficiency change) and a 'frontier shift' effect (technological change). Results showed that the DP6 line outperforms Allfill in technical and pure technical efficiency, whereas the Allfill line outperforms DP6 in scale efficiency. The obtained efficiency values can guide production managers in taking effective decisions related to operation, management, and plant size. Moreover, both machines exhibit clear fluctuations in technological change, which is the main reason for the positive total factor productivity change. That is, installing a new Allfill production line can be of great benefit for increasing productivity. In conclusion, the DEA window analysis combined with the Malmquist index is a supportive measure in assessing efficiency and productivity in the pharmaceutical industry.
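
For reference, the standard output-oriented Malmquist index and its decomposition into the 'catching-up' and 'frontier shift' effects mentioned above can be written as follows; the notation is the conventional one and is not taken from the paper.

```latex
% D_o^t denotes the output distance function relative to the period-t frontier.
M_o\!\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)
 = \left[\frac{D_o^{t}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t}\!\left(x^{t},y^{t}\right)}
   \cdot
   \frac{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t},y^{t}\right)}\right]^{1/2}
 = \underbrace{\frac{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t}\!\left(x^{t},y^{t}\right)}}_{\text{efficiency change}}
   \cdot
   \underbrace{\left[\frac{D_o^{t}\!\left(x^{t+1},y^{t+1}\right)}{D_o^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
   \cdot
   \frac{D_o^{t}\!\left(x^{t},y^{t}\right)}{D_o^{t+1}\!\left(x^{t},y^{t}\right)}\right]^{1/2}}_{\text{technological change}}
```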

Keywords: window analysis, malmquist index, efficiency, productivity

Procedia PDF Downloads 596
24133 Analysis of Epileptic Electroencephalogram Using Detrended Fluctuation and Recurrence Plots

Authors: Mrinalini Ranjan, Sudheesh Chethil

Abstract:

Epilepsy is a common neurological disorder characterised by the recurrence of seizures. Electroencephalogram (EEG) signals are complex biomedical signals which exhibit nonlinear and nonstationary behavior. We use two methods, 1) Detrended Fluctuation Analysis (DFA) and 2) Recurrence Plots (RP), to capture this complex behavior of EEG signals. DFA considers fluctuation from local linear trends. Scale invariance of these signals is well captured in the multifractal characterisation using DFA. Analysis of long-range correlations is vital for understanding the dynamics of EEG signals. Correlation properties in the EEG signal are quantified by the calculation of a scaling exponent. We report the existence of two scaling behaviours in the epileptic EEG signals which quantify short- and long-range correlations. To illustrate this, we perform DFA on extant ictal (seizure) and interictal (seizure-free) datasets of different patients in different channels. We compute the short-term and long-term scaling exponents and report a decrease in the short-range scaling exponent during seizure as compared to pre-seizure and a subsequent increase during the post-seizure period, while the long-term scaling exponent shows an increase during seizure activity. Our calculation of the long-term scaling exponent yields a value between 0.5 and 1, thus pointing to power-law behaviour of long-range temporal correlations (LRTC). We perform this analysis for multiple channels and report similar behaviour. We find an increase in the long-term scaling exponent during seizure in all channels, which we attribute to an increase in persistent LRTC during seizure. The magnitude of the scaling exponent and its distribution in different channels can help in better identification of the areas of the brain most affected during seizure activity. The nature of epileptic seizures varies from patient to patient. To illustrate this, we report an increase in the long-term scaling exponent for some patients, which is also complemented by the recurrence plots (RP). An RP is a graph that shows the time indices of recurrence of a dynamical state. We perform Recurrence Quantification Analysis (RQA) and calculate RQA parameters like diagonal length, entropy, recurrence, determinism, etc. for ictal and interictal datasets. We find that the RQA parameters increase during seizure activity, indicating a transition. We observe that RQA parameters are higher during the seizure period as compared to post-seizure values, whereas for some patients the post-seizure values exceeded those during seizure. We attribute this to the varying nature of seizures in different patients, indicating a different route or mechanism during the transition. Our results can help in a better understanding of the characterisation of epileptic EEG signals from a nonlinear analysis.
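
A compact, generic sketch of detrended fluctuation analysis as described above (an assumption, not the authors' code): the signal profile is divided into windows, a local linear trend is removed from each, and the scaling exponent alpha is the slope of log F(n) versus log n.

```python
import numpy as np

def dfa(signal, window_sizes):
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())              # integrated, mean-subtracted series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        segments = profile[: n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        mse = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)         # local linear trend
            mse.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(mse)))  # F(n)
    alpha = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)[0]
    return alpha

rng = np.random.default_rng(7)
white_noise = rng.normal(size=4096)
sizes = np.unique(np.logspace(2, 3, 10).astype(int))
print("alpha (white noise, expected ~0.5):", round(dfa(white_noise, sizes), 2))
print("alpha (integrated noise, expected ~1.5):", round(dfa(np.cumsum(white_noise), sizes), 2))
```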

Keywords: detrended fluctuation, epilepsy, long range correlations, recurrence plots

Procedia PDF Downloads 164
24132 Process Parameter Study on Friction Push Plug Welding of AA6061 Alloy

Authors: H. Li, W. Qin, Ben Ye

Abstract:

Friction Push Plug Welding (FPPW) is a solid-phase welding process suitable for repairing defective welds and filling the self-reacting weld keyholes in friction stir welds. In the FPPW process, a tapered plug is rotated at high speed and forced into a tapered hole in the substrate. The plug and substrate metal are softened by the increasing temperature generated by friction and material plastic deformation. This paper aims to investigate the effect of process parameters on the quality of the weld. Orthogonal design methods were employed to reduce the number of experiments. Three values were selected for each process parameter: rotation speed (1500 r/min, 2000 r/min, 2500 r/min), plunge depth (2 mm, 3 mm, 4 mm) and plunge speed (60 mm/min, 90 mm/min, 120 mm/min). An AA6061 aluminum alloy plug and substrate plate were used in the experiments. In a trial test with a plunge depth of 1 mm, a noticeable defect appeared due to the short plunge time and insufficient temperature. From the recorded temperature profiles, it was found that the peak temperature increased with increasing rotation speed, plunge speed and plunge depth. In the initial stage, the plunge speed was the main factor affecting heat generation, while in the steady-state welding stage, the rotation speed played a more important role. The FPPW weld defects include flash and incomplete penetration at the upper, middle and bottom interfaces with the substrate. To obtain a defect-free weld, a higher rotation speed and a proper plunge depth are recommended.

Keywords: friction push plug welding, process parameter, weld defect, orthogonal design

Procedia PDF Downloads 134
24131 Assessment of the Road Safety Performance in National Scale

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

The assessment of road safety performance is a challenging issue, not only because of ineffective and unreliable road and traffic crash data systems but also because of its systemic character. Recent strategic plans and interventions implemented in some developed countries, where a significant decline in the rate of traffic and road crashes has been observed, consider road safety as a system. This system consists of four main elements, road user, road infrastructure, vehicles and speed, in addition to other supporting elements such as the institutional framework and the post-crash care system. To assess the performance of a system, it is required to assess all its elements, and to present understandable results of the assessment, it is required to present a single term representing the performance of the overall system. This paper aims to develop an overall performance indicator which may be used to assess the road safety system. The variables of this indicator are the main elements of the road safety system. The data regarding these variables will be collected from the World Health Organization report. A multi-criteria analysis method is used to aggregate the four sub-indicators for the four variables. Two weighting methods are assumed, equal weights and different weights; for the different-weights method, the factor analysis method is used. The weights will then be converted to scores, and the total score will be the overall indicator of road safety performance on a national scale. This indicator will be used to compare and rank countries according to their road safety performance: the country with the higher score is the country which provides the most sustainable and effective interventions for a successful road safety system. The indicator will be tested by comparing it with the aggregate real crash rate for each country.
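
A toy sketch of the aggregation step described above, with invented scores and weights: normalised sub-indicator scores for the four road-safety elements are combined with either equal weights or data-driven weights into one overall indicator per country.

```python
import numpy as np

# Normalised sub-indicator scores (0-1) for: road user, infrastructure, vehicle, speed.
countries = {
    "Country A": np.array([0.80, 0.70, 0.90, 0.60]),
    "Country B": np.array([0.55, 0.65, 0.70, 0.75]),
}
equal_w = np.full(4, 0.25)
fa_w = np.array([0.35, 0.30, 0.20, 0.15])   # placeholder for factor-analysis-derived weights

for name, scores in countries.items():
    print(name,
          "equal-weight index:", round(scores @ equal_w, 3),
          "weighted index:", round(scores @ fa_w, 3))
```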

Keywords: factor analysis, Multi-criteria analysis, road safety assessment, safe system indicator

Procedia PDF Downloads 259
24130 Analyzing Environmental Emotive Triggers in Terrorist Propaganda

Authors: Travis Morris

Abstract:

The purpose of this study is to measure the intersection of environmental security entities in terrorist propaganda. To the best of the author's knowledge, this is the first study of its kind to examine this intersection within terrorist propaganda. Rosoka natural language processing software and frame analysis are used to advance our understanding of how environmental frames function as emotive triggers. Violent jihadi demagogues use frames to suggest violent and non-violent solutions to their grievances. Emotive triggers are framed in a way that leverages individual and collective attitudes in psychological warfare. A comparative research design is used because of the differences and similarities that exist between two variants of violent jihadi propaganda that target western audiences. The analysis is based on salience and network text analysis, which generates violent jihadi semantic networks. Findings indicate that environmental frames are used as emotive triggers across both data sets, but also as tactical and informational data points. A significant finding is that certain core environmental emotive triggers like "water," "soil," and "trees" are significantly salient at the aggregate level across both data sets. All environmental entities can be classified into two categories, symbolic and literal. Importantly, this research illustrates how demagogues use environmental emotive triggers in cyberspace, from a subcultural perspective, to mobilize target audiences to their ideology and praxis. Understanding the anatomy of propaganda construction is necessary in order to generate effective counter-narratives in information operations. This research advances an additional method to inform practitioners and policy makers of how environmental security and propaganda intersect.

Keywords: propaganda analysis, emotive triggers environmental security, frames

Procedia PDF Downloads 126
24129 Finding Optimal Operation Condition in a Biological Nutrient Removal Process with Balancing Effluent Quality, Economic Cost and GHG Emissions

Authors: Seungchul Lee, Minjeong Kim, Iman Janghorban Esfahani, Jeong Tai Kim, ChangKyoo Yoo

Abstract:

It is hard to maintain the effluent quality of wastewater treatment plants (WWTPs) under fixed types of operational control because of continuously changing influent flow rate and pollutant load. The aim of this study is the development of a multi-loop multi-objective control (ML-MOC) strategy in a plant-wide scope, targeting four objectives: 1) maximization of nutrient removal efficiency, 2) minimization of operational cost, 3) maximization of CH4 production in anaerobic digestion (AD) for CH4 reuse as a heat and energy source, and 4) minimization of N2O gas emission to cope with global warming. First, the benchmark simulation model is modified to describe N2O dynamics in the biological process, yielding the benchmark simulation model for greenhouse gases (BSM2G). Then, three types of single-loop proportional-integral (PI) controllers, for DO, NO3, and CH4, are implemented, and their optimal set-points are found by using a multi-objective genetic algorithm (MOGA). Finally, the ML-MOC is implemented and evaluated in BSM2G. Compared with the reference case, the ML-MOC with the optimal set-points showed better control performance than the reference, with improvements of 34%, 5% and 79% in effluent quality, CH4 productivity, and N2O emission, respectively, together with a 65% decrease in operational cost.
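
A generic discrete PI control loop is sketched below (an illustration, not the BSM2G implementation): the same structure applies to each of the DO, NO3 and CH4 loops described above, each with its own MOGA-tuned set-point and gains. The plant model, set-point and gain values here are invented.

```python
import numpy as np

def simulate_pi(setpoint, kp, ki, dt=0.01, steps=2000):
    """Track `setpoint` on a toy first-order process (e.g., dissolved oxygen)."""
    y, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral      # PI control action (e.g., aeration rate)
        y += dt * (-y + u)                  # first-order plant: dy/dt = -y + u
        history.append(y)
    return np.array(history)

do_trace = simulate_pi(setpoint=2.0, kp=5.0, ki=2.0)   # assumed DO set-point of 2 g/m3
print("Final DO value:", round(do_trace[-1], 3))
```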

Keywords: Benchmark simulation model for greenhouse gas, multi-loop multi-objective controller, multi-objective genetic algorithm, wastewater treatment plant

Procedia PDF Downloads 488
24128 Wrinkling Prediction of Membrane Composite of Varying Orientation under In-Plane Shear

Authors: F. Sabri, J. Jamali

Abstract:

In this article, the wrinkling failure of orthotropic composite membranes due to in-plane shear deformation is investigated using nonlinear finite element analyses. A nonlinear post-buckling analysis is performed to show the evolution of shear-induced wrinkles. The method of investigation is based on post-buckling finite element analysis using the commercial FEM code ANSYS. The resulting wrinkling patterns, their amplitudes and their wavelengths under the prescribed loads and boundary conditions were confirmed by experimental results. Our study reveals that wrinkles develop when both the magnitudes and the coverage of the minimum principal stresses in the composite laminates are sufficiently large to trigger wrinkling.

Keywords: composite, FEM, membrane, wrinkling

Procedia PDF Downloads 257