Search results for: parameter intervals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2586

336 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels

Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina

Abstract:

The effect of various levels of contamination on the interfacial air–water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, that diffuses upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection, combined with a fourth-order central discretisation for scalar diffusion. The damping effect of surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is ReMa/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (ReMa/We small) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (the ratio of the kinematic viscosity to the gas diffusivity in water) and the surface divergence β, i.e. KL ∝ √(β/Sc). Previously it has been shown that this relation works well for surfaces with low to moderate contamination; however, it breaks down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for ReMa/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger ReMa/We.
This is an important result, indicating that, for various levels of contamination, the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that, with increasing ReMa/We, the dependency of KL on β begins to break down, as the increased damping of near-surface fluctuations results in an increased damping of β. Especially at large levels of contamination, this damping is so severe that the surface divergence model significantly underestimates KL.
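The quoted scaling KL ∝ √(β/Sc) can be illustrated with a short numerical sketch. The proportionality constant c and the input values below are hypothetical placeholders, not values from the study:

```python
import math

def transfer_velocity(beta, sc, c=1.0):
    """Surface-divergence model: K_L = c * sqrt(beta / Sc).

    beta: surface divergence (1/s); sc: Schmidt number;
    c: proportionality constant (hypothetical placeholder).
    """
    return c * math.sqrt(beta / sc)

# Doubling Sc reduces K_L by a factor of sqrt(2):
print(transfer_velocity(beta=0.5, sc=2) / transfer_velocity(beta=0.5, sc=4))  # -> 1.4142...

# As contamination damps beta toward zero, the predicted K_L also
# collapses toward zero -- the regime where the model breaks down.
print(transfer_velocity(beta=1e-6, sc=2))
```

The second call mimics the heavily contaminated case: with β damped toward zero, the β-based prediction collapses, consistent with the breakdown reported in the abstract.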

Keywords: contamination, gas transfer, surfactants, turbulence

Procedia PDF Downloads 275
335 Thickness-Tunable Optical, Magnetic, and Dielectric Response of Lithium Ferrite Thin Film Synthesized by Pulsed Laser Deposition

Authors: Prajna Paramita Mohapatra, Pamu Dobbidi

Abstract:

Lithium ferrite (LiFe5O8) has potential applications as a component of microwave magnetic devices such as circulators and monolithic integrated circuits. For efficient device applications, spinel ferrites in the form of thin films are highly desirable, and it is necessary to improve their magnetic and dielectric behavior by optimizing the processing parameters during deposition. The lithium ferrite thin films were deposited on a Pt/Si substrate using the pulsed laser deposition (PLD) technique. As film thickness is the most easily controlled parameter for tailoring strain, we deposited thin films of different thicknesses (160 nm, 200 nm, 240 nm) at an oxygen partial pressure of 0.001 mbar. The formation of a single phase with spinel structure (space group P4132) is confirmed by the XRD pattern and Rietveld analysis. The optical bandgap decreases with increasing thickness. FESEM confirmed the formation of uniform grains with well-separated grain boundaries. Further, the film growth and roughness were analyzed by AFM. The root-mean-square (RMS) surface roughness decreases from 13.52 nm (160 nm) to 9.34 nm (240 nm). The room-temperature magnetization was measured with a maximum field of 10 kOe. The saturation magnetization is enhanced monotonically with increasing thickness. The magnetic resonance linewidth is in the range of 450–780 Oe. The dielectric response was measured in the frequency range of 10⁴–10⁶ Hz and the temperature range of 303–473 K. With increasing frequency, the dielectric constant and loss tangent of all samples decrease continuously, which is typical behavior for a conventional dielectric material. The real part of the dielectric constant and the dielectric loss increase with thickness. The contributions of grains and grain boundaries are also analyzed by employing an equivalent circuit model.
The highest dielectric constant is obtained for the film with a thickness of 240 nm at 10⁴ Hz. The obtained results demonstrate that the desired response can be obtained for microwave magnetic devices by tailoring the film thickness.

Keywords: PLD, optical response, thin films, magnetic response, dielectric response

Procedia PDF Downloads 76
334 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed; the major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating labelled and unlabelled data, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate 3,397 records were used as a testing set to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L, and probe classes.
The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested toward developing an applicable system in the area of study.
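The 10-fold cross-validation procedure used above can be sketched in pure Python. This is a generic illustration with a majority-class baseline on toy, non-shuffled labels, not the WEKA J48 workflow or the Lincoln Laboratory data:

```python
from collections import Counter

def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def majority_class_accuracy(labels, k=10):
    """Baseline: for each fold, predict the training fold's majority class."""
    correct = 0
    for train, test in k_fold_indices(len(labels), k):
        majority = Counter(labels[i] for i in train).most_common(1)[0][0]
        correct += sum(labels[i] == majority for i in test)
    return correct / len(labels)

# Toy stand-in class labels (normal / DOS / R2L), not the MIT records:
labels = ["normal"] * 70 + ["DOS"] * 20 + ["R2L"] * 10
print(majority_class_accuracy(labels))  # -> 0.7
```

Any real classifier (a C4.5-style decision tree in the study's case) must beat such a majority baseline for its cross-validated accuracy to be meaningful.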

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 270
333 Structural Geology along the Jhakri-Wangtu Road (Jutogh Section), Himachal Pradesh, NW Higher Himalaya, India

Authors: Rajkumar Ghosh

Abstract:

The paper presents a comprehensive study of the structural analysis of the Chaura Thrust in Himachal Pradesh, India. The research focuses on several key aspects, including the activation timing of the Main Central Thrust (MCT) and the South Tibetan Detachment System (STDS), the identification and characterization of mylonitised zones through microscopic examination, and the understanding of box fold characteristics and their implications in the regional geology of the Himachal Himalaya. The primary objective of the study is to provide field documentation of the Chaura Thrust, which was previously considered a blind thrust with limited field evidence. Additionally, the research aims to characterize box folds and their signatures within the broader geological context of the Himachal Himalaya, document the temperature range associated with grain boundary migration (GBM), and explore the overprinting structures related to multiple sets of Higher Himalayan Out-of-Sequence Thrusts (OOSTs). The research methodology employed geological field observations and microscopic studies. Samples were collected along the Jhakri-Chaura transect at regular intervals of approximately 1 km to conduct strain analysis. Microstructural studies at the grain scale along the Jhakri-Wangtu transect were used to document the GBM-associated temperature range. The study reveals that the MCT activated in two parts, as did the STDS, and provides insights into the activation ages of the Main Boundary Thrust (MBT) and the Main Frontal Thrust (MFT). Under microscopic examination, the study identifies two mylonitised zones characterized by S-C fabric, and it documents dynamic and bulging recrystallization, as well as sub-grain formation. Various types of crenulated schistosity are observed in photomicrographs, including a rare occurrence where crenulation cleavage and sigmoid Muscovite are found juxtaposed. 
The study also notes the presence of S/SE-verging meso- and micro-scale box folds around Chaura, which may indicate structural uplift. Kink folds near Chaura are visible, while asymmetric shear-sense indicators in augen mylonite are predominantly observed under the microscope. Moreover, the research documents the Higher Himalayan Out-of-Sequence Thrust (OOST) in Himachal Pradesh, which reactivated the MCT and occurred within a zone south of the Main Central Thrust Upper (MCTU). The presence of multiple sets of OOSTs suggests a zigzag pattern of strain accumulation in the area and underscores the significance of understanding their overprinting structures. Overall, this study contributes to the structural analysis of the Chaura Thrust and its implications for the regional geology of the Himachal Himalaya, highlighting the importance of microscopic studies in identifying mylonitised zones and the various types of crenulated schistosity, and documenting the GBM-associated temperature range. Obtained through geological field observations, microscopic studies, and strain analysis, the findings offer valuable insights into the activation timing, mylonitization characteristics, and overprinting structures related to the Chaura Thrust and the broader tectonic framework of the region.

Keywords: Main Central Thrust, Jhakri Thrust, Chaura Thrust, Higher Himalaya, Out-of-Sequence Thrust, Sarahan Thrust

Procedia PDF Downloads 70
332 The Healthcare Costs of BMI-Defined Obesity among Adults Who Have Undergone a Medical Procedure in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity is associated with significant personal impacts on health and imposes a substantial economic burden on payers due to increased healthcare use, yet a contemporary population-level estimate of the healthcare costs associated with obesity is lacking. Such evidence may provide further rationale for weight management strategies. Methods: Adults who underwent a medical procedure between 2012 and 2019 in Alberta, Canada were categorized into the investigational cohort (had body mass index [BMI]-defined class 2 or 3 obesity based on a procedure-associated code) and the control cohort (did not have the BMI procedure-associated code); those who had bariatric surgery were excluded. Characteristics were presented and healthcare costs ($CDN) determined over a 1-year observation period (2019/2020). Logistic regression and a generalized linear model with log link and gamma distribution were used to assess total healthcare costs (comprising hospitalizations, emergency department visits, ambulatory care visits, physician visits, and outpatient prescription drugs). Potential confounders included age, sex, region of residence, and whether the medical procedure was performed within 6 months before the observation period in the partial adjustment, plus the type of procedure performed, socioeconomic status, Charlson Comorbidity Index (CCI), and seven obesity-related health conditions in the full adjustment. Cost ratios and estimated cost differences with 95% confidence intervals (CI) were reported; incremental cost differences within the adjusted models represent referent cases.
Results: The investigational cohort (n=220,190) was older (mean age: 53 standard deviation [SD]±17 vs 50 SD±17 years), included more females (71% vs 57%), lived in rural areas to a greater extent (20% vs 14%), experienced a higher overall burden of disease (CCI: 0.6 SD±1.3 vs 0.3 SD±0.9), and was less socioeconomically well-off (14%/14% fell in the most well-off material/social deprivation quintile vs 20%/19% of controls) compared with controls (n=1,955,548). Unadjusted total healthcare costs were estimated to be 1.77-times (95% CI: 1.76, 1.78) higher in the investigational versus control cohort; each healthcare resource contributed to the higher cost ratio. After adjusting for potential confounders, the total healthcare cost ratio decreased but remained higher in the investigational versus control cohort (partial adjustment: 1.57 [95% CI: 1.57, 1.58]; full adjustment: 1.21 [95% CI: 1.20, 1.21]); each healthcare resource again contributed to the higher cost ratio. Among urban-dwelling 50-year-old females who previously had non-operative procedures, no procedures performed within 6 months before the observation period, a social deprivation index score of 3, a CCI score of 0.32, and no history of select obesity-related health conditions, the predicted cost difference between those living with and without obesity was $386 (95% CI: $376, $397). Conclusions: If these findings hold for the Canadian population, one would expect an estimated additional $3.0 billion per year in healthcare costs nationally related to BMI-defined obesity (based on an adult obesity rate of 26% and an estimated annual incremental cost of $386 [21%]); incremental costs are higher when obesity-related health conditions are not adjusted for. The results of this study provide additional rationale for investment in interventions that are effective in preventing and treating obesity and its complications.
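The national extrapolation in the conclusion can be reproduced arithmetically. The Canadian adult population figure of roughly 30 million used below is an assumption inferred from the reported totals, not a number stated in the abstract:

```python
adult_population = 30_000_000   # assumed Canadian adult population (~2019)
obesity_rate = 0.26             # adult obesity rate, from the abstract
incremental_cost = 386          # $/person/year, referent-case estimate

adults_with_obesity = adult_population * obesity_rate
national_cost = adults_with_obesity * incremental_cost
print(f"${national_cost / 1e9:.1f} billion per year")  # -> $3.0 billion per year
```

This recovers the abstract's $3.0 billion/year figure, confirming that the extrapolation is a simple product of prevalence and the per-person referent-case cost difference.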

Keywords: administrative data, body mass index-defined obesity, healthcare cost, real world evidence

Procedia PDF Downloads 80
331 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings

Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller

Abstract:

Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it is shown that, using a machine learning approach, the derived measures are suitable for automatically distinguishing between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to normalize the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.
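The PCA dimensionality-reduction step can be sketched in miniature: extracting a first principal component from toy feature vectors by power iteration on the covariance matrix. The data below are stand-in 2-D points, not actual phonovibrogram features:

```python
import random

def first_principal_component(data, iters=200):
    """Leading eigenvector of the covariance matrix via power iteration."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix (d x d)
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy 2-D features that vary mostly along the first axis:
random.seed(0)
pts = [[x, 0.1 * x + random.gauss(0, 0.01)] for x in range(20)]
pc1 = first_principal_component(pts)
print(pc1)  # close to the direction (1, 0.1), normalized
```

Projecting each feature vector onto the leading components yields the low-dimensional representation; the abstract's normalization step then fixes the orientation of that space against synthetic reference patterns so that components are comparable across datasets.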

Keywords: wavelet-based analysis, multiscale product, normalization, computer-assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram

Procedia PDF Downloads 238
330 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation

Authors: Kausar Harun, Ahmad Azmin Mohamad

Abstract:

Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved more accurate and time-effective. Thus, integration of these two methods is essential for closely reproducing the properties of synthesized ZnO. In this study, experimentally grown ZnO nanoparticles were prepared by the sol-gel storage method with zinc acetate dihydrate as precursor and methanol as solvent. A 1 M sodium hydroxide (NaOH) solution was used as a stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO was produced with a wurtzite hexagonal structure. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a = b = 3.2498 Å and c = 5.2068 Å, which were later used as the main input in first-principles calculations. By applying density functional theory (DFT) as embedded in the CASTEP computer code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) produced the structure with the lowest energy and lattice deviations. In this study, emphasis was also given to the modification of valence electron energy levels to overcome the underestimation inherent in DFT calculations. The Zn and O valence corrections were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated based on the GGA-PBE+U functional within the ultrasoft pseudopotential method.
In conclusion, the incorporation of Rietveld analysis into first-principles calculation was validated, as the resulting properties were comparable with those reported in the literature. The time taken to evaluate certain properties via physical testing can thus be eliminated, since the evaluation can instead be performed computationally.
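As a quick consistency check on the refined lattice parameters, the c/a ratio can be compared against the ideal wurtzite value of √(8/3) ≈ 1.633. The comparison itself is an illustrative aside, not part of the reported analysis:

```python
a = 3.2498  # Å, Rietveld-refined lattice parameter
c = 5.2068  # Å
ideal = (8 / 3) ** 0.5  # ideal wurtzite c/a ratio, ~1.633

ratio = c / a
print(f"c/a = {ratio:.4f}, ideal = {ideal:.4f}, "
      f"deviation = {100 * (ratio - ideal) / ideal:.2f}%")
```

The refined ratio of about 1.602 sits slightly below the ideal value, as is typical for real wurtzite ZnO, lending credibility to the refinement used as input for the DFT step.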

Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles

Procedia PDF Downloads 284
329 Sexual Health and Male Fertility: Improving Sperm Health with Focus on Technology

Authors: Diana Peninger

Abstract:

Over 10% of couples in the U.S. have infertility problems, with roughly 40% traceable to the male partner. Yet, little attention has been given to improving men’s contribution to the conception process. One solution showing promise in increasing conception rates for IVF and other assisted reproductive technology treatments is a first-of-its-kind semen collection device engineered to mitigate the sperm damage caused by traditional collection methods. Patients are able to collect semen at home and deliver it to clinics within 48 hours for use in fertility analysis and treatment, with less stress and improved specimen viability. This abstract shares these findings along with expert insight and tips to help attendees understand the key role sperm collection plays in addressing and treating reproductive issues, while helping to improve patient outcomes and success. Our research aimed to determine whether male reproductive outcomes can be improved by improving sperm specimen health with a focus on technology. We utilized a redesigned semen collection cup (patented as the Device for Improved Semen Collection/DISC, U.S. Patent 6864046, known commercially as ProteX) that met a series of physiological parameters. Previous research demonstrated significant improvement in semen parameters (forward motility, progression, viability, and longevity) and overall sperm biochemistry when the DISC is used for collection. Animal studies have also shown dramatic increases in pregnancy rates. Our current study compares samples collected in the DISC, next-generation DISC (DISCng), and a standard specimen cup (SSC): dry, with a measured 1 mL of media, and with media in excess (5 mL). Both human and animal testing are included. With sperm counts declining at alarming rates due to environmental, lifestyle, and other health factors, accurate evaluations of sperm health are critical to understanding reproductive health and the origins and treatments of infertility.
An increase in sperm health, as measured by extensive semen parameter analysis, was also demonstrated, with semen parameters remaining stable for 48 hours, thereby expanding the processing window from 1 hour to 48 hours.

Keywords: reproductive, sperm, male, infertility

Procedia PDF Downloads 105
328 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). To validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave, and the slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. the distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise-time.
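The magnitude range quoted above maps to seismic moment through the standard moment-magnitude relation Mw = (2/3)(log10 Mo − 9.1), with Mo in N·m. This is a textbook relation, not a formula from this abstract:

```python
import math

def moment_from_mw(mw):
    """Seismic moment M0 (N*m) from moment magnitude Mw."""
    return 10 ** (1.5 * mw + 9.1)

def mw_from_moment(m0):
    """Inverse relation: Mw from seismic moment M0 (N*m)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

m0 = moment_from_mw(7.0)
print(f"Mw 7.0 -> M0 = {m0:.2e} N*m")  # -> 3.98e+19 N*m
```

So the Mw > 7.0 models correspond to seismic moments above about 4 × 10¹⁹ N·m, the quantity against which the rupture area and slip scaling relations are plotted.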

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 123
327 Optimizing Fermented Paper Production Using Spirogyra sp. Interpolating with Banana Pulp

Authors: Hadiatullah, T. S. D. Desak Ketut, A. A. Ayu, A. N. Isna, D. P. Ririn

Abstract:

Spirogyra sp. is a genus of microalgae with a high carbohydrate content, which makes it a good medium for bacterial fermentation to produce cellulose. This study aimed to determine the effect of banana pulp on the fermented paper production process using Spirogyra sp. and to characterize the paper product. The method includes the production of bacterial cellulose, an assay of the effect of interpolating banana pulp into fermented paper made using Spirogyra sp., and assays of paper characteristics including grammage, water absorption, thickness, tensile resistance, tear resistance, density, and organoleptic properties. Experiments were carried out in a completely randomized design with variation of the treatment concentration in fermented paper production interpolating banana pulp using Spirogyra sp. Data for each parameter were analyzed by ANOVA, followed by a significant difference test at the 5% level using SPSS. The nata production results indicate that the different carbon sources (glucose and sugar) did not differ significantly in the cellulose parameter assays; significantly different results were found only for the control treatment. Although not significantly different, sugar as the added carbon source showed higher potency to produce cellulose. The characteristic assays of the fermented paper showed that the grammage of the control treatment, without interpolation of a carbon source and banana pulp, was higher than with banana pulp interpolation: the control grammage of 260 gsm is suited to cardboard, while the grammage of paper produced with banana pulp interpolation, about 120-200 gsm, is suited to magazine and art paper.
Based on the density, weight, water absorption, and organoleptic assays, the highest results were obtained in the banana pulp interpolation treatment with sugar as the carbon source: 14.28 g/m², 0.02 g, and 0.041 g/cm²·min. The conclusion is that nata-based paper interpolated with sugar and banana pulp is a promising formulation for producing high-quality paper.

Keywords: cellulose, fermentation, grammage, paper, Spirogyra sp.

Procedia PDF Downloads 307
326 Medical Decision-Making in Advanced Dementia from the Family Caregiver Perspective: A Qualitative Study

Authors: Elzbieta Sikorska-Simmons

Abstract:

Advanced dementia is a progressive terminal brain disease accompanied by a syndrome of difficult-to-manage symptoms and complications that eventually lead to death. The management of advanced dementia poses major challenges to family caregivers who act as patient health care proxies in making medical treatment decisions. Little is known, however, about how they manage advanced dementia and how their treatment choices influence the quality of patient life. This prospective qualitative study examines the key medical treatment decisions that family caregivers make while managing advanced dementia. The term ‘family caregiver’ refers to a relative or a friend who is primarily responsible for managing the patient’s medical care needs and legally authorized to give informed consent for medical treatments. Medical decision-making implies a process of choosing between treatment options in response to the patient’s medical care needs (e.g., worsening comorbid conditions, pain, infections, acute medical events). Family caregivers engage in this process when they actively seek treatments or follow recommendations by healthcare professionals. Better understanding of medical decision-making from the family caregiver perspective is needed to design interventions that maximize the quality of patient life and limit inappropriate treatments. Data were collected in three waves of semi-structured interviews with 20 family caregivers of patients with advanced dementia. A purposive sample of 20 family caregivers was recruited from a senior care center in Central Florida. The qualitative personal interviews were conducted by the author at 4-5 month intervals. Ethical approval for the study was obtained prior to data collection. Advanced dementia was operationalized as stage five or higher on the Global Deterioration Scale (GDS) (i.e., starting with a GDS score of five, patients are no longer able to survive without assistance due to major cognitive and functional impairments).
Information about patients’ GDS scores was obtained from the Center’s Medical Director, who had an in-depth knowledge of each patient’s health and medical treatment history. All interviews were audiotaped and transcribed verbatim. The qualitative data analysis was conducted to answer the following research questions: 1) what treatment decisions do family caregivers make while managing the symptoms of advanced dementia and 2) how do these treatment decisions influence the quality of patient life? To validate the results, the author asked each participating family caregiver if the summarized findings accurately captured his/her experiences. The identified medical decisions ranged from seeking specialist medical care to end-of-life care. The most common decisions were related to arranging medical appointments, medication management, seeking treatments for pain and other symptoms, nursing home placement, and accessing community-based healthcare services. The most challenging and consequential decisions were related to the management of acute complications, hospitalizations, and discontinuation of treatments. Decisions that had the greatest impact on the quality of patient life and survival were triggered by traumatic falls, worsening psychiatric symptoms, and aspiration pneumonia. The study findings have important implications for geriatric nurses in the context of patient/caregiver-centered dementia care. Innovative nursing approaches are needed to support family caregivers to effectively manage medical care needs of patients with advanced dementia.

Keywords: advanced dementia, family caregiver, medical decision-making, symptom management

Procedia PDF Downloads 100
325 Comparison of Yb and Tm-Fiber Laser Cutting Processes of Fiber Reinforced Plastics

Authors: Oktay Celenk, Ugur Karanfil, Iskender Demir, Samir Lamrini, Jorg Neumann, Arif Demir

Abstract:

Due to their favourable material characteristics, fiber reinforced plastics are among the main topics of all current lightweight construction megatrends. Especially in transportation, ranging from aeronautics over the automotive industry to naval transportation (yachts, cruise liners), the expected economic and environmental impact is huge. In naval transportation, components like yacht bodies, antenna masts, and decorative structures such as deck lamps, light houses, and pool areas represent cheap and robust solutions. Commercially available laser tools like carbon dioxide (CO₂) gas lasers, frequency-tripled solid state UV lasers, and Neodymium-YAG (Nd:YAG) lasers can be used; these tools have emission wavelengths of 10 µm, 0.355 µm, and 1.064 µm, respectively. The first scientific goal is the generation of a parameter matrix for laser processing of each used material with a Tm-fiber laser system (wavelength 2 µm). These parameters include the heat affected zone, process gas pressure, work piece feed velocity, intensity, and irradiation time. The results are compared with results obtained with well-known material processing lasers, such as Yb-fiber lasers (wavelength 1 µm). Compared to the CO₂ laser, the Tm laser offers essential advantages for future laser processes like cutting, welding, ablating for repair, and drilling in composite part manufacturing (components of cruise liners, marine pipelines). Among these are the possibility of beam delivery in a standard fused silica fiber, which enables hand-guided processing; eye safety, which results from the wavelength; and excellent beam quality and brilliance due to the fiber nature. A further feature that is economically important for boat, automotive, and military manufacturing is that the 2 µm wavelength is highly absorbed by the plastic matrix and thus enables its selective removal for repair procedures.

Keywords: Thulium (Tm) fiber laser, laser processing of fiber-reinforced plastics (FRP), composite, heat affected zone

Procedia PDF Downloads 175
324 Analysis of Friction Stir Welding Process for Joining Aluminum Alloy

Authors: A. M. Khourshid, I. Sabry

Abstract:

Friction stir welding (FSW), a solid-state joining technique, is widely used for joining Al alloys in aerospace, marine, automotive, and many other applications of commercial importance. FSW was carried out on Al 5083 alloy pipe using a vertical milling machine. The pipe sections were relatively small in diameter (5 mm) and relatively thin-walled (2 mm). In this study, 5083 aluminum alloy pipes were welded as similar-alloy joints by FSW in order to investigate their mechanical and microstructural properties, at a rotation speed of 1400 rpm and weld speeds of 10, 40, and 70 mm/min. To investigate the effect of welding speed on mechanical properties, metallographic and mechanical tests, including Vickers hardness profiles and tensile tests of the joints, were carried out on the welded areas. As a metallurgical feasibility study of friction stir welding of Al 6061 aluminum alloy, welding was also performed on pipes of different wall thicknesses (2, 3, and 4 mm), at five rotational speeds (485, 710, 910, 1120, and 1400 rpm) and traverse speeds of 4, 8, and 10 mm/min. This work applies two methods, artificial neural networks (using the Pythia software) and response surface methodology (RSM), to predict the tensile strength, percentage elongation, and hardness of friction-stir-welded 6061 aluminum alloy. An artificial neural network (ANN) model was developed for the analysis of the friction stir welding parameters of 6061 pipe; the tensile strength, percentage elongation, and hardness of the weld joints were predicted as functions of tool rotation speed, material thickness, and travel speed, and a comparison was made between measured and predicted data. A response surface methodology (RSM) model was also developed, and its predicted tensile strength, percentage elongation, and hardness were compared with measured values. The effect of the FSW process parameters on the mechanical properties of 6061 aluminum alloy is analyzed in detail.
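
The second-order response surface fit used in such RSM studies can be sketched in a few lines. The data points below are hypothetical placeholders, not the study's measurements, and numpy's general least-squares routine stands in for a dedicated RSM package:

```python
import numpy as np

def design_matrix(n_rpm, v_feed):
    """Second-order response-surface terms: 1, N, v, N^2, v^2, N*v."""
    n = np.asarray(n_rpm, dtype=float)
    v = np.asarray(v_feed, dtype=float)
    return np.column_stack([np.ones_like(n), n, v, n**2, v**2, n * v])

def fit_rsm(n_rpm, v_feed, y):
    """Least-squares fit of the quadratic response surface."""
    X = design_matrix(n_rpm, v_feed)
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

def predict(beta, n_rpm, v_feed):
    return design_matrix(n_rpm, v_feed) @ beta

# Hypothetical readings: rotation speed (rpm), traverse speed (mm/min),
# tensile strength (MPa). Illustrative numbers, not the paper's data.
N = [485, 710, 910, 1120, 1400, 485, 910, 1400, 710]
v = [4, 4, 8, 8, 10, 10, 4, 8, 10]
uts = [148, 160, 172, 176, 168, 150, 156, 174, 163]

beta = fit_rsm(N, v, uts)
print(predict(beta, [910], [8]))  # predicted strength at 910 rpm, 8 mm/min
```

The same design matrix can then be used to locate the stationary point of the fitted surface, which is how RSM identifies optimum welding parameters.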

Keywords: friction stir welding (FSW), Al alloys, mechanical properties, microstructure

Procedia PDF Downloads 430
323 Teachers’ Protective Factors of Resilience Scale: Factorial Structure, Validity and Reliability Issues

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Recently developed scales have addressed teachers' resilience specifically. Although they benefited the field, they do not include some of the critical protective factors of teachers' resilience identified in the literature. To address this limitation, we aimed at designing a more comprehensive scale for measuring teachers' resilience that encompasses various personal and environmental protective factors. To this end, two studies were carried out. In Study 1, 407 primary school teachers were tested with the new scale, the Teachers' Protective Factors of Resilience Scale (TPFRS). Similar scales, such as the Multidimensional Teachers' Resilience Scale and the Teachers' Resilience Scale, were used to test the convergent validity, while the Maslach Burnout Inventory and the Teachers' Sense of Efficacy Scale were used to assess the discriminant validity of the new scale. The factorial structure of the TPFRS was checked with confirmatory factor analysis, and a good fit of the model to the data was found. Next, item response theory analysis using a two-parameter logistic model (2PL) was applied to check the items within each factor; it revealed that 9 items did not fit their corresponding factors well, and they were removed. The final version of the TPFRS includes 29 items, which assess six protective factors of teachers' resilience: values and beliefs (5 items, α=.88), emotional and behavioral adequacy (6 items, α=.74), physical well-being (3 items, α=.68), relationships within the school environment (6 items, α=.73), relationships outside the school environment (5 items, α=.84), and the legislative framework of education (4 items, α=.83). The results show satisfactory convergent and discriminant validity. Study 2, in which 964 primary and secondary school teachers were tested, confirmed the factorial structure of the TPFRS as well as its discriminant validity, which was tested with the Schutte Emotional Intelligence Scale-Short Form.
In conclusion, our results showed that the TPFRS is a new multidimensional instrument valid for assessing teachers' protective factors of resilience, and it can be safely used in future research and interventions in the teaching profession.
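
The two-parameter logistic model used in the item analysis has a simple closed form; a minimal sketch follows, with illustrative parameter values rather than the scale's calibrated ones:

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of endorsing an
    item given trait level theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A respondent whose trait level equals the item difficulty endorses the
# item with probability 0.5, whatever the discrimination.
print(p_2pl(theta=0.0, a=1.5, b=0.0))  # 0.5

# Items with very low discrimination barely separate respondents across the
# trait range -- one reason poorly fitting items get dropped from a scale.
print(p_2pl(2.0, 0.1, 0.0) - p_2pl(-2.0, 0.1, 0.0))
```

In a real calibration, a and b are estimated per item (e.g., by marginal maximum likelihood), and item fit statistics drive the removal decisions described above.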

Keywords: resilience, protective factors, teachers, item response theory

Procedia PDF Downloads 59
322 Candida antartica Lipase Assisted Enrichment of n-3 PUFA in Indian Sardine Oil

Authors: Prasanna Belur, P. R. Ashwini, Sampath Charanyaa, I. Regupathi

Abstract:

Indian oil sardines (Sardinella longiceps) are one of the richest and cheapest sources of n-3 polyunsaturated fatty acids (n-3 PUFA) such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA). The health benefits conferred by n-3 PUFA consumption in the prevention and treatment of coronary, neuromuscular, immunological, and allergic conditions are well documented. Natural refined Indian sardine oil generally contains about 25% (w/w) n-3 PUFA, along with various unsaturated and saturated fatty acids, in the form of mono-, di-, and triglycerides. A high concentration of n-3 PUFA in the glyceride form is most desirable for human consumption, to obtain maximum health benefits. Thus, enhancing the n-3 PUFA content while retaining it in the glyceride form, using green technology, is the need of the hour. In this study, refined Indian sardine oil was subjected to selective hydrolysis by Candida antartica lipase to enhance its n-3 PUFA content. The degree of hydrolysis and the enhancement of n-3 PUFA content were estimated by determining the acid value, iodine value, and EPA and DHA content (by gas chromatography after derivatization) before and after hydrolysis. The reaction parameters pH, temperature, enzyme load, lipid-to-aqueous-phase volume ratio, and incubation time were optimized using a one-parameter-at-a-time approach. Incubating the enzyme solution with refined sardine oil at a volume ratio of 1:1, pH 7.0, and 50 °C for 60 minutes, with an enzyme load of 60 mg/ml, was found to be optimum. After the enzymatic treatment, the oil was refined to remove free fatty acids and moisture using a previously optimized refining technology. Enzymatic treatment at the optimal conditions resulted in a 12.11% enhancement in the degree of hydrolysis; the iodine value increased by 9.7%, and the n-3 PUFA content was enhanced by 112% (w/w).
Selective enrichment of n-3 PUFA glycerides, with enzymatic removal of saturated and unsaturated fatty acids from the oil, is an interesting proposition, as the technique is environment-friendly and cost-effective and provides a natural source of n-3 PUFA-rich oil.
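
As a quick arithmetic illustration of the enhancement figures reported above, a relative-change helper can be sketched; the 53% end value used below is inferred for illustration only and is not stated in the abstract:

```python
def percent_enhancement(before, after):
    """Relative enhancement of a measured property, in percent."""
    return (after - before) * 100.0 / before

# Illustrative check of the reported 112% (w/w) n-3 PUFA enhancement: a rise
# from ~25% (w/w) in the refined oil to ~53% (w/w) after hydrolysis.
# (The 53% end value is a hypothetical figure consistent with the abstract.)
print(percent_enhancement(25.0, 53.0))  # 112.0
```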

Keywords: Candida antartica, lipase, n-3 polyunsaturated fatty acids, sardine oil

Procedia PDF Downloads 203
321 The Review and Contribution of Taiwan Government Policies on Environmental Impact Assessment to Water Recycling

Authors: Feng-Ming Fan, Xiu-Hui Wen, Po-Feng Chen, Yi-Ching Tu

Abstract:

Because of natural conditions and man-made damage, the insufficiency of water resources in Taiwan is a pressing issue that must be faced immediately. Regulations and laws on water resources protection and recycling are gradually being completed, but a specific method for checking the effectiveness of water recycling is still lacking. This research focused on industrial parks that had already been certified under EIA, in order to establish a professional checking system, carry it through, and thereby contribute to the sustainable use of water resources. Taiwan's government policies on Environmental Impact Assessment were established in 1994, and some development projects were requested to meet a certain water recycling ratio for the effective use of water resources. Because a wide variety of companies are stationed in industrial parks, their water use covers virtually every category. To effectively control how industrial parks execute the water and wastewater recycling ratios committed to under EIA, we invited experts and scholars in this field to discuss with the related agencies and formulate the policy and audit plan. In addition, meetings were convened to set standard water-balance diagrams and recycling parameters. We selected nine industrial parks that had been requested to meet a certain water recycling ratio at the EIA examination stage and then, according to their water usage quantities, audited 340 factories in these industrial parks through on-site and document examinations. The results were fruitful: the average water usage per unit area per year across the examined industrial parks was 31,000 tons/hectare/year, just half of the average for Taiwanese industry. It is obvious that industrial parks with EIA commitments can decrease water consumption effectively. Taiwan's Environmental Impact Assessment policies took the follow-up tracking function into consideration from the beginning.
The results of this research verify the importance of implementing water recycling, as committed to under EIA, to save water resources. Inducing development units to follow their EIA commitments, and thereby balance environmental protection with economic development, is one of the important values of EIA.

Keywords: Taiwan government policies of environmental impact assessment, water recycling ratio of EIA commitment, water resources sustainable usage, water recycling

Procedia PDF Downloads 197
320 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. Several statistical methods allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach to inference that relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking their computational time into account.
We demonstrate likelihood-free inference by analysing a model of the repressilator using both methods. A detailed investigation is performed to quantify the difference between the methods in terms of efficiency and computational cost.
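
The ABC rejection idea can be sketched on a toy birth-death CTMC simulated with the Gillespie algorithm; the model, rates, prior, and tolerance below are illustrative stand-ins for the repressilator setup, not the paper's configuration:

```python
import random

def gillespie_birth_death(k_prod, k_deg, x0, t_end, rng):
    """Stochastic simulation (Gillespie) of a birth-death CTMC:
    production at constant rate k_prod, degradation at rate k_deg * x."""
    t, x = 0.0, x0
    while True:
        total_rate = k_prod + k_deg * x
        if total_rate == 0.0:
            return x
        t += rng.expovariate(total_rate)   # exponential waiting time
        if t >= t_end:
            return x
        if rng.random() < k_prod / total_rate:
            x += 1                         # production event
        else:
            x -= 1                         # degradation event

def abc_rejection(observed, prior, eps, n_sims, rng):
    """ABC rejection: draw k_prod from a uniform prior, simulate the model,
    and accept draws whose simulated end-state is within eps of the data."""
    lo, hi = prior
    accepted = []
    for _ in range(n_sims):
        k = rng.uniform(lo, hi)
        sim = gillespie_birth_death(k, 0.1, 0, 10.0, rng)
        if abs(sim - observed) <= eps:
            accepted.append(k)
    return accepted

rng = random.Random(42)
observed = gillespie_birth_death(5.0, 0.1, 0, 10.0, rng)  # synthetic "data"
posterior = abc_rejection(observed, (0.5, 15.0), eps=5.0, n_sims=2000, rng=rng)
print(len(posterior))
if posterior:
    print(sum(posterior) / len(posterior))  # crude posterior mean for k_prod
```

Shrinking eps tightens the approximation to the true posterior at the cost of more rejections, which is exactly the efficiency trade-off the abstract discusses.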

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 182
319 Effect of Non-Regulated pH on the Dynamics of Dark Fermentative Biohydrogen Production with Suspended and Immobilized Cell Culture

Authors: Joelle Penniston, E. B. Gueguim-Kana

Abstract:

Biohydrogen has been identified as a promising alternative to non-renewable fossil reserves, owing to its sustainability and non-polluting nature. pH is considered a key parameter in fermentative biohydrogen production processes due to its effect on hydrogenase activity, metabolic activity, and substrate hydrolysis. The present study assesses the influence of pH regulation on dark fermentative biohydrogen production. Four experimental hydrogen production schemes were evaluated: suspended cells under pH-regulated growth conditions (Sus_R), suspended cells under non-regulated pH (Sus_N), alginate-immobilized cells under pH-regulated growth conditions (Imm_R), and immobilized cells under non-regulated pH (Imm_N). All experiments were carried out at 37.5 °C with glucose as the sole carbon source. Sus_R showed a lag time of 5 hours, a peak hydrogen fraction of 36%, and 37% glucose degradation, compared to Sus_N, which showed a peak hydrogen fraction of 44% and complete glucose degradation. Both suspended culture systems showed a higher peak biohydrogen fraction than the immobilized cell systems. Imm_R showed a lag phase of 8 hours and a peak biohydrogen fraction of 35%, while Imm_N showed a lag phase of 5 hours and a peak biohydrogen fraction of 22%; 100% glucose degradation was observed in both the pH-regulated and non-regulated immobilized processes. This study showed that biohydrogen production in batch mode with suspended cells in a non-regulated pH environment results in partial degradation of the substrate, with a lower yield. This scheme has been the culture mode of choice in most reported biohydrogen studies.
The shallower slope of the pH trend in the non-regulated experiment with immobilized cells (Imm_N) compared to Sus_N revealed that immobilized systems have a better buffering capacity than suspended systems, which allows extended biohydrogen production even under non-regulated pH conditions. However, alginate-immobilized cultures in flask systems showed some drawbacks associated with the high rate of gas production, which increases the buoyancy of the immobilization beads and ultimately impedes the release of gas from the flask.
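
Batch dark-fermentation curves with a lag phase, like those summarized above, are commonly fitted with the modified Gompertz model; the sketch below uses the 5 h lag reported for Sus_R but otherwise hypothetical parameters (the abstract itself does not fit this model):

```python
import math

def gompertz_h2(t, p_max, r_max, lag):
    """Modified Gompertz model of cumulative biohydrogen production:
    p_max = production potential (mL), r_max = maximum production rate
    (mL/h), lag = lag time (h). A standard description of batch curves."""
    return p_max * math.exp(-math.exp(r_max * math.e / p_max * (lag - t) + 1.0))

# Hypothetical curve with a 5 h lag (the value observed for Sus_R); the
# potential and rate figures are illustrative assumptions.
for t in (0, 5, 10, 20, 40):
    print(t, round(gompertz_h2(t, p_max=120.0, r_max=15.0, lag=5.0), 2))
```

Fitting p_max, r_max, and lag to measured gas volumes is how the lag times quoted in the abstract are typically extracted.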

Keywords: biohydrogen, sustainability, suspended, immobilized

Procedia PDF Downloads 318
318 Computational System for the Monitoring Ecosystem of the Endangered White Fish (Chirostoma estor estor) in the Patzcuaro Lake, Mexico

Authors: Cesar Augusto Hoil Rosas, José Luis Vázquez Burgos, José Juan Carbajal Hernandez

Abstract:

The white fish (Chirostoma estor estor) is an endemic species that inhabits Patzcuaro Lake, located in Michoacan, Mexico, and is an important source of gastronomic and cultural wealth for the area. The species has undergone an immense depopulation due to overfishing, contamination, and eutrophication of the lake water, which could result in its extinction. This work proposes a new computational model for monitoring and assessing the critical environmental parameters of the white fish ecosystem. Following an Analytic Hierarchy Process, a mathematical model is built by assigning each environmental parameter a weight according to its water-quality importance in the ecosystem. An advanced system for the monitoring, analysis, and control of water quality is then built in the LabVIEW virtual environment. As a result, we obtain a global score that indicates the condition level of the water quality in the Chirostoma estor ecosystem (excellent, good, regular, or poor), supporting effective decision-making about the environmental parameters that affect the proper culture of the white fish, such as temperature, pH, and dissolved oxygen. In situ evaluations show regular conditions for successful reproduction and growth of this species, with water quality tending toward regular levels. This system emerges as a suitable tool for water management: future laws regulating the white fish fishery should reduce the mortality rate in the early stages of development of the species, which represent the most critical phase, and can guarantee better population sizes than those currently obtained in aquaculture. The main benefit will be a contribution to maintaining the cultural and gastronomic wealth of the area and of its inhabitants, since the white fish is an important food and source of income for the region, but the species is endangered.
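
The AHP-weighted scoring described above can be sketched as follows; the parameter weights, ideal ranges, and classification thresholds are illustrative assumptions, not the values derived in the study:

```python
def water_quality_index(readings, weights, ideal_ranges):
    """AHP-style weighted water-quality score in [0, 1]: each parameter
    scores 1 inside its ideal range and decays linearly with relative
    deviation outside it; the weights (summing to 1) would come from the
    AHP pairwise comparisons."""
    total = 0.0
    for name, w in weights.items():
        lo, hi = ideal_ranges[name]
        x = readings[name]
        if lo <= x <= hi:
            sub = 1.0
        else:
            edge = lo if x < lo else hi
            sub = max(0.0, 1.0 - abs(x - edge) / edge)
        total += w * sub
    return total

def classify(score):
    """Map the global score to the four condition levels named in the
    abstract. Thresholds are illustrative, not the paper's."""
    if score >= 0.9:
        return "excellent"
    if score >= 0.75:
        return "good"
    if score >= 0.5:
        return "regular"
    return "poor"

# Hypothetical AHP weights and ideal ranges for a warm-water lake culture.
weights = {"temperature": 0.3, "pH": 0.3, "dissolved_oxygen": 0.4}
ideal = {"temperature": (18, 24), "pH": (6.5, 8.5), "dissolved_oxygen": (5, 12)}
score = water_quality_index({"temperature": 21, "pH": 7.2, "dissolved_oxygen": 6.0},
                            weights, ideal)
print(classify(score))
```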

Keywords: Chirostoma estor estor, computational system, LabVIEW, white fish

Procedia PDF Downloads 296
317 Exergetic Optimization on Solid Oxide Fuel Cell Systems

Authors: George N. Prodromidis, Frank A. Coutelieris

Abstract:

Biogas can currently be considered an alternative option for electricity production, mainly due to its high energy content (as a hydrocarbon-rich source), its renewable status, and its relatively low utilization cost. Solid Oxide Fuel Cell (SOFC) stacks convert a fuel's chemical energy to electricity with high efficiency and offer significant fuel flexibility combined with a lower emissions rate, especially when utilizing biogas. Electricity production from biogas constitutes a composite problem that calls for an extensive parametric analysis over numerous dynamic variables. The main scope of the presented study is to propose a detailed thermodynamic model for the optimization of SOFC-based power plant operation, grounded in fundamental thermodynamics and in energy and exergy balances. This model, named THERMAS (THERmodynamic MAthematical Simulation model), mathematically simulates each individual process during electricity production for different case studies that represent real-life operational conditions. THERMAS also offers the opportunity to choose from a great variety of values for each operational parameter individually, thus allowing studies within unexplored and experimentally inaccessible operational ranges. Finally, THERMAS innovatively incorporates a specific criterion, derived from the extensive energy analysis, to identify the optimal scenario per simulated system in exergy terms. Several dynamical parameters as well as several biogas mixture compositions have been taken into account, to cover the possible cases. Through the optimization process in terms of an innovative OPtimization Factor (OPF), presented here, this research study reveals that systems supplied by low-methane fuels can be comparable to those supplied by pure methane.
In conclusion, such a simulation model points toward the optimal design of SOFC-stack-based systems, in the direction of commercializing systems that utilize biogas.
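
The exergy bookkeeping behind comparing low-methane and pure-methane fuels can be sketched as follows; the methane chemical-exergy figure is a standard literature value taken as an assumption here, and the CO₂ fraction of the biogas is treated as exergetically inert:

```python
# Standard molar chemical exergy of methane (Szargut-type reference
# environment), ~831.65 kJ/mol -- an assumed literature figure.
EX_CH4_KJ_PER_MOL = 831.65

def fuel_exergy_rate_kw(molar_flow_mol_s, ch4_fraction):
    """Chemical exergy input rate (kW) of a biogas stream, counting only
    its methane content."""
    return molar_flow_mol_s * ch4_fraction * EX_CH4_KJ_PER_MOL

def exergetic_efficiency(power_kw, molar_flow_mol_s, ch4_fraction):
    """Electrical output divided by chemical exergy input."""
    return power_kw / fuel_exergy_rate_kw(molar_flow_mol_s, ch4_fraction)

# A 60% CH4 biogas fed at a proportionally higher molar flow carries the
# same exergy as pure methane, so the two feeds can reach comparable
# exergetic efficiencies -- in line with the study's conclusion.
print(exergetic_efficiency(100.0, 0.25, 0.60))  # low-methane biogas
print(exergetic_efficiency(100.0, 0.15, 1.00))  # pure methane
```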

Keywords: biogas, exergy, efficiency, optimization

Procedia PDF Downloads 345
316 Study on Effectiveness of Strategies to Re-Establish Landscape Connectivity of Expressways with Reference to Southern Expressway Sri Lanka

Authors: N. G. I. Aroshana, S. Edirisooriya

Abstract:

Highway construction is the most prominent development trend in Sri Lanka, and with it numerous environmental and social issues have arisen. Landscape fragmentation is one of the main ways that expressway construction affects the environment. The Sri Lankan expressway system attempts to treat the fragmented landscape by using highway crossing structures. This paper presents a post-construction landscape study of the effectiveness of these landscape connectivity structures in restoring connectivity. The least-cost-path tool of a Geographic Information System (GIS) was used in two selected plots, covering 25 km along the expressway, to identify animal crossing paths. Animal accident data were used as a measure of which plot contributed most to landscape connectivity, and the number of patches, mean patch size, and class area were used as parameters to determine which land use class was most effective for re-establishing connectivity. The findings show that scrub, grass, and marsh were the land use typologies most positively affected in terms of increasing landscape connectivity, their extent having grown by 8% within 12 years. The least-cost analysis within plot one showed that 28.5% of all animal crossing structures lie within high-resistance land use classes. The Southern Expressway was built using reinforced compressed earth technologies, which has constrained the growth of the climax community. From these findings, it can be concluded that the landscape crossing structures contribute to re-establishing connectivity, but not enough to restore the majority of the disturbance caused by the expressway. The connectivity measures used in this study can serve as a tool for re-evaluating future highway crossing structures, and proper placement of these structures leads to an increased rate of connectivity.
The study recommends monitoring all stages of the project (pre-construction, construction, and post-construction) and the preliminary design, and applying the connectivity assessment strategies used in this research, to overcome the complications of re-establishing landscape connectivity using highway crossing structures that facilitate the growth of flora and fauna.
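
The least-cost-path analysis can be sketched with Dijkstra's algorithm on a small resistance grid; the grid values below are illustrative, not the study's land-use resistances:

```python
import heapq

def least_cost_path(resistance, start, goal):
    """Dijkstra over a 4-connected resistance grid, as in GIS least-cost-path
    analysis: the cost of a move is the resistance of the cell entered."""
    rows, cols = len(resistance), len(resistance[0])
    dist = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        d, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + resistance[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(frontier, (nd, (nr, nc)))
    return float("inf")

# Toy landscape: low resistance in scrub/grass (1), high resistance across
# the paved expressway corridor (9); a crossing structure opens row 2.
grid = [
    [1, 1, 9, 1, 1],
    [1, 1, 9, 1, 1],
    [1, 1, 1, 1, 1],
    [1, 1, 9, 1, 1],
]
print(least_cost_path(grid, (0, 0), (0, 4)))
```

Animal movement predicted by such a model funnels through the low-resistance row, which is how least-cost paths flag whether crossing structures sit where animals actually travel.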

Keywords: landscape fragmentation, least cost path, land use analysis, landscape connectivity structures

Procedia PDF Downloads 130
315 Development and Characterization of Cathode Materials for Sodium-Metal Chloride Batteries

Authors: C. D’Urso, L. Frusteri, M. Samperi, G. Leonardi

Abstract:

Solid metal halides are used as active cathode ingredients in Na-NiCl2 batteries, which require a fused secondary electrolyte, sodium tetrachloroaluminate (NaAlCl4), to facilitate the movement of Na+ ions into the cathode. The sodium-nickel chloride (Na-NiCl2) battery has been extensively investigated as a promising system for large-scale energy storage applications. The growth of Ni and NaCl particles in the cathode is one of the most important factors degrading the performance of the Na-NiCl2 battery: the larger the particles of the active ingredients in the cathode, the smaller the active surface available for the electrochemical reaction. Particle growth can therefore increase cell polarization through the reduced active area. A higher current density, a higher state of charge (SOC) at the end of charge (EOC), and a lower Ni/NaCl ratio are the main parameters that lead to rapid growth of Ni particles. In light of these problems, cathode nanomaterials with recognized and well-documented electrochemical functions have been studied and manufactured, with the aims of simultaneously improving battery performance and developing cheaper, better-performing, sustainable, and environmentally friendly materials. Starting from the well-known cathode material (Na-NiCl2), new cathode materials have been prepared by replacing nickel with iron (10-90% substitution of nickel with iron), to obtain a material with potential advantages over current battery technologies: for example, (1) a lower cost of the cathode material compared to the state of the art, and (2) cheaper material choices (stainless steels could be used for cell components, including cathode current collectors and cell housings).
The particle size and physicochemical characterization of the cathode were studied in a test cell using, where possible, the GITT method (galvanostatic intermittent titration technique). Furthermore, the impact of temperature on the different compositions of the positive electrode was studied; in particular, the optimum operating temperature is an important parameter of the active material.
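
Where the abstract invokes the GITT method, the standard Weppner-Huggins relation estimates the chemical diffusion coefficient from a single titration pulse; a sketch follows, with purely illustrative inputs (none of the numbers come from the study):

```python
import math

def gitt_diffusivity(tau_s, mass_g, v_molar, m_molar, area_cm2, d_es, d_et):
    """Weppner-Huggins GITT estimate of the chemical diffusion coefficient
    (cm^2/s): tau_s is the current-pulse duration (s), mass_g the active
    mass (g), v_molar the molar volume (cm^3/mol), m_molar the molar mass
    (g/mol), area_cm2 the electrode-electrolyte contact area, d_es the
    steady-state voltage change per pulse, and d_et the transient voltage
    change during the pulse. Valid for short pulses (tau << L^2/D)."""
    return (4.0 / (math.pi * tau_s)) \
        * (mass_g * v_molar / (m_molar * area_cm2)) ** 2 \
        * (d_es / d_et) ** 2

# Illustrative numbers only (hypothetical cell, not the study's data).
d = gitt_diffusivity(tau_s=600.0, mass_g=0.01, v_molar=20.0, m_molar=100.0,
                     area_cm2=1.0, d_es=0.005, d_et=0.05)
print(d)  # chemical diffusion coefficient, cm^2/s
```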

Keywords: critical raw materials, energy storage, sodium metal halide, battery

Procedia PDF Downloads 77
314 A Gendered Perspective of the Influence of Public Transport Infrastructural Design on Accessibility

Authors: Ajeni Ari, Chiara Maria Leva, Lorraine D’Arcy, Mary Kinahan

Abstract:

In addressing gender and transport, considerations of mobility disparities amongst users are important. Public transport (PT) policy and design do not efficiently account for the varied mobility practices of men and women, with the literature only recently showing a movement towards gender inclusion in transport. Evidently, transport policy and designs remain blind to this variation in mobility needs. The global movement towards sustainability highlights the need for expeditious strategies that could mitigate biases within the existing system. At the forefront of such a plan of action may be, in part, mandated inclusive infrastructural designs that stimulate user engagement with the transport system. Fundamentally, access requires a means or an opportunity for the user, which for PT is established by its physical environment and/or infrastructural design. In practice, this requires knowledge of shortcomings in the tangible and intangible aspects of the service offerings that allow access to opportunities. To shed light on existing biases in PT planning and design, this study analyses qualitative data to examine the opinions and lived experiences of transport users in Ireland. The findings show that infrastructural design plays a significant role in users' engagement with the service. Paramount to accessibility are service provisions that cater to both users' interactions and those of their dependents. Apprehension about using the service is more evident in women than in men, particularly while carrying out household duties and caring responsibilities at peak times or during dark hours. Furthermore, limitations are apparent in infrastructural service offerings that do not accommodate users' physical (dis)abilities, especially with respect to universal design. There are intersecting factors that impinge on accessibility, e.g., safety and security, yet the infrastructural design remains an important parameter influencing users' perceptual conditioning.
Additionally, the data disclose the need for users' intricacies to be factored into transport planning geared towards gender inclusivity, including mobility practices, travel purpose, transit time and location, and system integration.

Keywords: infrastructure design, public transport, accessibility, women, gender

Procedia PDF Downloads 49
313 Impact of Fischer-Tropsch Wax on Ethylene Vinyl Acetate/Waste Crumb Rubber Modified Bitumen: An Energy-Sustainability Nexus

Authors: Keith D. Nare, Mohau J. Phiri, James Carson, Chris D. Woolard, Shanganyane P. Hlangothi

Abstract:

In an energy-intensive world, minimizing energy consumption is paramount to saving costs and reducing the carbon footprint. Improving mixing procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA) modified bitumen highlights a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assayed, with a maximum loading of 2.5%. The rationale for this FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase-change abilities of FT wax enable EVA co-crystallization, with the support of the elastomeric backbone of the crumb rubber. FT wax loadings of less than 1% proved effective in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to mixture design was implemented across the different loadings of FT wax and EVA, for a consistent amount of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle, and rutting parameter) were the factors used as performance indicators of the different optimized mixtures. The low-temperature chemistry of the optimized mixtures was analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and black space diagrams were developed and used to predict age-induced cracking of the different long-term-aged mixtures. The modified binder rheology reveals that the strain response is not linear and that there is substantial rearrangement of polymer chains as stress is increased, depending on the age state of the mixture and on the FT wax and EVA loadings. Individual effects dominate over synergistic effects in the co-interaction of EVA and FT wax. The all-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting increased ease of workability.
The findings show that the interaction chemistry of bitumen, crumb rubber, EVA, and FT wax is of first and second order in all cases, involving both individual contributions and co-interaction amongst the components of the mixture.

Keywords: bitumen, crumb rubber, ethylene vinyl acetate, FT wax

Procedia PDF Downloads 147
312 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion

Authors: Esam Jassim

Abstract:

Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since particulates are responsible for the emission of undesired inorganic elements that directly impact the atmospheric pollution level. Fine and ultrafine particulates tend to escape flue-gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, increasing the tendency toward corrosion, reducing heat transfer in the thermal unit, and severely impacting human health. These adverse effects manifest particularly in the regions of the world where coal is the dominant source of energy. This study highlights the behaviour of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the type of calcium present on the coarse, fine, and ultrafine mode formation mechanisms is also presented, assessing the impact of two sub-bituminous coals on the evolution of particle size and calcium composition during combustion. Three blends, named Blends 1, 2, and 3, were selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed: the measured calcium distribution in ash is modelled using a Poisson distribution, and a novel parameter, the elemental index λ, is introduced as a measure of element distribution. The results show that calcium present in the original coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the element chiefly responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process.
The particle size distribution mechanism and the mineral species of the ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated in a utility-scale boiler.
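The Poisson modelling of the elemental index λ described above can be sketched as follows. This is a minimal illustration only: the per-particle calcium counts, sample sizes, and the `elemental_index` helper are hypothetical, not the authors' data or code.

```python
import numpy as np
from scipy import stats

def elemental_index(counts):
    """The maximum-likelihood estimate of the Poisson parameter lambda
    is simply the sample mean of the per-particle element counts."""
    return float(np.mean(counts))

# Hypothetical per-ash-particle calcium counts for the two calcium sources.
rng = np.random.default_rng(0)
mineral = rng.poisson(lam=17, size=5000)   # calcium present as mineral grains
organic = rng.poisson(lam=7, size=5000)    # organically associated calcium

lam_min = elemental_index(mineral)
lam_org = elemental_index(organic)

# Poisson PMF at the fitted lambda, e.g. probability of observing exactly 7 counts
p = stats.poisson.pmf(7, mu=lam_org)
```

Under this model, comparing fitted λ values (here recovering roughly 17 and 7) distinguishes mineral-grain calcium from organically associated calcium in the ash.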

Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics

Procedia PDF Downloads 309
311 Tool Wear of Metal Matrix Composite 10wt% AlN Reinforcement Using TiB2 Cutting Tool

Authors: M. S. Said, J. A. Ghani, C. H. Che Hassan, N. N. Wan, M. A. Selamat, R. Othman

Abstract:

Metal matrix composites (MMCs) have attracted considerable attention as a result of their ability to provide higher strength, modulus, toughness, impact properties, wear resistance, and corrosion resistance than the unreinforced alloy. Aluminium-silicon (Al/Si) alloy MMCs have been widely used in various industrial sectors such as transportation, domestic equipment, aerospace, military, and construction. The aluminium-silicon alloy studied here is an MMC reinforced with aluminium nitride (AlN) particles, a new-generation material for automotive and aerospace applications. AlN is an advanced material combining light weight with high strength, hardness, and stiffness, and it has good future prospects. However, the high degree of ceramic particle reinforcement and the irregular nature of the particles distributed through the matrix are the main causes of machining difficulties. This paper examines tool wear when milling AlSi/AlN metal matrix composite using a TiB2-coated carbide cutting tool. The volume fraction of the AlN reinforcement particles was 10%. The milling process was carried out under dry cutting conditions. Three sets of cutting parameters were used for the TiB2-coated carbide insert: (i) cutting speed 230 m/min, feed rate 0.4 mm/tooth, depth of cut (DOC) 0.5 mm; (ii) 300 m/min, 0.8 mm/tooth, DOC 0.5 mm; and (iii) 370 m/min, 0.8 mm/tooth, DOC 0.4 mm. A Sometech SV-35 video microscope system was used for the tool wear measurements. The results reveal that tool life increases with cutting speed: the high-speed condition (370 m/min, 0.8 mm/tooth, DOC 0.4 mm) gave the longest tool life of 123.2 min, while the medium-speed condition (300 m/min, 0.8 mm/tooth, DOC 0.5 mm) gave 119.86 min and the low-speed condition gave 119.66 min.
The high cutting speed thus gives the best parameters for cutting AlSi/AlN MMC materials. These results will help manufacturers machine AlSi/AlN MMC materials.

Keywords: AlSi/AlN metal matrix composite, milling process, tool wear, TiB2 coated carbide tool, manufacturing engineering

Procedia PDF Downloads 403
310 Development of a Mechanical Ventilator Using a Manual Artificial Respiration Unit

Authors: Isomar Lima da Silva, Alcilene Batalha Pontes, Aristeu Jonatas Leite de Oliveira, Roberto Maia Augusto

Abstract:

Context: Mechanical ventilators are medical devices that help provide oxygen and ventilation to patients with respiratory difficulties. This equipment consists of a manual breathing unit that can be operated by a doctor or nurse and a mechanical ventilator that controls the airflow and pressure in the patient's respiratory system. This type of ventilator is commonly used in emergencies and intensive care units where it is necessary to provide breathing support to critically ill or injured patients. Objective: In this context, this work aims to develop a reliable and low-cost mechanical ventilator to meet the demand of hospitals in treating people affected by COVID-19 and other severe respiratory diseases, offering a treatment alternative to the mechanical ventilators currently available on the market. Method: The project presents the development of a low-cost auxiliary ventilator with a controlled ventilatory system, assisted by integrated hardware and firmware for respiratory cycle control in non-invasive mechanical ventilation treatments using a manual artificial respiration unit. The hardware includes pressure sensors capable of measuring positive expiratory pressure, peak inspiratory flow, and injected air volume. The embedded system processes the data sent by the sensors and ensures efficient patient breathing through the operation of the sensors, microcontroller, and actuator, presenting patient data to the healthcare professional (system operator) through the graphical interface and enabling clinical parameter adjustments as needed. Results: Tests of the developed mechanical ventilator gave satisfactory results in terms of performance and reliability, showing that the equipment can be a viable alternative to the commercial mechanical ventilators currently available, offering a low-cost solution to the increasing demand for respiratory support equipment.
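One of the measurements described, the injected air volume, can be obtained by integrating the flow-sensor signal over the inspiratory phase. The sketch below assumes a hypothetical 100 Hz sampling rate and a half-sine flow profile; neither is taken from the actual device.

```python
import numpy as np

# Hypothetical flow-sensor samples (L/s) over a 1 s inspiratory phase,
# sampled at 100 Hz; the half-sine profile is illustrative only.
t = np.linspace(0.0, 1.0, 101)
flow = 0.5 * np.sin(np.pi * t)

# Injected air volume = time integral of flow (trapezoidal rule),
# converted from litres to millilitres.
dt = t[1] - t[0]
volume_ml = float(np.sum((flow[:-1] + flow[1:]) / 2.0) * dt * 1000.0)
```

For this profile the integral is 1000/π ≈ 318 mL; in firmware the same accumulation would run sample by sample inside the control loop.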

Keywords: mechanical ventilators, breathing, medical equipment, COVID-19, intensive care units

Procedia PDF Downloads 40
309 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive-feature-based speech recognition domain. The study enhances the legacy tool 'xkl' by integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) and by incorporating reassigned spectrogram methodologies that enable detailed acoustic analysis. The reassigned spectrogram fusion within 'xkl' particularly improves the precision of vowel formant estimation, which in turn yields a substantial gain in landmark detection accuracy over conventional methods. The proposed model combines CNNs and RNNs with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing it to capture intricate dependencies within Italian speech vowels and making it highly adaptable within the distinctive-feature domain. Furthermore, the temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology.
In rigorous testing on the LaMIT database, consisting of speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates strong performance, achieving a 95% true detection rate and a 10% false detection rate. Most missed landmarks were observed in proximity to reduced vowels. These results underscore the robust identifiability of landmarks within the speech waveform and establish the feasibility of employing a landmark detector as the front end of a speech recognition system. The integration of reassigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding constitutes a significant advancement in Italian speech vowel landmark detection. This work contributes a methodologically rigorous framework for improving landmark detection accuracy in Italian speech vowels, and it establishes a foundation for future advances in speech signal processing and for practical applications requiring robust speech recognition systems.
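A minimal sketch of how true and false detection rates of the kind reported above can be computed, assuming each reference landmark is matched to the nearest unmatched detection within a fixed time tolerance. The landmark times and the 20 ms window below are hypothetical, not taken from the LaMIT evaluation.

```python
import numpy as np

def detection_rates(ref_ms, det_ms, tol_ms=20.0):
    """Greedily match each reference landmark to the nearest unmatched
    detection within tol_ms. Unmatched detections count as false detections.
    Returns (true_detection_rate, false_detection_rate)."""
    ref = np.asarray(ref_ms, dtype=float)
    det = np.asarray(det_ms, dtype=float)
    matched = np.zeros(len(det), dtype=bool)
    hits = 0
    for r in ref:
        d = np.abs(det - r)
        d[matched] = np.inf          # each detection may match only once
        j = int(np.argmin(d))
        if d[j] <= tol_ms:
            matched[j] = True
            hits += 1
    tdr = hits / len(ref)
    fdr = (len(det) - matched.sum()) / len(det)
    return tdr, fdr

# Hypothetical landmark times (ms): one missed reference (600 ms)
# and one spurious detection (950 ms).
ref = [100, 250, 400, 600, 800]
det = [102, 248, 405, 790, 950]
tdr, fdr = detection_rates(ref, det)
```

On this toy example the function returns a true detection rate of 0.8 and a false detection rate of 0.2.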

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 24
308 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The present power control method, used to control the fission process in the RTP, is a Feedback Control Algorithm (FCA) based on a conventional proportional-integral (PI) controller. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system's power-tracking performance can be considered unsatisfactory, and there is potential to improve it by developing a novel, next-generation core power controller. In this paper, a dual-mode prediction scheme, proposed within an Optimal Model Predictive Control (OMPC) framework, is presented in a state-space model to control the core power. The core power control model is based on mathematical models of the reactor core, the OMPC formulation, and a control rod selection algorithm. The mathematical models of the reactor core comprise neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, is based on the implementation of a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the infinite-horizon summation in the cost function, is intended to eliminate some of the fundamental weaknesses of conventional MPC. This paper shows how OMPC deals with tracking, regulation, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and the OMPC are compared by numerical simulations. In conclusion, the proposed OMPC shows significantly better performance in load tracking and in regulating core power for a nuclear reactor, with guaranteed closed-loop stability.
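The terminal-mode LQR ingredient of such a dual-mode scheme can be sketched as below. The plant matrices and weights are illustrative placeholders, not the actual RTP neutronic/thermal-hydraulic model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative second-order discrete-time state-space model (placeholder).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # input weighting

# Terminal-mode LQR: solve the discrete algebraic Riccati equation and
# form the state-feedback gain K, so u = -K x inside the terminal region.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Closed-loop matrix; for stability all eigenvalues must lie inside the unit circle.
eig = np.linalg.eigvals(A - B @ K)
```

In a dual-mode MPC, the free control moves cover the transient horizon, while u = -Kx governs the terminal mode, with P supplying the infinite-horizon terminal cost.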

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 138
307 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

The effects of erosion and wear on the performance of small caliber guns have been analyzed in numerical and experimental studies, but mainly through qualitative observations; correlations between the volume change of the chamber and the maximum pressure remain limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the different phases of a weapon's life. To this end, an experimental campaign followed by a numerical simulation study is carried out. Two test barrels, 5.56x45mm NATO and 7.62x51mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they are worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method together with a WEIBEL radar is used to measure (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out: a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. Two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of the fired projectiles. The results show good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted: the projectile motions, the dynamic engraving resistances, and the maximum pressures are compared and analyzed.
Finally, using the database thus obtained, a statistical correlation between the muzzle velocity, the maximum pressure, and the chamber volume is established.
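The kind of statistical correlation described can be sketched as a multiple linear regression of muzzle velocity on chamber volume and maximum pressure. All numbers below are synthetic, generated only to illustrate the fitting procedure; they are not measured ballistic data.

```python
import numpy as np

# Synthetic trend: chamber volume V (cm^3) grows with wear while maximum
# pressure P (MPa) and muzzle velocity v (m/s) drop. Illustrative only.
rng = np.random.default_rng(1)
V = np.linspace(1.80, 1.95, 30)
P = 380.0 - 600.0 * (V - 1.80) + rng.normal(0.0, 2.0, 30)
v = 920.0 - 250.0 * (V - 1.80) + 0.05 * (P - 380.0) + rng.normal(0.0, 1.0, 30)

# Least-squares fit of the linear model v ~ b0 + b1*V + b2*P.
X = np.column_stack([np.ones_like(V), V, P])
beta, *_ = np.linalg.lstsq(X, v, rcond=None)

# Coefficient of determination as a simple goodness-of-fit measure.
resid = v - X @ beta
r2 = 1.0 - (resid @ resid) / ((v - v.mean()) @ (v - v.mean()))
```

Since V and P are strongly collinear in a worn barrel, the fitted R² is the more meaningful summary here; individual coefficients should be interpreted with care.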

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 182