Search results for: machine learning tools and techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16924

1834 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures. Studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor. As tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study has looked to quantify the variation of yarn tension throughout weaving and sought to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving, using a hand-held tension meter. Sections of the same yarn were subsequently cut from the loom and tensile tested. A comparison was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage. Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the yarns under greater tension. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) compared to the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the yarns under greater tension. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome from this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage.

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 236
1833 Remediation of Dye Contaminated Wastewater Using N, Pd Co-Doped TiO₂ Photocatalyst Derived from Polyamidoamine Dendrimer G1 as Template

Authors: Sarre Nzaba, Bulelwa Ntsendwana, Bekkie Mamba, Alex Kuvarega

Abstract:

The discharge of azo dyes such as Brilliant Black (BB) into water bodies has carcinogenic and mutagenic effects on humankind and the ecosystem. Conventional water treatment techniques fail to degrade these dyes completely, thereby posing further problems. Advanced oxidation processes (AOPs) are promising technologies for solving the problem. Anatase-type nitrogen-palladium (N, Pd) co-doped TiO₂ photocatalysts were prepared by a modified sol-gel method using amine-terminated polyamidoamine generation 1 (PG1) as a template and source of nitrogen. The resultant photocatalysts were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), UV-Vis diffuse reflectance spectroscopy, photoluminescence spectroscopy (PL), Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy (RS), and thermogravimetric analysis (TGA). The results showed that the calcination atmosphere played an important role in the morphology, crystal structure, spectral absorption, oxygen vacancy concentration, and visible light photocatalytic performance of the catalysts. Anatase phase particles ranging between 9-20 nm were also confirmed by TEM and SEM analysis. The origin of the visible light photocatalytic activity was attributed to both the elemental N and Pd dopants and the existence of oxygen vacancies. Co-doping imparted a shift into the visible region of the solar spectrum. The visible light photocatalytic activity of the samples was investigated by monitoring the photocatalytic degradation of Brilliant Black dye. Co-doped TiO₂ showed greater photocatalytic Brilliant Black degradation efficiency compared to singly doped N-TiO₂ or Pd-TiO₂ under visible light irradiation. The highest reaction rate constant of 3.132 × 10⁻² min⁻¹ was observed for N, Pd co-doped TiO₂ (2% Pd). The results demonstrated that the N, Pd co-doped TiO₂ (2% Pd) sample could completely degrade the dye in 3 h, while the commercial TiO₂ showed the lowest dye degradation efficiency (52.66%).
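
The reported rate constant implies pseudo-first-order kinetics, the model commonly used for photocatalytic dye degradation. A minimal Python sketch below illustrates the implied degradation profile; the exponential decay form and the normalised initial concentration are assumptions for illustration, not details given in the abstract.

import numpy as np

# Pseudo-first-order kinetics commonly used for photocatalytic dye degradation:
#   ln(C0 / C(t)) = k * t   =>   C(t) = C0 * exp(-k * t)
# k is the rate constant reported for N, Pd co-doped TiO2 (2% Pd).
k = 3.132e-2          # min^-1
C0 = 1.0              # normalised initial Brilliant Black concentration (assumption)

t = np.arange(0, 181, 30)            # minutes, 0 to 3 h
C = C0 * np.exp(-k * t)              # remaining dye fraction
degradation = (1 - C / C0) * 100     # percent degraded

for ti, di in zip(t, degradation):
    print(f"t = {ti:3d} min  ->  {di:5.1f} % degraded")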

Keywords: brilliant black, Co-doped TiO₂, polyamidoamine generation 1 (PAMAM G1), photodegradation

Procedia PDF Downloads 178
1832 Hidden Truths of Advertising: An Unspoken Fact in Making Ethical Diffusions

Authors: Mustafa Hyder, Shamaila Burney, Roohi Mumtaz

Abstract:

The aim of this study is to determine the consequences of silent or hidden messages and their effectiveness in deteriorating or altering our ethical norms and values. The study also focuses on the repercussions of subconscious messages and the possibilities of ethical diffusion in our society. The research is based on the questions of which factors motivate advertisers to include subliminal messages, how much these unspoken truths silently affect our ethical values, and what the causes and effects of subliminal messages are in general, together with the level of ethical diffusion and its acceptance. The concept of advertising is to promote and highlight the salient features of the products and services a company offers. Advertising is nowadays the best option to convey related information to consumers so that they are more attracted to the products or services proposed. The other element advertisers concentrate on is the psychological characteristics used to persuade consumer choice. Advertising skills and tactics are used to promote a product in such a way that it creates a sensation, controversy or brand consciousness among consumers or customers. To increase purchases or to gain popularity over their competitors, advertisers sometimes use tactics and techniques that are highly unethical and immoral for any society. This kind of material is embedded very smartly within the ads, so that the subconscious mind catches the meaning of those glittery images, posters, phrases, tag lines and non-verbal cues. This study elucidates subliminal advertising, its repercussions and its impact on consumer behaviour in our society, with the help of a few subliminally embedded ads and the trends of profitability. The methods used to accomplish the research are based on qualitative research, along with research articles, books and feedback from focus groups regarding the topic. A basic finding of this study was that no significant immediate change in behaviour or attitude was observed. These messages have a very short-term life in the viewer's subconscious mind, but in the long run people get used to them; hence they not only have diffusion power but also a high level of acceptance, which is reflected mostly in social behaviours and attitudes.

Keywords: ethical diffusion, subconscious, subliminal advertising, unspoken facts

Procedia PDF Downloads 328
1831 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, economy, and health. However, much research has concentrated on supervised training models within specific domains, and their effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed. By assigning news items to their specific domain in advance, judgments made within the corresponding domain may be more accurate for fake news. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain knowledge discovery framework for fake news detection is proposed. Firstly, to effectively retain the multidomain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract the comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, a test is conducted on existing widely used datasets, and the experimental results demonstrate that this method is able to improve the detection performance for fake news across multiple domains. Moreover, even in datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
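
The abstract does not specify how the domain embeddings are discovered. The minimal Python sketch below only illustrates the general shape of such a pipeline, using an LDA topic model as a stand-in for the unsupervised domain-embedding step and a logistic regression as the classifier; the texts, labels and model choices are all illustrative assumptions.

# Minimal sketch of an unsupervised domain-knowledge pipeline for fake news detection.
# The domain embedding is approximated here with an LDA topic model; the actual
# framework described in the abstract may use a different embedding method.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["stock markets rally after policy change",          # hypothetical news records
         "miracle cure eliminates virus overnight",
         "election results confirmed by commission",
         "secret health trick doctors hate"]
labels = np.array([0, 1, 0, 1])                               # 0 = real, 1 = fake (toy labels)

# 1) Unsupervised, low-dimensional "domain embedding" for each news text.
counts = CountVectorizer().fit_transform(texts)
domain_emb = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

# 2) Text features combined with the discovered domain embedding.
tfidf = TfidfVectorizer().fit_transform(texts)
features = hstack([tfidf, csr_matrix(domain_emb)])

# 3) Classifier decides authenticity from the combined representation.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))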

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 97
1830 Effect of Collection Technique of Blood on Clinical Pathology

Authors: Marwa Elkalla, E. Ali Abdelfadil, Ali. Mohamed. M. Sami, Ali M. Abdel-Monem

Abstract:

To assess the impact of the blood collection technique on clinical pathology markers and to establish reference intervals, a study was performed using normal, healthy C57BL/6 mice. Both sexes were employed, and the animals were randomly assigned to different groups depending on the phlebotomy technique used. The blood was drawn in one of four ways: intracardiac (IC), caudal vena cava (VC), caudal vena cava (VC) plus a peritoneal collection of any extravasated blood, or retroorbital phlebotomy (RO). Several serum biochemistries, including liver function tests, as well as a complete blood count with differentials and a platelet count, were analysed from the blood and serum samples. Red blood cell count, haemoglobin (p > 0.002), hematocrit, alkaline phosphatase, albumin, total protein, and creatinine were all significantly greater in female mice. Platelet counts, specific white blood cell numbers (total, neutrophil, lymphocyte, and eosinophil counts), globulin, amylase, and the BUN/creatinine ratio were all greater in males. The VC approach seemed marginally superior to the IC approach for the characteristics under consideration and was linked to the least variation in both sexes. Transaminase levels showed the greatest variation between study groups. The aspartate aminotransferase (AST) values were linked with decreased fluctuation for the VC approach, but the alanine aminotransferase (ALT) values were similar between the IC and VC groups. There was considerable variability and range in transaminase levels between the MC and RO groups. We found that the RO approach, the only one tested that allowed for repeated sample collection, yielded acceptable ALT readings. The findings show that the test results are significantly affected by the phlebotomy technique and that the VC or IC techniques provide the most reliable data. When organising a study and comparing data to reference ranges, the ranges supplied here by collection method and sex can be utilised to determine the best approach to data collection. The authors suggest establishing norms based on the procedures used by each individual researcher in his or her own lab.

Keywords: clinical, pathology, blood, effect

Procedia PDF Downloads 96
1829 Processing and Characterization of Oxide Dispersion Strengthened (ODS) Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (14YWT) Ferritic Steel

Authors: Farha Mizana Shamsudin, Shahidan Radiman, Yusof Abdullah, Nasri Abdul Hamid

Abstract:

Oxide dispersion strengthened (ODS) ferritic steels are amongst the most promising candidates for large-scale structural materials to be applied in next generation fission and fusion nuclear power reactors. This kind of material is relatively stable at high temperature, possesses remarkable mechanical properties and comparatively good resistance to neutron radiation damage. The superior performance of ODS ferritic steels over their conventional counterparts is attributed to the high number density of nano-sized dispersoids that act as nucleation sites and stable sinks for many small helium bubbles resulting from irradiation, and also as pinning points to dislocation movement and grain growth. ODS ferritic steels are usually produced by powder metallurgical routes involving a mechanical alloying (MA) process of Y₂O₃ and pre-alloyed or elemental metallic powders, which are then consolidated by hot isostatic pressing (HIP) or hot extrusion (HE) techniques. In this study, Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (designated as 14YWT) was produced by a mechanical alloying process followed by the hot isostatic pressing (HIP) technique. The crystal structure and morphology of this sample were identified and characterized using X-ray diffraction (XRD) and field emission scanning electron microscopy (FESEM), respectively. The magnetic measurement of this sample at room temperature was carried out using a vibrating sample magnetometer (VSM). The FESEM micrograph revealed a homogeneous microstructure constituted by fine grains of less than 650 nm in size. Ultra-fine dispersoids of size between 5 nm and 19 nm were observed homogeneously distributed within the BCC matrix. The EDS mapping reveals that the dispersoids contain Y-Ti-O nanoclusters, and from the magnetization curve plotted by VSM, this sample approaches the behavior of soft ferromagnetic materials. In conclusion, ODS Fe-14Cr-3W-0.5Ti-0.3Y₂O₃ (14YWT) ferritic steel was successfully produced by the HIP technique in the present study.

Keywords: hot isostatic pressing, magnetization, microstructure, ODS ferritic steel

Procedia PDF Downloads 320
1828 Cryopreservation of Ring-Necked Pheasant (Phasianus colchicus) Semen for Establishing Cryobank

Authors: Rida Pervaiz, Bushra Allah Rakha, Muhammad Sajjad Ansari, Shamim Akhter, Kainat Waseem, Sumiyyah Zuha, Tooba Javed

Abstract:

The ring-necked pheasant (Phasianus colchicus) belongs to the order Galliformes and family Phasianidae. It has been recognized as the most hunted bird due to its attractive colorful appearance and meat. Loss of habitat and hunting pressure have caused population fluctuations in its native range. Under these circumstances, this species can be conserved by employing ex-situ in vitro conservation techniques. Captive breeding, in combination with semen cryobanking, is the most appropriate option to conserve and propagate this species without deteriorating genetic diversity. Cryopreservation protocols of adequate efficiency are necessary to establish semen cryobanking for a species. Therefore, the present study was designed to devise an efficient extender for the cryopreservation of ring-necked pheasant semen. For this purpose, a range of extenders (Beltsville Poultry, Red Fowl, Lake, EK, Tselutin Poultry and Chicken Semen extenders) was evaluated for cryopreservation of ring-necked pheasant semen. Semen was collected from 10 cocks, diluted in the Beltsville Poultry (BPSE), Red Fowl (RFE), Lake (LE), EK (EKE), Tselutin Poultry (TPE) and Chicken Semen (CSE) extenders, and cryopreserved. Glycerol (10%) was added to the semen at 4°C, equilibrated for 10 min, filled into 0.5 mL French straws, kept over liquid nitrogen vapors for 10 min, cryopreserved in LN2 and stored. Sperm motility (%), viability (%), live/dead ratio (%), plasma membrane integrity (%) and DNA integrity (%) were evaluated at the post-dilution, post-cooling, post-equilibration and post-thawing stages of cryopreservation. Sperm motility (83.8 ± 3.1; 81.3 ± 3.8; 73.8 ± 2.4; 62.5 ± 1.4), viability (79.0 ± 1.7; 75.5 ± 1.6; 69.5 ± 2.3; 65.5 ± 2.4), live/dead ratio (80.5 ± 5.7; 77.3 ± 4.9; 76.0 ± 2.7; 68.3 ± 2.3), plasma membrane integrity (74.5 ± 2.9; 73.8 ± 3.4; 71.3 ± 2.3; 75.0 ± 3.4) and DNA integrity (78.3 ± 1.7; 73.0 ± 1.2; 68.0 ± 2.0; 63.0 ± 2.5) at all four stages of cryopreservation were recorded higher (P < 0.05) in the Red Fowl extender compared to all other experimental extenders. It is concluded that the Red Fowl extender is the best extender for the cryopreservation of ring-necked pheasant semen and can be used in establishing a cryobank for ex situ conservation.

Keywords: ring-necked pheasant, extenders, cryopreservation, semen quality, DNA integrity

Procedia PDF Downloads 140
1827 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression

Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld

Abstract:

Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are: mental space is the primary organizing principle in the mind, and all psychological issues can be treated by first locating and then relocating the conceptualizations involved. Most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). The latter work led to the conclusion that a mental object (image) gains emotional impact when it is placed more centrally, closer and higher in the visual field – and vice versa. Changing the locations of mental objects in space thus alters the (socio-)emotional meaning of the relationships. The experience of depression seems to be consistently associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor. However, clinical practice hints at the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method aims at making dark areas smaller, lighter and more transparent in order to identify the problem or the cause of the depression which lies behind the darkness. It was hypothesized that the darkness is a subjective side-effect of the neurological process of repression. After reducing the dark clouds, the real problem behind the depression becomes more visible, allowing the client to work on it and in that way reduce their feelings of depression. This makes repression of the issue obsolete. Results: Clients could easily get into their 'sadness' when asked to do so, and finding the location of the dark zones proved straightforward as well. In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental spatial approach to depression can be proven to be effective, this would be very good news. The Society of Mental Space Psychology is now looking for sponsoring of an upscaled experiment. Conclusions: For spatial cognition and the research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Beyond its pure scientific interest, this discovery has a clinical implication if darkness can be connected to depression; darkness, then, seems to be more than a metaphorical expression. Progress can be monitored with measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness.

Keywords: depression, spatial cognition, spatial imagery, social panorama

Procedia PDF Downloads 169
1826 Synthesis and Characterization of Polycaprolactone for the Delivery of Rifampicin

Authors: Evelyn Osehontue Uroro, Richard Bright, Jing Yang Quek, Krasimir Vasilev

Abstract:

Bacterial infections have been a challenge in both the public and private sectors. The colonization of bacteria often occurs in medical devices such as catheters, heart valves, respirators, and orthopaedic implants. When biomedical devices are inserted into patients, the deposition of macromolecules such as fibrinogen and immunoglobulin on their surfaces makes them more prone to bacterial colonization, leading to the formation of biofilms. The formation of biofilms on medical devices has led to a series of device-related infections which are usually difficult to eradicate and sometimes cause the death of patients. These infections require surgical replacements along with prolonged antibiotic therapy, which incurs additional health costs. It is, therefore, necessary to prevent device-related infections by inhibiting the formation of biofilms using intelligent technology. Antibiotic resistance of bacteria is also a major threat due to overuse. Different antimicrobial agents have been applied to microbial infections, including conventional antibiotics such as rifampicin. The use of conventional antibiotics like rifampicin has raised concerns, as some have been found to have hepatic and nephrotoxic effects due to overuse. Hence, there is also a need for proper delivery of these antibiotics. Different techniques have been developed to encapsulate and slowly release antimicrobial agents, thus reducing host cytotoxicity. Examples of delivery systems are solid lipid nanoparticles, hydrogels, micelles, and polymeric nanoparticles. The different ways by which drugs are released from polymeric nanoparticles include diffusion-based release, elution-based release, and chemical/stimuli-responsive release. Polymeric nanoparticles have gained considerable research interest as they are typically made from biodegradable polymers. An example of such a biodegradable polymer is polycaprolactone (PCL). PCL degrades slowly by hydrolysis but is often sensitive and responsive to stimuli such as enzymes, releasing encapsulants for antimicrobial therapy. This study presents the synthesis of PCL nanoparticles loaded with rifampicin and the on-demand release of rifampicin for treating Staphylococcus aureus infections.

Keywords: enzyme, Staphylococcus aureus, PCL, rifampicin

Procedia PDF Downloads 126
1825 The Effect of Speech-Shaped Noise and Speaker’s Voice Quality on First-Grade Children’s Speech Perception and Listening Comprehension

Authors: I. Schiller, D. Morsomme, A. Remacle

Abstract:

Children’s ability to process spoken language develops until the late teenage years. At school, where efficient spoken language processing is key to academic achievement, listening conditions are often unfavorable. High background noise and poor teacher’s voice represent typical sources of interference. It can be assumed that these factors particularly affect primary school children, because their language and literacy skills are still low. While it is generally accepted that background noise and impaired voice impede spoken language processing, there is an increasing need for analyzing impacts within specific linguistic areas. Against this background, the aim of the study was to investigate the effect of speech-shaped noise and imitated dysphonic voice on first-grade primary school children’s speech perception and sentence comprehension. Via headphones, 5 to 6-year-old children, recruited within the French-speaking community of Belgium, listened to and performed a minimal-pair discrimination task and a sentence-picture matching task. Stimuli were randomly presented according to four experimental conditions: (1) normal voice / no noise, (2) normal voice / noise, (3) impaired voice / no noise, and (4) impaired voice / noise. The primary outcome measure was task score. How did performance vary with respect to listening condition? Preliminary results will be presented with respect to speech perception and sentence comprehension and carefully interpreted in the light of past findings. This study helps to support our understanding of children’s language processing skills under adverse conditions. Results shall serve as a starting point for probing new measures to optimize children’s learning environment.

Keywords: impaired voice, sentence comprehension, speech perception, speech-shaped noise, spoken language processing

Procedia PDF Downloads 192
1824 Organ Dose Calculator for Fetus Undergoing Computed Tomography

Authors: Choonsik Lee, Les Folio

Abstract:

Pregnant patients may undergo CT in emergencies unrelated to pregnancy, and potential risk to the developing fetus is of concern. It is critical to accurately estimate fetal organ doses in CT scans. We developed a fetal organ dose calculation tool using pregnancy-specific computational phantoms combined with Monte Carlo radiation transport techniques. We adopted a series of pregnancy computational phantoms developed at the University of Florida at the gestational ages of 8, 10, 15, 20, 25, 30, 35, and 38 weeks (Maynard et al. 2011). More than 30 organs and tissues and 20 skeletal sites are defined in each fetus model. We calculated fetal organ doses normalized by CTDIvol to derive organ dose conversion coefficients (mGy/mGy) for the eight fetuses for consecutive slice locations ranging from the top to the bottom of the pregnancy phantoms with 1 cm slice thickness. Organ dose from helical scans was approximated by the summation of doses from multiple axial slices included in the given scan range of interest. We then compared dose conversion coefficients for major fetal organs in the abdominal-pelvis (AP) CT scan of the pregnancy phantoms with the uterine dose of a non-pregnant adult female computational phantom. A comprehensive library of organ conversion coefficients was established for the eight developing fetuses undergoing CT. The coefficients were implemented into an in-house graphical user interface-based computer program for convenient estimation of fetal organ doses by inputting CT technical parameters as well as the age of the fetus. We found that the esophagus received the least dose, whereas the kidneys received the greatest dose in all fetuses in AP scans of the pregnancy phantoms. We also found that when the uterine dose of a non-pregnant adult female phantom is used as a surrogate for fetal organ doses, the root-mean-square error ranged from 0.08 mGy (8 weeks) to 0.38 mGy (38 weeks). The uterine dose was up to 1.7-fold greater than the esophagus dose of the 38-week fetus model. The calculation tool should be useful in cases requiring fetal organ dose in emergency CT scans as well as for patient dose monitoring.
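
A minimal Python sketch of the slice-summation approach described for helical scans: the organ dose is approximated by summing slice-wise dose conversion coefficients over the scan range and scaling by CTDIvol. The coefficients, CTDIvol value and scan range below are hypothetical, not values from the published library.

import numpy as np

# Sketch: organ dose from slice-wise dose conversion coefficients (mGy/mGy),
# following the summation approach described in the abstract. All numbers here
# are hypothetical.
#   dose_organ ~= sum over 1-cm axial slices in the scan range of CC[slice] * CTDIvol
cc_kidneys = np.array([0.02, 0.15, 0.60, 0.85, 0.70, 0.20])    # conversion coefficients per slice
cc_esophagus = np.array([0.01, 0.02, 0.05, 0.06, 0.04, 0.02])

ctdi_vol = 10.0                       # mGy, from the scanner console (assumed)
scan_slices = slice(1, 5)             # slices included in the scan range (assumed)

dose_kidneys = cc_kidneys[scan_slices].sum() * ctdi_vol
dose_esophagus = cc_esophagus[scan_slices].sum() * ctdi_vol
print(f"kidneys:   {dose_kidneys:.1f} mGy")
print(f"esophagus: {dose_esophagus:.1f} mGy")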

Keywords: computed tomography, fetal dose, pregnant women, radiation dose

Procedia PDF Downloads 141
1823 Study of the Montmorillonite Effect on PET/Clay and PEN/Clay Nanocomposites

Authors: F. Zouai, F. Z. Benabid, S. Bouhelal, D. Benachour

Abstract:

Polymer/clay nanocomposites are a relatively important area of research. These reinforced plastics have attracted considerable attention in scientific and industrial fields because a very small amount of clay can significantly improve the properties of the polymer. The polymeric matrices used in this work are two saturated polyesters, i.e., poly(ethylene terephthalate) (PET) and poly(ethylene naphthalate) (PEN). The success of processing compatible blends, based on PET/PEN/clay nanocomposites, in one step by reactive melt extrusion is described. Untreated clay was first purified and functionalized 'in situ' with a compound based on an organic peroxide/sulfur mixture and tetramethylthiuram disulfide as the activator for sulfur. The PET and PEN materials were first separately mixed in the molten state with the functionalized clay. The PET/4 wt% clay and PEN/7.5 wt% clay compositions showed total exfoliation. These compositions, denoted nPET and nPEN, respectively, were used to prepare new n(PET/PEN) nanoblends in the same mixing batch. The n(PET/PEN) nanoblends were compared to neat PET/PEN blends. The blends and nanocomposites were characterized using various techniques, and their microstructural and nanostructural properties were investigated. Fourier transform infrared spectroscopy (FTIR) results showed that the exfoliation of the tetrahedral clay nanolayers is complete and the octahedral structure totally disappears. It was shown that total exfoliation, confirmed by wide angle X-ray scattering (WAXS) measurements, contributes to the enhancement of impact strength and tensile modulus. In addition, WAXS results indicated that all samples are amorphous. The differential scanning calorimetry (DSC) study indicated the occurrence of one glass transition temperature Tg, one crystallization temperature Tc and one melting temperature Tm for every composition. This was evidence that both PET/PEN and nPET/nPEN blends are compatible over the entire range of compositions. In addition, the nPET/nPEN blends showed lower Tc and higher Tm values than the corresponding neat PET/PEN blends. In conclusion, the results obtained indicate that n(PET/PEN) blends differ from the pure ones in nanostructure and physical behavior.

Keywords: blends, exfoliation, XRD, DSC, montmorillonite, nanocomposites, PEN, PET, plastograph, reactive melt-mixing

Procedia PDF Downloads 298
1822 Assessing the Impact of Low Carbon Technology Integration on Electricity Distribution Networks: Advancing towards Local Area Energy Planning

Authors: Javier Sandoval Bustamante, Pardis Sheikhzadeh, Vijayanarasimha Hindupur Pakka

Abstract:

In the pursuit of achieving net-zero carbon emissions, the integration of low carbon technologies (LCTs) into electricity distribution networks is paramount. This paper delves into the critical assessment of how the integration of low carbon technologies, such as heat pumps, electric vehicle chargers, and photovoltaic systems, impacts the infrastructure and operation of electricity distribution networks. The study employs rigorous methodologies, including power flow analysis and headroom analysis, to evaluate the feasibility and implications of integrating these technologies into existing distribution systems. Furthermore, the research utilizes Local Area Energy Planning (LAEP) methodologies to guide local authorities and distribution network operators in formulating effective plans to meet regional and national decarbonization objectives. Geospatial analysis techniques, coupled with building physics and electric energy systems modeling, are employed to develop geographic datasets aimed at informing the deployment of low carbon technologies at the local level. Drawing upon insights from the Local Energy Net Zero Accelerator (LENZA) project, a comprehensive case study illustrates the practical application of these methodologies in assessing the rollout potential of LCTs. The findings not only shed light on the technical feasibility of integrating low carbon technologies but also provide valuable insights into the broader transition towards a sustainable and electrified energy future. This paper contributes to the advancement of knowledge in power electrical engineering by providing empirical evidence and methodologies to support the integration of low carbon technologies into electricity distribution networks. The insights gained are instrumental for policymakers, utility companies, and stakeholders involved in navigating the complex challenges of energy transition and achieving long-term sustainability goals.
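
Headroom analysis, as named in the abstract, essentially compares an asset's rating with the peak demand expected after LCT uptake. A minimal Python sketch is given below; the transformer rating, after-diversity demand figures and uptake numbers are illustrative assumptions, not values from the LENZA project.

# Minimal sketch of a headroom check for a distribution substation after
# low carbon technology (LCT) uptake. All ratings and demand figures are hypothetical.
transformer_rating_kva = 500.0
baseline_peak_kva = 380.0

# Assumed after-diversity demand added per installed unit (illustrative values only).
lct_adds_kva = {"heat_pump": 1.7, "ev_charger_7kW": 1.2, "pv_export": -0.5}
uptake = {"heat_pump": 40, "ev_charger_7kW": 55, "pv_export": 30}

added_kva = sum(lct_adds_kva[t] * n for t, n in uptake.items())
new_peak = baseline_peak_kva + added_kva
headroom = transformer_rating_kva - new_peak

print(f"added LCT demand: {added_kva:.1f} kVA")
print(f"new peak:         {new_peak:.1f} kVA")
print(f"headroom:         {headroom:.1f} kVA "
      f"({'ok' if headroom >= 0 else 'reinforcement or flexibility needed'})")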

Keywords: energy planning, energy systems, digital twins, power flow analysis, headroom analysis

Procedia PDF Downloads 58
1821 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European States. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of South Italy. More specifically, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying or adapting the predefined profiles, and the results of the design are affected positively or negatively without any warning. Data such as occupancy schedules, internal loads and the interaction between people and windows or plant systems represent some of the largest variables during energy modelling and in the interpretation of calibration results. This is mainly due to the adoption of discrete, standardized and conventional schedules, with important consequences for the prediction of energy consumption. The problem is certainly difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This could be a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus the occupant cannot interact with it. More specifically, starting from the adopted schedules, created according to questionnaire responses and which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas demand. Then the different entries of consumption are analyzed, and for the most interesting cases the calibration indices are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy savings and global cost reduction is shown. This parametric study aims to underline the effect of modelling assumptions made during the description of thermal zones on the evaluation of performance indices.
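
A minimal Python sketch of the kind of schedule sensitivity described: a deterministic occupancy profile is perturbed stochastically and the percentage difference in a simple internal-gains proxy is reported. The schedule, gains and noise level are illustrative assumptions, not the calibrated model of the study.

import numpy as np

# Sketch: effect of replacing a deterministic occupancy schedule with stochastic
# variants on a simple internal-gains proxy (people * W/person * hours).
# The schedule and gains are hypothetical, not the calibrated model of the study.
rng = np.random.default_rng(0)

hours = np.arange(24)
deterministic = np.where((hours >= 8) & (hours < 18), 0.8, 0.0)   # fraction of full occupancy
gain_per_person_w = 100.0
n_occupants = 50

reference_wh = (deterministic * n_occupants * gain_per_person_w).sum()

diffs = []
for _ in range(200):                                   # 200 stochastic scenarios
    noise = rng.normal(0.0, 0.15, size=24)             # +/-15% std dev on occupied hours
    stochastic = np.clip(deterministic + noise * (deterministic > 0), 0.0, 1.0)
    scenario_wh = (stochastic * n_occupants * gain_per_person_w).sum()
    diffs.append(100.0 * (scenario_wh - reference_wh) / reference_wh)

print(f"daily internal gains (reference): {reference_wh/1000:.1f} kWh")
print(f"percentage difference: mean {np.mean(diffs):+.1f} %, range "
      f"[{np.min(diffs):+.1f} %, {np.max(diffs):+.1f} %]")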

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 141
1820 A Controlled-Release Nanofertilizer Improves Tomato Growth and Minimizes Nitrogen Consumption

Authors: Mohamed I. D. Helal, Mohamed M. El-Mogy, Hassan A. Khater, Muhammad A. Fathy, Fatma E. Ibrahim, Yuncong C. Li, Zhaohui Tong, Karima F. Abdelgawad

Abstract:

Minimizing the consumption of agrochemicals, particularly nitrogen, is the ultimate goal for achieving sustainable agricultural production with low cost and high economic and environmental returns. The use of biopolymers instead of petroleum-based synthetic polymers for controlled-release fertilizers (CRFs) can significantly improve the sustainability of crop production, since biopolymers are biodegradable and not harmful to soil quality. Lignin is one of the most abundant naturally occurring biopolymers. In this study, controlled-release fertilizers were developed using a biobased nanocomposite of lignin and bentonite clay mineral as a coating material for urea to increase nitrogen use efficiency. Five types of controlled-release urea (CRU) were prepared using two ratios of modified bentonite as well as different preparation techniques. The efficiency of the five controlled-release nano-urea (CRU) fertilizers in improving the growth of tomato plants was studied under field conditions. The CRU was applied to the tomato plants at three N levels representing 100, 50, and 25% of the recommended dose of conventional urea. The results showed that all CRU treatments at the three N levels significantly enhanced plant growth parameters, including plant height, number of leaves, fresh weight, and dry weight, compared to the control. Additionally, most CRU fertilizers increased total yield and fruit characteristics (weight, length, and diameter) compared to the control, and marketable yield was also improved by the CRU fertilizers. Fruit firmness and acidity of the CRU treatments at the 25 and 50% N levels were much higher than those of both the 100% CRU treatment and the control. The vitamin C values of all CRU treatments were lower than the control. Nitrogen uptake efficiencies (NUpE) of the CRU treatments were 47–88%, significantly higher than that of the control (33%). In conclusion, all CRU treatments at an N level of 25% of the recommended dose showed better plant growth, yield, and fruit quality of tomatoes than the conventional fertilizer.

Keywords: nitrogen use efficiency, quality, urea, nano particles, ecofriendly

Procedia PDF Downloads 76
1819 Reducing System Delay to Definitive Care For STEMI Patients, a Simulation of Two Different Strategies in the Brugge Area, Belgium

Authors: E. Steen, B. Dewulf, N. Müller, C. Vandycke, Y. Vandekerckhove

Abstract:

Introduction: The care for a ST-elevation myocardial infarction (STEMI) patient is time-critical. Reperfusion therapy within 90 minutes of initial medical contact is mandatory to improve the outcome. Primary percutaneous coronary intervention (PCI), without previous fibrinolytic treatment, is the preferred reperfusion strategy in patients with STEMI, provided it can be performed within guideline-mandated times. Aim of the study: During a one-year period (January 2013 to December 2013), the files of all consecutive STEMI patients with urgent referral from non-PCI facilities for primary PCI were reviewed. Special attention was given to a subgroup of patients with prior out-of-hospital medical contact generated by the 112 system. In an effort to reduce out-of-hospital system delay to definitive care, a change in pre-hospital 112 dispatch strategies is proposed for these time-critical patients. Actual time recordings were compared with travel-time simulations for two suggested scenarios. The first scenario (SC1) involves the decision by the on-scene ground EMS (GEMS) team to transport the out-of-hospital diagnosed STEMI patient straight to a PCI centre, bypassing the nearest non-PCI hospital. The other strategy (SC2) explored the potential role of helicopter EMS (HEMS), where the on-scene GEMS team requests a PCI-centre-based HEMS team for immediate medical transfer to the PCI centre. Methods and Results: 49 STEMI patients (29.1% of all) were referred to our hospital for emergency PCI by a non-PCI facility. One file was excluded because of insufficient data collection. Within this analysed group of 48 secondary referrals, 21 patients had an out-of-hospital medical contact generated by the 112 system. The other 27 patients presented at the referring emergency department without prior contact with the 112 system. Actual time data from first medical contact to definitive care, as well as the simulated possible gain of time for both suggested strategies, were tabulated. The PCI team was always alerted upon departure from the referring centre, excluding further in-hospital delay. Time simulation tools were similar to those used by the 112 dispatch centre. Conclusion: Our data analysis confirms prolonged reperfusion times in the case of secondary emergency referrals for STEMI patients, even with the use of HEMS. In our setting there was no statistical difference in gain of time between the two suggested strategies, both reducing the secondary-referral-generated delay by about one hour and thereby offering all patients PCI within the guideline-mandated time. However, immediate HEMS activation by the on-scene ground EMS team for transport purposes is preferred, as this ensures a faster availability of the local GEMS team for its community. In case these options are not available and the guideline-mandated times for primary PCI are expected to be exceeded, primary fibrinolysis should be considered in a non-PCI centre.

Keywords: STEMI, system delay, HEMS, emergency medicine

Procedia PDF Downloads 319
1818 Evaluation of Pragmatic Information in an English Textbook: Focus on Requests

Authors: Israa A. Qari

Abstract:

Learning to request in a foreign language is a key ability within pragmatic language teaching. This paper examines how requests are taught in English Unlimited Book 3 (Cambridge University Press), an EFL textbook series employed by King Abdulaziz University in Jeddah, Saudi Arabia, to teach English to advanced foundation-year students. The focus of analysis is the evaluation of the request linguistic strategies present in the textbook, the frequency of use of these strategies, and the contextual information provided on the use of these linguistic forms. The researcher collected all the linguistic forms which constituted the request speech act and divided them into levels employing the CCSARP request coding manual. Findings demonstrated that simple and commonly employed request strategies are introduced. Looking closely at the exercises throughout the chapters, it was noticeable that the book exclusively employed the most direct form of requesting (the imperative) when giving learners instructions: e.g. listen, write, ask, answer, read, look, complete, choose, talk, think, etc. The book also made use of some other request strategies such as 'hedged performatives' and 'query preparatory'. However, it was also found that many strategies were not dealt with in the book, specifically strategies with combined functions (e.g. possibility, ability). On a sociopragmatic level, a strong focus was found on standard situations in which relations between the requester and requestee are clear. In general, contextual information was communicated only implicitly. The textbook did not seem to differentiate between formal and informal request contexts (register), which might consequently impel students to overgeneralize. The paper closes with some recommendations for textbook and curriculum designers. Findings are also contrasted with previous results from a similar body of research on EFL requests.

Keywords: EFL, requests, saudi, speech acts, textbook evaluation

Procedia PDF Downloads 135
1817 An Experiment Research on the Effect of Brain-Break in the Classroom on Elementary School Students’ Selective Attention

Authors: Hui Liu, Xiaozan Wang, Jiarong Zhong, Ziming Shao

Abstract:

Introduction: Related research shows that students do not concentrate on the teacher's speaking in the classroom. The d2 attention test is a time-limited test of selective attention and can be used to evaluate individual selective attention. Purpose: To use the d2 attention test tool to measure the difference between the attention levels of the experimental class and the control class before and after Brain-Breaks, and to explore the effect of Brain-Breaks in the classroom on students' selective attention. Methods: According to the principle of no difference in pre-test data, two classes in the fourth grade of Shenzhen Longhua Central Primary School were selected. After 20 minutes of class in the third period in the morning and the third period in the afternoon, an approximately 3-minute Brain-Break intervention was performed in the experimental class for 10 weeks. The control class continued with normal classes and received no intervention. Before and after the experiment, the d2 attention test tool was used to test the attention levels of the students in the two classes. The paired-sample t-test and independent-sample t-test in SPSS 23.0 were used to test the change in the attention levels of the two classes over the 10 weeks. This article only presents results with significant differences. Results: The independent-sample t-test results showed that after ten weeks of Brain-Breaks, the missed errors (E1, t = -2.165, p = 0.042), concentration performance (CP, t = 1.866, p = 0.05), and the degree of omissions (Epercent, t = -2.375, p = 0.029) in the experimental class showed significant differences compared with the control class. The students' error level decreased and their concentration increased. Conclusions: Adding Brain-Break interventions in the classroom can effectively improve the attention level of fourth-grade primary school students to a certain extent, especially by improving the concentration of attention and decreasing the error rate in tasks. The new sports learning model is worth promoting.
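
The tests named above were run in SPSS 23.0 in the study; they can be reproduced with SciPy. The Python sketch below uses simulated d2-test scores purely for illustration, not the study's data.

from scipy import stats
import numpy as np

# Sketch of the statistical tests described (paired and independent t-tests),
# applied to simulated d2-test scores rather than the actual study data.
rng = np.random.default_rng(1)
exp_pre = rng.normal(150, 20, 30)            # experimental class, pre-test
exp_post = exp_pre + rng.normal(12, 8, 30)   # experimental class, post-test (improvement)
ctrl_post = rng.normal(152, 20, 30)          # control class, post-test

# Paired-sample t-test: within the experimental class, pre vs post.
t_paired, p_paired = stats.ttest_rel(exp_pre, exp_post)

# Independent-sample t-test: experimental vs control at post-test.
t_ind, p_ind = stats.ttest_ind(exp_post, ctrl_post)

print(f"paired:      t = {t_paired:.3f}, p = {p_paired:.3f}")
print(f"independent: t = {t_ind:.3f}, p = {p_ind:.3f}")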

Keywords: cultural class, micromotor, attention, D2 test

Procedia PDF Downloads 132
1816 Experimental Investigation of Nano-Enhanced-PCM-Based Heat Sinks for Passive Thermal Management of Small Satellites

Authors: Billy Moore, Izaiah Smith, Dominic Mckinney, Andrew Cisco, Mehdi Kabir

Abstract:

Phase-change materials (PCMs) are considered one of the most promising substances for passive use in thermal management and storage systems for spacecraft, where it is critical to reduce the overall mass of the onboard thermal storage system while minimizing temperature fluctuations upon drastic changes in the environmental temperature during the orbit stage. This makes the development of effective thermal management systems more challenging, since there is no atmosphere in outer space to take advantage of natural and forced convective heat transfer. A PCM can store or release a tremendous amount of thermal energy within a small volume in the form of latent heat of fusion during the phase-change processes of melting from solid to liquid and, conversely, solidification, during which the temperature remains almost constant. However, existing PCMs exhibit very low thermal conductivity, leading to an undesirable increase in total thermal resistance and, consequently, a slow thermal response time. This often becomes a system bottleneck from the thermal performance perspective. To address the above-mentioned drawback, the present study aims to design and develop various heat sinks featuring nano-structured graphitic foams (i.e., carbon foam), expanded graphite (EG), and open-cell copper foam (OCCF) infiltrated with a conventional paraffin wax PCM with a melting temperature of around 35 °C. This study focuses on the use of passive thermal management techniques to develop efficient heat sinks that maintain the temperature of electronic circuits and battery modules within the thermal safety limit for small spacecraft and satellites, such as the Pumpkin and OPTIMUS battery modules designed for CubeSats with a cross-sectional area of approximately 4˝×4˝. Thermal response times for the various heat sinks are assessed in a vacuum chamber to simulate space conditions.
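
The latent-heat argument can be made concrete with a rough energy estimate. The Python sketch below uses typical literature values for paraffin wax (assumptions, not measurements from this study) to show that the latent term dominates the heat absorbed across the melting point.

# Rough estimate of the heat absorbed by a paraffin PCM heat sink across its melt.
# Property values are typical literature figures for paraffin wax (assumptions,
# not measurements from this study).
m = 0.20             # kg of PCM in the heat sink (assumed)
c_solid = 2.0e3      # J/(kg*K), solid specific heat
c_liquid = 2.2e3     # J/(kg*K), liquid specific heat
latent_heat = 2.0e5  # J/kg, latent heat of fusion
T_start, T_melt, T_end = 20.0, 35.0, 45.0   # deg C

q_sensible_solid = m * c_solid * (T_melt - T_start)
q_latent = m * latent_heat
q_sensible_liquid = m * c_liquid * (T_end - T_melt)
q_total = q_sensible_solid + q_latent + q_sensible_liquid

print(f"sensible (solid):  {q_sensible_solid/1e3:6.1f} kJ")
print(f"latent (melting):  {q_latent/1e3:6.1f} kJ")
print(f"sensible (liquid): {q_sensible_liquid/1e3:6.1f} kJ")
print(f"total absorbed:    {q_total/1e3:6.1f} kJ")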

Keywords: heat sink, porous foams, phase-change material (PCM), spacecraft thermal management

Procedia PDF Downloads 15
1815 Geomorphology and Flood Analysis Using Light Detection and Ranging

Authors: George R. Puno, Eric N. Bruno

Abstract:

The natural landscape of the Philippine archipelago, together with the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and properties. Musimusi is among the rivers which exhibited inundation, particularly at the inhabited floodplain portion of its watershed. During the event, rescue operations and the distribution of relief goods became a problem due to the lack of high-resolution flood maps to aid the local government unit in identifying the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters in the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. The Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation, involving the application of the Hydrologic Engineering Center-River Analysis System (HEC-RAS) and Hydrologic Modelling System (HEC-HMS) models. The digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets such as mean sea level, river cross section, river stage, discharge and rainfall were also used as input parameters. Curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density value of the watershed is low, which indicates that the basin has highly permeable subsoil and thick vegetative cover. The watershed's elongation ratio value of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding. The bifurcation ratio value of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio value (1.20) indicates that the basin is circular in shape, with high runoff discharge and low permeability of the subsoil. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014 was characterized as high intensity and long duration, with a return period of 100 years, and produced 316 m³s⁻¹ of outflow. A portion of the floodplain zone (1.52%) suffered inundation with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction management council in monitoring the affected sites and making more appropriate decisions, so that the cost of rescue operations and relief goods distribution is minimized.
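
The morphometric parameters reported (drainage density, elongation ratio, circularity ratio, bifurcation ratio) follow standard formulas computable from basin geometry. The Python sketch below applies these formulas to hypothetical basin values, not the Musimusi data.

import math

# Standard watershed morphometric indices mentioned in the abstract, computed from
# hypothetical basin geometry (not the Musimusi data).
area_km2 = 85.0               # basin area A
perimeter_km = 48.0           # basin perimeter P
total_stream_length_km = 60.0
basin_length_km = 14.0        # maximum basin length Lb
n_first_order, n_second_order = 12, 5    # stream counts for one order pair

drainage_density = total_stream_length_km / area_km2                          # Dd = L / A
elongation_ratio = (2.0 / basin_length_km) * math.sqrt(area_km2 / math.pi)    # Re
circularity_ratio = 4.0 * math.pi * area_km2 / perimeter_km**2                # Rc = 4*pi*A / P^2
bifurcation_ratio = n_first_order / n_second_order                            # Rb (order 1 vs 2)

print(f"drainage density:  {drainage_density:.2f} km/km^2")
print(f"elongation ratio:  {elongation_ratio:.2f}")
print(f"circularity ratio: {circularity_ratio:.2f}")
print(f"bifurcation ratio: {bifurcation_ratio:.2f}")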

Keywords: flooding, geomorphology, mapping, watershed

Procedia PDF Downloads 230
1814 Understanding the Interactive Nature in Auditory Recognition of Phonological/Grammatical/Semantic Errors at the Sentence Level: An Investigation Based upon Japanese EFL Learners’ Self-Evaluation and Actual Language Performance

Authors: Hirokatsu Kawashima

Abstract:

One important element of teaching and learning listening is intensive listening, such as listening for precise sounds, words, and grammatical and semantic units. Several classroom-based investigations have been conducted to explore the usefulness of auditory recognition of phonological, grammatical and semantic errors in such a context. The current study reports the results of one such investigation, which targeted auditory recognition of phonological, grammatical, and semantic errors at the sentence level. 56 Japanese EFL learners participated in this investigation, in which their recognition performance of phonological, grammatical and semantic errors was measured on a 9-point scale by learners' self-evaluation from the perspective of 1) two types of similar English sounds (vowel and consonant minimal pair words), 2) two types of sentence word order (verb phrase-based and noun phrase-based word orders), and 3) two types of semantic consistency (verb-purpose and verb-place agreements), respectively, while their general listening proficiency was examined using standardized tests. A number of findings have been made about the interactive relationships between the three types of auditory error recognition and general listening proficiency. Analyses based on the OPLS (Orthogonal Projections to Latent Structures) regression model have disclosed, for example, that the three types of auditory error recognition are linked in a non-linear way: the highest explanatory power for general listening proficiency may be attained when quadratic interactions between auditory recognition of errors related to vowel minimal pair words and that of errors related to noun phrase-based word order are included (R²=.33, p=.01).
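
OPLS regression is not part of common open-source libraries such as scikit-learn; the Python sketch below approximates the analysis with ordinary PLS regression plus an explicit quadratic interaction term between the two recognition scores. The simulated scores and coefficients are illustrative assumptions only.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

# Approximate sketch: PLS regression with an explicit interaction term
# (vowel-minimal-pair recognition x noun-phrase word-order recognition).
# Scores are simulated, not the study's data.
rng = np.random.default_rng(2)
n = 56
vowel = rng.uniform(1, 9, n)             # self-rated recognition of vowel minimal pairs
np_order = rng.uniform(1, 9, n)          # self-rated recognition of NP-based word order
listening = 0.3 * vowel + 0.2 * np_order + 0.15 * vowel * np_order + rng.normal(0, 2, n)

X = np.column_stack([vowel, np_order, vowel * np_order])   # include the interaction
model = PLSRegression(n_components=2).fit(X, listening)
pred = model.predict(X).ravel()
print(f"R^2 with interaction term: {r2_score(listening, pred):.2f}")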

Keywords: auditory error recognition, intensive listening, interaction, investigation

Procedia PDF Downloads 513
1813 Identifying the Factors that Influence Water-Use Efficiency in Agriculture: Case Study in a Spanish Semi-Arid Region

Authors: Laura Piedra-Muñoz, Ángeles Godoy-Durán, Emilio Galdeano-Gómez, Juan C. Pérez-Mesa

Abstract:

The current agricultural system in some arid and semi-arid areas is not sustainable in the long term. In southeast Spain, groundwater is the main water source and is overexploited, while alternatives like desalination are still limited. The Water Plan for the Mediterranean Basins 2015-2020 indicates a global deficit of 73.42 hm³ and an overexploitation of the aquifers of 205.58 hm³. In order to solve this serious problem, two major actions can be taken: increasing available water, and/or improving the efficiency of its use. This study focuses on the latter. The main aim of this study is to present the major factors related to water-use efficiency in farming. It focuses on Almería province, southeast Spain, one of the most arid areas of the country, and in particular on family farms as the main direct managers of water use in this zone. Many of these farms are among the most water-efficient in Spanish agriculture, but this efficiency is not generalized throughout the sector. This work conducts a comprehensive assessment of water performance in this area, using on-farm water-use, structural, socio-economic and environmental information. Two statistical techniques are used: descriptive analysis and cluster analysis. Thus, two groups are identified: the least and the most efficient farms regarding water usage. By analyzing both the common characteristics within each group and the differences between the groups with a one-way ANOVA, several conclusions can be reached. The main differences between the two clusters center on the extent to which innovation and new technologies are used in irrigation. The most water-efficient farms are characterized by more educated farmers, a greater degree of innovation, new irrigation technology, specialized production, and awareness of water issues and environmental sustainability. The research shows that better practices and policies can have a substantial impact on achieving a more sustainable and efficient use of water. The findings of this study can be extended to farms in similar arid and semi-arid areas and contribute to fostering appropriate policies to improve the efficiency of water usage in the agricultural sector.
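
A minimal Python sketch of the two statistical techniques named (cluster analysis followed by a one-way ANOVA on between-group differences), applied to simulated farm indicators rather than the survey data.

import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Sketch: cluster farms by water-use indicators, then test group differences
# with a one-way ANOVA. The farm data below are simulated, not the survey data.
rng = np.random.default_rng(3)
water_use_m3_per_t = np.concatenate([rng.normal(35, 5, 60), rng.normal(55, 6, 60)])
innovation_index = np.concatenate([rng.normal(0.7, 0.1, 60), rng.normal(0.4, 0.1, 60)])
X = StandardScaler().fit_transform(np.column_stack([water_use_m3_per_t, innovation_index]))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# One-way ANOVA on water use between the two clusters.
f_stat, p_val = stats.f_oneway(water_use_m3_per_t[labels == 0],
                               water_use_m3_per_t[labels == 1])
print(f"cluster sizes: {np.bincount(labels)}")
print(f"one-way ANOVA on water use: F = {f_stat:.1f}, p = {p_val:.3g}")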

Keywords: cluster analysis, family farms, Spain, water-use efficiency

Procedia PDF Downloads 288
1812 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (as small as tens of microns) with a time scale which spans from milliseconds to hours. The MIMO feature of the system makes it capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in the 3D space, is projected onto a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performing processing unit allows the observed scene to be sensed with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimation of each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
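
In repeat-pass interferometry, the line-of-sight displacement of each pixel follows from the interferometric phase difference between acquisitions, d_LOS = -(λ/4π)·Δφ. The Python sketch below illustrates this relation; the carrier frequency, sign convention and phase values are illustrative assumptions, not parameters of the described sensor.

import numpy as np

# Sketch of repeat-pass interferometric displacement estimation: the line-of-sight
# displacement between two acquisitions follows from the interferometric phase,
#   d_LOS = -(lambda / (4 * pi)) * delta_phi   (sign depends on convention)
# The 17 GHz carrier and the phase values below are illustrative assumptions.
c = 3.0e8                      # m/s
f0 = 17.0e9                    # Hz, assumed radar carrier
wavelength = c / f0            # ~17.6 mm

phase_pass_1 = np.array([0.10, 0.40, -0.20])    # rad, pixel phases, acquisition 1
phase_pass_2 = np.array([0.12, 0.55, -0.05])    # rad, same pixels, acquisition 2

delta_phi = np.angle(np.exp(1j * (phase_pass_2 - phase_pass_1)))   # wrapped difference
d_los_um = -(wavelength / (4 * np.pi)) * delta_phi * 1e6           # micrometres

for i, d in enumerate(d_los_um):
    print(f"pixel {i}: LOS displacement = {d:+.1f} um")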

Keywords: interferometry, MIMO RADAR, SAR, tomography

Procedia PDF Downloads 195
1811 Development of Intervention Policy Options for Sustainable Fisheries Management of Lake Hawassa, Ethiopia

Authors: Mekonen Hailu, Gashaw Tesfaye, Adamneh Dagne, Hiwot Teshome

Abstract:

Lake Hawassa is one of the most important lakes for the Ethiopian fishery. It serves as a source of food and nutrition, income and livelihood for many inhabitants. However, the fishery in Lake Hawassa shows a declining trend, especially for the most valuable species, such as the Nile tilapia (Oreochromis niloticus L.), indicating that the existing management systems are either not fully enforced or inadequate. The aim of this study was therefore to develop management policy options for the sustainable utilization and management of the fishery resources of Lake Hawassa. A blend of primary and secondary data was used for the study. Primary data were collected using Participatory Rural Appraisal (PRA) techniques, such as focus group discussions with members of fishing co-operatives and co-operative leaders and key informant discussions, to understand the current state of the fisheries resources. The literature was then reviewed to obtain secondary data and develop alternative management policy options. Lake Hawassa is not very species-rich in terms of fish diversity. It contains only six species belonging to four families, of which only three are commercially important: the Nile tilapia (90 % of catches), the African catfish Clarias gariepinus B. (7 % of catches) and the African large barb Labeobarbus intermedius R. (only 3 % of catches). Production has been declining since 2007. The top six challenges that could be responsible for this decline, identified by about two-thirds of respondents and supported by the literature review, are directly linked to fisheries and fisheries management, with overfishing, an irregular monitoring, control and surveillance (MCS) system, and the lack of a fishing licensing system ranking first, second and third respectively. It is, therefore, important to address these and other problems identified in the study. Of the management options analyzed, we suggest adapting the management approach to sustain the fishery in Lake Hawassa and its socio-economic benefits. We also present important conditions for successfully implementing co-management in this and other lakes in Ethiopia.

Keywords: co-management, community-based management, fishery, overfishing, participatory approach, top-down management

Procedia PDF Downloads 10
1810 The Cost of Healthcare among Malaysian Community-Dwelling Elderly with Dementia

Authors: Roshanim Koris, Norashidah Mohamed Nor, Sharifah Azizah Haron, Normaz Wana Ismail, Syed Mohamed Aljunid Syed Junid, Amrizal Muhammad Nur, Asrul Akmal Shafie, Suraya Yusuff, Namaitijiang Maimaiti

Abstract:

An ageing population has huge implications for virtually every aspect of Malaysian society. The elderly consume a greater volume of healthcare not because they are older, but because they are sick. Chronic comorbidities and the deterioration of cognitive ability cause the health of the elderly to worsen. This study aims to provide a comprehensive estimate of the direct and indirect costs of healthcare used by a nationally representative sample of community-dwelling elderly with dementia, as well as the determinants of healthcare cost. A survey using multi-stage random sampling techniques recruited a final sample of 2274 elderly people (60 years and above) in the states of Johor, Perak, Selangor and Kelantan. The Mini Mental State Examination (MMSE) score was used to measure cognitive capability among the elderly. Only the elderly with a score of less than 19 marks were selected for further analysis and classified as having dementia. Findings from a two-part model indicate that household income and education level are variables that strongly and significantly influence healthcare cost among elderly with dementia. The number of visits and admissions also significantly affects healthcare expenditure. The comorbidity with the greatest influence on healthcare cost is cancer, and seeking treatment in private facilities also significantly affects the healthcare cost of elderly people with dementia. The level of dementia severity is not significant in determining the cost. This study is expected to attract the government's attention and act as a wake-up call to be more concerned about the elderly who are at high risk of having chronic comorbidities and cognitive problems, by providing more appropriate health and social care facilities. Comorbidities are among the factors that can lead to dementia among the elderly. It is hoped that this study will promote dementia as a priority issue in public health and social care in Malaysia.
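
As an illustration of the two-part model mentioned in the abstract, the following Python sketch fits a logistic regression for the probability of incurring any healthcare cost and a Gamma GLM with log link on the positive costs, then combines the two to predict expected expenditure. The dataset and column names (household_income, education_years, n_visits) are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch of a two-part healthcare cost model: part one models the
# probability of any spending, part two models the level of spending among
# those with positive costs. File and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("elderly_dementia_survey.csv")       # hypothetical survey extract
df["any_cost"] = (df["healthcare_cost"] > 0).astype(int)

# Part 1: logistic regression for the probability of any healthcare expenditure
part1 = smf.logit("any_cost ~ household_income + education_years + n_visits",
                  data=df).fit()

# Part 2: Gamma GLM with log link, fitted on positive costs only
positive = df[df["healthcare_cost"] > 0]
part2 = smf.glm("healthcare_cost ~ household_income + education_years + n_visits",
                data=positive,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Expected cost = P(cost > 0) * E[cost | cost > 0]
expected_cost = part1.predict(df) * part2.predict(df)
print(expected_cost.describe())
```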

Keywords: ageing population, dementia, elderly, healthcare cost, healthcare utilization

Procedia PDF Downloads 206
1809 Aerodynamic Design Optimization Technique for a Tube Capsule That Uses an Axial Flow Air Compressor and an Aerostatic Bearing

Authors: Ahmed E. Hodaib, Muhammed A. Hashem

Abstract:

High-speed transportation has become a growing area of interest. To increase high-speed efficiency and minimize the power consumption of a vehicle, friction with the ground must be eliminated and the aerodynamic drag acting on the vehicle minimized. Due to the complexity and high power requirements of electromagnetic levitation, we make use of the air in front of the capsule, which produces the majority of the drag, compressing it in two stages and injecting a proportion of it through small nozzles to form a high-pressure air cushion that levitates the capsule. The tube is partially evacuated so that the air pressure is optimized for maximum compressor effectiveness, optimum tube size and minimum vacuum pump power consumption. The total relative mass flow rate of the tube air is divided into two fractions. One is by-passed to flow over the capsule body, ensuring that no choked flow takes place. The other fraction is ingested by the compressor, where it is diffused to decrease the Mach number (to around 0.8) to suit the compressor inlet. The air is then compressed and intercooled, then split. One fraction is expanded through a tail nozzle to contribute to generating thrust. The other is compressed again. Bleed from the two compressors is used to maintain a constant air pressure in an air tank. The air tank supplies air for levitation. Dividing the total mass flow rate increases the achievable speed (Kantrowitz limit), and compressing it decreases the blockage of the capsule. As a result, the aerodynamic drag on the capsule decreases. As the tube pressure decreases, the drag decreases and the capsule power requirements decrease; however, the vacuum pump consumes more power. That is why design optimization techniques are to be used to obtain the optimum values for all the design variables given specific design inputs. Aerodynamic shape optimization, capsule and tube sizing, compressor design, diffuser and nozzle expander design, and the effect of the air bearing on the aerodynamics of the capsule are to be considered. The variation of these variables is to be studied with respect to changes in capsule velocity and tube air pressure.
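
As a rough, simplified illustration of the Kantrowitz limit mentioned above, the following Python sketch uses the isentropic area-Mach relation to estimate the minimum bypass area fraction (and hence the maximum capsule blockage) that avoids choked flow at a given capsule Mach number. It neglects the normal-shock recovery term of the full Kantrowitz relation and the effect of the on-board compressor, so the numbers are indicative only.

```python
# Simplified, isentropic estimate related to the Kantrowitz limit: the bypass
# flow around the capsule chokes when the free area around the capsule falls
# below the sonic-throat area A* for the tube flow at Mach M.
import math

GAMMA = 1.4   # ratio of specific heats for air

def sonic_area_ratio(mach: float, gamma: float = GAMMA) -> float:
    """A*/A from the isentropic area-Mach relation: the fraction of the tube
    cross-section that must remain open around the capsule in this simple model."""
    exponent = (gamma + 1.0) / (2.0 * (gamma - 1.0))
    return mach * (((gamma + 1.0) / 2.0)
                   / (1.0 + (gamma - 1.0) / 2.0 * mach**2)) ** exponent

for m in (0.3, 0.5, 0.8):
    min_bypass = sonic_area_ratio(m)
    print(f"Mach {m}: bypass area > ~{min_bypass:.2f} of tube area "
          f"(max capsule blockage ~{1.0 - min_bypass:.2f})")
```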

Keywords: tube-capsule, hyperloop, aerodynamic design optimization, air compressor, air bearing

Procedia PDF Downloads 330
1808 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of giving computers the ability to see is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the acquisition, processing and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning and recognizing objects captured by one or more cameras. Companies also use assembly lines formed by conveyor systems with actuators that move pieces from one location to another during production. These devices must be programmed in advance for good performance and must follow a programmed logic routine. Nowadays, production, quality and the fast completion of the different stages and processes in the production chain of any product or service are the main targets of every industry. The principal aim of this project is to program a computer to recognize geometric figures (circle, square and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that sort the figures into cubicles, which are also distinguished from one another by color. Since the project is based on artificial vision, the methodology must be strict; it is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator linked with the OpenCV libraries; together, these tools are used to build the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or by a specialized camera. 1.3 RGB colors are recognized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green and blue. 1.4 To detect shapes it is necessary to segment the images: the first step is converting the image from RGB to grayscale, to work with the dark tones of the image; the image is then binarized, leaving the figure in white on a black background; finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: the OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics that can then be processed. With the program developed for this project, any assembly line of this type can be optimized, because images of the environment can be obtained and the process becomes more accurate.
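
As an illustration of steps 1.3-1.4 above, the following minimal sketch uses the Python bindings of OpenCV to binarize an image, count contour edges to classify the figure, and estimate its dominant color. The project itself links OpenCV from Qt Creator (C++); the file name and thresholds here are illustrative assumptions, and OpenCV 4.x is assumed.

```python
# Minimal sketch of the segmentation and shape/color classification pipeline
# described in steps 1.3-1.4. File name and thresholds are hypothetical.
import cv2
import numpy as np

image = cv2.imread("frame_from_camera.png")           # hypothetical captured frame

# Step 1.4: grayscale -> binarize -> find contours
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for contour in contours:
    # Approximate the contour with a polygon and count its edges
    approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
    if len(approx) == 3:
        shape = "triangle"
    elif len(approx) == 4:
        shape = "square"
    else:
        shape = "circle"                               # many edges -> round figure

    # Step 1.3 (simplified): dominant primary color inside the contour
    mask = np.zeros_like(gray)
    cv2.drawContours(mask, [contour], -1, 255, -1)
    b, g, r, _ = cv2.mean(image, mask=mask)            # image is BGR
    colour = max((("red", r), ("green", g), ("blue", b)), key=lambda t: t[1])[0]
    print(shape, colour)
```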

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 378
1807 Slope Stabilisation of Highly Fractured Geological Strata Consisting of Mica Schist Layers While Construction of Tunnel Shaft

Authors: Saurabh Sharma

Abstract:

Introduction: The case study deals with the ground stabilisation of Nabi Karim Metro Station in Delhi, India, where an extremely complex geology was encountered while excavating the tunnelling shaft for launching a Tunnel Boring Machine. The borelog investigation and the Seismic Refraction Technique (SRT) indicated the presence of an extremely hard rock mass from a depth of only 3-4 m, and accordingly, the Geotechnical Interpretation Report (GIR) concluded the presence of Grade-IV rock from 3 m onwards and of Grade-III and better rock from 5-6 m onwards. Accordingly, it was planned to retain the ground by providing secant piles all around the launching shaft and then excavating the shaft vertically after leaving a berm of 1.5 m to prevent the secant piles from being exposed. To retain the side slopes, rock bolting with shotcreting and wire meshing was proposed, which is normal practice in such strata. However, as the depth of excavation increased, the rock quality kept decreasing at an unexpected and surprising pace, with the Grade-III rock mass at 5-6 m giving way to a conglomerate formation at a depth of 15 m. Such a worsening of geology from high-grade rock to a slushy conglomerate formation could not have been predicted and came as a surprise even to the best geotechnical engineers. Since the excavation had already been cut vertically to maintain the shaft size, execution continued with enhanced caution to stabilise the side slopes. However, when the shaft work was about to finish, a collapse occurred on one side of the excavation shaft. This collapse was unexpected, since all measures to stabilise the side slopes had been taken after face mapping, and the grid size, diameter and depth of the rock bolts had already been readjusted to accommodate the rock fractures. The scenario was baffling even to the best geologists and geotechnical engineers, and it was decided that any further slope stabilisation scheme would have to be designed to ensure safe completion of the works. Accordingly, the following revisions to the excavation scheme were made: the excavation would be carried out while maintaining a slope based on the type of soil/rock; the rock bolt type was changed from SN rock bolts to self-drilling anchors; the grid size of the bolts was adjusted based on real-time assessment; the excavation was carried out by implementing a 'Bench Release Approach'; and an aggressive real-time instrumentation scheme was adopted. Discussion: The case study again asserts the vital importance of correct interpretation of the geological strata and the need for real-time revision of construction schemes based on actual site data. The excavation is successfully being carried out with the revised scheme, and further details of the revised slope stabilisation scheme, the instrumentation scheme and the monitoring results, along with actual site photographs, will form part of the final paper.

Keywords: unconfined compressive strength (UCS), rock mass rating (RMR), rock bolts, self-drilling anchors, face mapping of rock, secant pile, shotcrete

Procedia PDF Downloads 66
1806 Experimental Investigation of the Impact of Biosurfactants on Residual-Oil Recovery

Authors: S. V. Ukwungwu, A. J. Abbas, G. G. Nasr

Abstract:

The increasing price of natural gas and oil, with the attendant increase in energy demand on world markets in recent years, has stimulated interest across the globe in recovering residual oil saturation. In order to support energy security, efforts have been made to develop new technologies for enhancing the recovery of oil and gas, utilizing techniques such as CO2 flooding, water injection, hydraulic fracturing and surfactant flooding. Surfactant flooding optimizes production but poses a risk to the environment due to the toxic nature of chemical surfactants. Building on previous work that has utilized various types of bacteria to produce biosurfactants for enhancing oil recovery, this research uses a technique that combines biosurfactants to achieve enhanced oil recovery (EOR) by lowering interfacial tension and contact angle. In this study, three biosurfactants were produced from three Bacillus species from freeze-dried cultures using 3 % (w/v) sucrose as the carbon source. Two of the produced biosurfactants were screened with the TEMCO Pendant Drop Image Analysis system for reduction in interfacial tension (IFT) and contact angle. Interfacial tension was greatly reduced, from 56.95 mN.m-1 to 1.41 mN.m-1, when the biosurfactant in the cell-free culture of Bacillus licheniformis was used, compared to 4.83 mN.m-1 for the cell-free culture of Bacillus subtilis. As a result, the cell-free culture of Bacillus licheniformis shifted the wettability towards more water-wet conditions, with the contact angle decreasing from 130.75° to 65.17°. The influence of microbial treatment on crushed rock samples was also observed through qualitative wettability experiments. Samples treated with biosurfactants remained in the aqueous phase, indicating a water-wet system. These results indicate that biosurfactants can effectively change the chemistry of the wetting conditions on diverse surfaces, providing desirable conditions for efficient oil transport and in this way serving as a mechanism for EOR. The environmentally friendly nature of biosurfactants gives their industrial application important advantages over chemically synthesized surfactants, including varied possible structures, low toxicity and biodegradability.

Keywords: bacillus, biosurfactant, enhanced oil recovery, residual oil, wettability

Procedia PDF Downloads 279
1805 Chemical Characterization, Crystallography and Acute Toxicity Evaluation of Two Boronic-Carbohydrate Adducts

Authors: Héctor González Espinosa, Ricardo Ivan Cordova Chávez, Alejandra Contreras Ramos, Itzia Irene Padilla Martínez, José Guadalupe Trujillo Ferrara, Marvin Antonio Soriano Ursúa

Abstract:

Boronic acids are able to form diester bonds with carbohydrates because of their hydroxyl groups; in nature, there are some organoborates with these characteristics, such as calcium fructoborate, formed by the union of two fructose molecules and a boron atom and synthesized by plants. In addition, it has been observed that, in animal cells, only compounds with cis-diol functional groups are capable of binding boric or boronic acids. The formation of these organoboron compounds can modify the physical and chemical properties of the precursors, and even their acute toxicity. In this project, two carbohydrate-derived boron-containing compounds, obtained from D-fructose and D-arabinose with phenylboronic acid, are analyzed by different spectroscopic techniques, such as Raman, Fourier-transform infrared (FT-IR) and nuclear magnetic resonance (NMR) spectroscopy and X-ray diffraction crystallography, to describe their chemical characteristics. An acute toxicity test was also performed to determine their LD50 using Lorke's method. The formation of the adducts through the generation of diester bonds with the β-D-pyranose forms of fructose and arabinose was confirmed by multiple spectra. The most prominent findings were the presence of signals corresponding to the formation of new bonds, such as B-O stretching, and the absence of signals from functional groups such as the hydroxyls present in the reagents used for the synthesis of the adducts. The NMR spectra yielded information about the stereoselectivity of the synthesis reaction, observed through the interaction of the protons with their vicinal atoms at the anomeric and second-position carbons, as well as the absence of a racemic mixture, indicated by a single signal in the anomeric-carbon region of the 13C NMR spectra of both adducts. The acute toxicity tests by Lorke's method showed that the LD50 value for both compounds is 1265 mg/kg. These results lead us to propose these adducts as highly safe agents for further biological evaluation for medical purposes.

Keywords: acute toxicity, adduct, boron, carbohydrate, diester bond

Procedia PDF Downloads 65