Search results for: Extended arithmetic precision.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2268

198 CsPbBr₃@MOF-5-Based Single Drop Microextraction for in-situ Fluorescence Colorimetric Detection of Dechlorination Reaction

Authors: Yanxue Shang, Jingbin Zeng

Abstract:

Chlorobenzene homologues (CBHs) are a category of environmental pollutants that cannot be ignored: they persist in the environment for long periods and are potentially carcinogenic. The traditional approach to monitoring the degradation of CBHs is dechlorination followed by sample preparation and analysis. This is not only time-consuming and laborious, but the detection and analysis steps also depend on large-scale instruments, so rapid, low-cost detection cannot be achieved. Compared with traditional sensing methods, colorimetric sensing is simpler and more convenient. In recent years, chromaticity sensors based on fluorescence have attracted increasing attention: compared with sensing based on changes in fluorescence intensity, changes in color gradient are easier to recognize with the naked eye. Accordingly, this work proposes single drop microextraction (SDME) technology to solve the above problems. After the dechlorination reaction is complete, the organic droplet extracts Cl⁻ and realizes fluorescence colorimetric sensing at the same time. The method integrates sample processing with visual in-situ detection, simplifying the detection process. As the fluorescence colorimetric sensing material, CsPbBr₃ was encapsulated in MOF-5 to construct a CsPbBr₃@MOF-5 composite, and the sensor was assembled by dispersing the composite in the SDME organic droplet. When the Br⁻ in CsPbBr₃ exchanges with the Cl⁻ produced by the dechlorination reaction, the perovskite is converted into CsPbCl₃, and the fluorescence of the SDME droplet changes from green to blue emission, enabling visual observation. Therein, SDME enriches and concentrates Cl⁻, replacing sample pretreatment, while the fluorescence color change of CsPbBr₃@MOF-5 replaces instrument-based detection to achieve real-time, rapid measurement.
Owing to its adsorption ability, MOF-5 not only improves the stability of CsPbBr₃ but also promotes the adsorption of Cl⁻, thereby accelerating the Br⁻/Cl⁻ exchange in CsPbBr₃ and the detection of Cl⁻. The adsorption process was verified by density functional theory (DFT) calculations. The method exhibits excellent linearity for Cl⁻ in the range of 10⁻⁶ - 10⁻² M (1 μM - 10000 μM) with a limit of detection of 10⁻⁷ M. The dechlorination reactions of different kinds of CBHs were then also monitored with this method, all with satisfactory detection ability. The accuracy was verified against gas chromatography (GC), which showed that the SDME method developed in this work has high credibility. In summary, this in-situ visualization method for monitoring dechlorination reactions combines sample processing with fluorescence colorimetric sensing. The strategy therefore represents a promising route to the visual detection of dechlorination reactions and can be extended to applications in the environmental, chemical, and food industries.
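The reported linear range lends itself to a log-linear calibration, a minimal sketch of which follows. The intensity-ratio values are purely hypothetical (the abstract reports only the 10⁻⁶-10⁻² M range and the 10⁻⁷ M detection limit); the sketch assumes the blue/green emission ratio varies linearly with log₁₀ of the Cl⁻ concentration, as is typical for wide-range colorimetric sensors.

```python
import math

# Hypothetical calibration standards and responses (NOT the authors' data):
conc_M = [1e-6, 1e-5, 1e-4, 1e-3, 1e-2]   # Cl- standards (mol/L)
ratio = [0.10, 0.28, 0.46, 0.64, 0.82]    # assumed blue/green intensity ratios

# Ordinary least-squares fit of ratio vs log10(concentration).
x = [math.log10(c) for c in conc_M]
n = len(x)
x_mean, y_mean = sum(x) / n, sum(ratio) / n
slope = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, ratio)) / \
        sum((xi - x_mean) ** 2 for xi in x)
intercept = y_mean - slope * x_mean

def chloride_concentration(r):
    """Invert the calibration: intensity ratio -> Cl- concentration (M)."""
    return 10 ** ((r - intercept) / slope)

print(f"slope = {slope:.3f} per decade, intercept = {intercept:.3f}")
print(f"ratio 0.46 -> {chloride_concentration(0.46):.1e} M")
```

The inversion step is how an unknown droplet reading would be converted back to a concentration once a real calibration has been measured.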

Keywords: chlorobenzene homologues, colorimetric sensor, metal halide perovskite, metal-organic frameworks, single drop microextraction

Procedia PDF Downloads 144
197 Denitrification of Diesel Hydrocarbons Using a Triethanolamine-Glycerol Deep Eutectic Solvent

Authors: Hocine Sifaoui

Abstract:

The manufacture and marketing of gasoline and diesel free of aromatic compounds, particularly nitrogen and sulfur heteroaromatics, is a main objective of researchers and the petrochemical industry in meeting environmental protection requirements. As part of this line of research, a triethanolamine/glycerol (TEoA:Gly) deep eutectic solvent (DES) was used to remove two model nitrogen compounds, pyridine and quinoline, from n-decane. Two liquid-liquid equilibrium systems {n-decane + pyridine/quinoline + DES} were measured experimentally at 298.15 K and 1.01 bar using the equilibrium cell method. This study aims to evaluate the potential of this DES as a sustainable alternative to organic solvents for the denitrogenation of petroleum feedstocks by liquid-liquid extraction. The DES was prepared by the heating method: accurately weighed triethanolamine, the hydrogen bond acceptor (HBA), and glycerol, the hydrogen bond donor (HBD), were placed in a round-bottomed flask. An Ohaus Adventurer balance with a precision of ±0.0001 g was used for weighing the HBA and HBD. The mixture was then stirred and heated at 343.15 K under atmospheric pressure using a rotary evaporator; preparation was complete when a clear, homogeneous liquid was obtained. To evaluate the equilibrium behaviour of the pseudo-ternary systems {n-decane + pyridine or quinoline + DES}, mixtures were prepared with the nitrogen compound (pyridine or quinoline) at varying mass percentages in n-decane, with a fixed 2:1 ratio between the n-decane and DES phases. Defined amounts of the three components were precisely weighed to place the mixtures within the biphasic region, stirred vigorously at 400 rpm with an Avantor VWR KS 4000 shaker for 4 hours at 298.15 K, and left to settle overnight to reach thermodynamic equilibrium, evidenced by phase separation.
Aliquots from the n-decane-rich upper phase and the DES-rich lower phase were carefully weighed, and the mass of each sample was recorded for quantification by gas chromatography (GC). The DES content was calculated by mass balance after analysing the composition of the other species (n-decane and pyridine or quinoline); all samples were diluted with pure ethanol before GC analysis. Distribution ratios and selectivities toward pyridine and quinoline were also determined at the same phase ratios. The consistency and reliability of the experimental data were verified by the Othmer-Tobias and Bachman correlations. The results show that the highest distribution ratio, D = 7.08, was obtained for pyridine extraction, and the highest selectivity, S = 801.4, for quinoline extraction. The experimental liquid-liquid equilibrium data of these ternary systems were correlated using the Non-Random Two-Liquid (NRTL) and COnductor-like Screening MOdel for Real Solvents (COSMO-RS) models, both of which agreed well with the experimental data for the two systems. The performance of this DES was compared with those of ionic liquids and organic solvents reported in the literature.
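The distribution ratio and selectivity quoted above follow the standard liquid-liquid extraction definitions. The sketch below uses assumed phase compositions (not the measured LLE data) to show how D and S are computed from the extract (DES-rich) and raffinate (n-decane-rich) phases.

```python
# Assumed solute and solvent fractions in the two phases (hypothetical values):
# E = DES-rich extract phase, R = n-decane-rich raffinate phase.
x_pyridine_E, x_pyridine_R = 0.070, 0.010   # nitrogen compound in each phase
x_decane_E, x_decane_R = 0.002, 0.950       # n-decane in each phase

# Distribution ratio: how strongly the solute partitions into the extract.
D = x_pyridine_E / x_pyridine_R

# Selectivity: solute distribution relative to the diluent's distribution.
S = D / (x_decane_E / x_decane_R)

print(f"D = {D:.2f}, S = {S:.1f}")
```

A high S means the DES pulls out the nitrogen compound while leaving the n-decane behind, which is exactly the property the denitrogenation study evaluates.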

Keywords: pyridine, quinoline, n-decane, deep eutectic solvent

Procedia PDF Downloads 6
196 Globalization of Pesticide Technology and Sustainable Agriculture

Authors: Gagandeep Kaur

Abstract:

The pesticide industry is a major supplier of agricultural inputs. Pesticides control weeds, fungal diseases, and other causes of yield losses in agricultural production. In agribusiness and the agrichemical industry, globalization of markets, competition, and innovation are the dominant trends, yet innovation in the agrichemical industry is limited by its tradition of raising the productivity of agro-systems through generic, universally applicable technologies. The marketing of agricultural technology must contend with various trends, such as locally organized forces that envision a regionalized, sustainable agriculture in the future. Agricultural production has changed dramatically over the past century. Before World War II, agricultural production was characterized by low capital input, high labor, mixed farming, and low yields. Although mineral fertilizers were already applied in the second half of the 19th century, most crops were constrained by local climatic, geological, and ecological conditions. After World War II, in the period of reconstruction, political and socioeconomic pressure changed the nature of agricultural production: food security at low prices for a growing population, and farmer incomes at acceptable levels, became political priorities. The current European Common Agricultural Policy aims to reduce overproduction, liberalize world trade, and protect landscapes and natural habitats. Farmers have to raise the quality of their production and control costs because of increased competition from the world market. Pesticides should be more effective at lower application doses, less toxic, and pose no threat to groundwater. A major debate is taking place about how, and whether, to mitigate the intensive use of pesticides; it is a debate about the future of agriculture, namely sustainable agriculture.
Sustainable agriculture is possible by moving away from conventional agriculture, which is characterized by high inputs, high yields, and wide-ranging pesticide use in crop production. The move away from conventional agriculture can proceed through the gradual adoption of less disturbing and less polluting practices at the level of the cropping system. A healthy environment for future crop production requires maintaining its chemical, physical, and biological properties, and minimizing the emission of volatile compounds into the atmosphere. Companies limit themselves to a particular interpretation of sustainable development, characterized by technological optimism and production maximization. The main objective of the paper is therefore to present the trends in the pesticide industry and in agricultural production in the era of globalization; the second objective is to analyze sustainable agriculture. Pesticide companies appear to have identified biotechnology as a promising alternative and supplement to the conventional business of selling pesticides, and the agricultural sector is transforming its conventional mode of operation: some experts advise farmers to move towards precision farming, others to engage in organic farming. The methodology of the paper is historical and analytical, using both primary and secondary sources.

Keywords: globalization, pesticides, sustainable development, organic farming

Procedia PDF Downloads 99
195 From Talk to Action: Tackling Africa’s Pollution and Climate Change Problem

Authors: Ngabirano Levis

Abstract:

Air pollution remains one of Africa’s major environmental challenges. In 2017, UNICEF estimated that over 400,000 children in Africa died as a result of indoor pollution, while 350 million children remain exposed to indoor pollution risks from the use of biomass and wood burning for cooking. Over time, the major causes of mortality across Africa are indeed shifting from unsafe water, poor sanitation, and malnutrition to ambient and household indoor pollution, and greenhouse gas (GHG) emissions remain a key factor in this. In addition, OECD studies estimated the economic cost of premature deaths due to Ambient Particulate Matter Pollution (APMP) and Household Air Pollution across Africa in 2013 at about 215 billion and 232 billion US dollars, respectively. This is a huge cost for a continent where over 41% of the Sub-Saharan population lives on less than 1.90 US dollars a day, and it leaves people extremely vulnerable to the negative effects of climate change and environmental degradation. Such impacts have led to extended droughts, flooding, health complications, and reduced crop yields, hence food insecurity. Climate change therefore threatens global targets such as poverty reduction, health, and famine relief. Despite mitigation efforts, contributors to air pollution such as carbon dioxide emissions are on a generally upward trajectory across Africa; in Egypt, for instance, emission levels in 2010 had increased by over 141% from the 1990 baseline. Efforts such as climate change adaptation and mitigation financing have also hit obstacles on the continent: the international community and developed nations stress that Africa still has limited human, institutional, and financial systems capable of attracting climate funding from the developed economies.
Using a qualitative multi-case study method, supplemented by interviews of key actors and comprehensive textual analysis of the relevant literature, this paper dissects the key emission and air pollutant sources and their impact on the well-being of the African people, and puts forward suggestions and remedial mechanisms for these challenges. The findings reveal that whereas climate change mitigation plans appear comprehensive and good on paper for many African countries, such as Uganda, lingering political interference, limited research-guided planning, lack of population engagement, irrational resource allocation, and limited system and personnel capacity have largely impeded the realization of the set targets. Recommendations are put forward to address the climate change impacts that threaten the food security, health, and livelihoods of the people of the continent.

Keywords: Africa, air pollution, climate change, mitigation, emissions, effective planning, institutional strengthening

Procedia PDF Downloads 84
194 Language in International Students’ Cross-Cultural Adaptation: Case Study of Ukrainian Students in Taiwan and Lithuania

Authors: Min-Hsun Liao

Abstract:

Since the outbreak of war between Russia and Ukraine in February 2022, universities around the world have extended a helping hand to Ukrainian students whose academic careers were unexpectedly interrupted. Tunghai University (THU) in Taiwan and Mykolas Romeris University (MRU) in Lithuania are among the many universities offering short- and long-term scholarships to host Ukrainian students amid the war crisis. This mixed-methods study examines the cross-cultural adjustment processes of Ukrainian students in Taiwan; the research team at MRU will conduct a parallel study with their Ukrainian students. Both institutions are committed to gaining insight into these students' adjustment processes through cross-institutional collaboration. Studies show that although international students come from different cultural backgrounds, the difficulties they face while studying abroad are comparable and vary in intensity, ranging from learning the language of the host country, adopting cultural customs, and adjusting culinary preferences to the sociocultural shock of separation from family and friends. These problems have been the subject of numerous studies, whose findings indicate that such challenges, if not properly addressed, can lead to significant stress, despair, and failure in academics or other endeavors for international students, not to mention those who have had to leave home involuntarily and settle into a completely new environment. Among these challenges, the language of the host country is foremost: international students' adjustment, particularly language acquisition, is critical to their psychological, academic, and sociocultural well-being.
Both quantitative and qualitative data will be collected: 1) the International Student Cross-cultural Adaptation Survey (ISCAS) will be distributed to all Ukrainian students at both institutions; 2) one-on-one interviews will be conducted to gain a deeper understanding of their adaptations; and 3) t-tests or ANOVA will be calculated to determine significant differences between the languages used and the adaptation patterns of Ukrainian students. The significance of this study aligns with three SDGs: quality education; peace, justice, and strong institutions; and partnerships for the goals. The THU and MRU research teams believe that through partnership, both institutions can benefit greatly from sharing data, avoiding fixed interpretations, and exchanging contextual insights, which will help improve the overall quality of education for international students and promote peace and justice through strong institutions. Because the impact of host-country language proficiency on academic and sociocultural adjustment remains inconclusive, the outcome of the study will shed new light on the relationship between language and the various adjustments. In addition, the feedback from Ukrainian students will help other host countries better serve international students who must flee their home countries for an undisturbed education.
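The group comparison planned in step 3 can be sketched with a Welch two-sample t-test, which does not assume equal variances. The adaptation scores and group labels below are hypothetical, not ISCAS data; they only illustrate the computation.

```python
import math
from statistics import mean, variance

# Hypothetical adaptation scores for two language groups (NOT survey data):
group_a = [3.2, 3.8, 4.1, 3.5, 3.9, 4.0]   # e.g. students learning the host language
group_b = [2.8, 3.0, 3.3, 2.9, 3.1, 3.4]   # e.g. students studying in English

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t_stat, df = welch_t(group_a, group_b)
print(f"t = {t_stat:.2f}, df ≈ {df:.1f}")
```

In practice the t statistic would be compared against the t distribution with the computed degrees of freedom (e.g. via a statistics package) to obtain a p-value.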

Keywords: international students, Ukrainian students, cross-cultural adaptation, host country language, acculturation theory

Procedia PDF Downloads 78
193 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses

Authors: André Jesus, Yanjie Zhu, Irwanda Laory

Abstract:

Structural health monitoring (SHM) is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts must deal with a considerable variety of uncertainties that arise during a monitoring process. In particular, the widespread application of numerical models (model-based SHM) is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data; however, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes are estimated in a four-stage process (modular Bayesian approach). This methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. The approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, the formulation was extended to multiple responses. The efficiency of the algorithm was tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters.
A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is taken as the parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation; based on previous literature, using different types of responses (strain, acceleration, and displacement) should also mitigate the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than that of Bayesian methods accounting for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
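The discrepancy-function idea behind the systematic uncertainty can be sketched on a toy problem. This is not the bridge FEM or the Gaussian-process machinery of the paper, only the decomposition y(x) = eta(x, theta) + delta(x) + eps with a grid least-squares stand-in for the calibration stage:

```python
# Toy setup: a hypothetical linear "numerical model" eta, calibrated against
# synthetic observations that carry a quadratic systematic discrepancy delta.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [2.0 * x + 0.3 * x ** 2 for x in xs]   # true slope 2.0 + discrepancy 0.3*x^2

def eta(x, theta):
    """Simplified numerical model of the response."""
    return theta * x

# Stage 1 (calibration stand-in): point estimate of theta over a coarse grid.
theta_hat = min((t / 100 for t in range(100, 301)),
                key=lambda th: sum((y - eta(x, th)) ** 2 for x, y in zip(xs, ys)))

# Stage 2: residuals serve as a pointwise estimate of the discrepancy delta(x).
# Note theta_hat != 2.0: part of delta is absorbed into the parameter, which is
# the identifiability issue the multi-response extension aims to reduce.
delta_hat = [y - eta(x, theta_hat) for x, y in zip(xs, ys)]
print(f"theta_hat = {theta_hat:.2f} (true model slope was 2.0)")
print("estimated discrepancy:", [round(d, 3) for d in delta_hat])
```

In the paper's modular Bayesian approach both eta and delta are Gaussian-process surrogates with their own hyperparameters; the toy version only shows why a discrepancy term and parameter estimate trade off against each other.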

Keywords: bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process

Procedia PDF Downloads 328
192 Photonic Dual-Microcomb Ranging with Extreme Speed Resolution

Authors: R. R. Galiev, I. I. Lykov, A. E. Shitikov, I. A. Bilenko

Abstract:

Dual-comb interferometry is based on the mixing of two optical frequency combs with slightly different line spacings, which maps the optical spectrum into the radio-frequency domain for subsequent digitization and numerical processing. The dual-comb approach enables diverse applications, including metrology, fast high-precision spectroscopy, and distance ranging. Ordinary frequency-modulated continuous-wave (FMCW) Light Detection and Ranging systems (LIDARs) suffer from two main disadvantages: slow, unreliable mechanical spatial scanning, and the rather wide linewidth of conventional lasers, which limits speed-measurement resolution. Dual-comb distance measurements with Allan deviations down to 12 nanometers at averaging times of 13 microseconds, along with ultrafast ranging at acquisition rates of 100 megahertz allowing in-flight sampling of gun projectiles moving at 150 meters per second, were previously demonstrated. Nevertheless, the pump lasers with EDFA amplifiers made the device bulky and expensive. An alternative approach is direct coupling of the laser to a reference microring cavity: backscattering can tune the laser to an eigenfrequency of the cavity via the so-called self-injection locking (SIL) effect, and the nonlinearity of the cavity allows solitonic frequency comb generation in the very same cavity. In this work, we developed a fully integrated, power-efficient, electrically driven dual-microcomb source based on semiconductor lasers self-injection locked to high-quality integrated Si3N4 microresonators. We obtained robust comb generation at 1400-1700 nm with 150 GHz or 1 THz line spacing and measured sub-1-kHz Lorentzian widths of stable, MHz-spaced beat notes in a GHz band using two separate chips, each pumped by its own self-injection-locked laser.
A deep investigation of the SIL dynamics allowed us to identify a turn-key operation regime even for affordable Fabry-Perot multifrequency lasers used as a pump; importantly, such lasers are usually more powerful than the DFB lasers also tested in our experiments. To test the advantages of the proposed technique, we experimentally measured the minimum detectable speed of a reflective object. The narrow line of the laser locked to the microresonator provides markedly better velocity accuracy, with velocity resolution down to 16 nm/s, whereas the free-running (non-SIL) diode laser allowed only 160 nm/s with good accuracy. The results agree with our estimations and open the way to LIDARs based on compact, inexpensive lasers. Our implementation uses affordable components, including semiconductor laser diodes and commercially available silicon nitride photonic circuits with microresonators.
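The 12 nm figure quoted above refers to the Allan deviation, a standard stability measure built from differences of consecutive averaged samples. A minimal sketch of the non-overlapping Allan deviation on hypothetical distance readings:

```python
import math

def allan_deviation(samples):
    """Non-overlapping Allan deviation of consecutive averaged samples."""
    diffs = [b - a for a, b in zip(samples, samples[1:])]
    return math.sqrt(0.5 * sum(d * d for d in diffs) / len(diffs))

# Hypothetical distance readings (nm), each already averaged over one
# averaging interval tau; these are illustrative, not measured data.
distances_nm = [100.0, 112.0, 104.0, 96.0, 108.0, 100.0]
print(f"Allan deviation ≈ {allan_deviation(distances_nm):.1f} nm")
```

Sweeping the averaging time tau (by re-averaging the raw stream into longer blocks before applying this function) yields the stability-versus-tau curve from which figures like "12 nm at 13 µs" are read off.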

Keywords: dual-comb spectroscopy, LIDAR, optical microresonator, self-injection locking

Procedia PDF Downloads 73
191 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes

Authors: Karolina Wieczorek, Sophie Wiliams

Abstract:

Introduction: Patient data are often collected in Electronic Health Record (EHR) systems for purposes such as providing care and reporting, and this information can be re-used to validate data models in clinical trials or epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from it: many notes discuss a diagnostic process, different tests, or whether a patient has a certain disease. The COVID-19 dataset in this study was produced by natural language processing (NLP), an automated algorithm that extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information containing patient data not captured in structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by the algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms with corresponding IDs was provided in Excel. Two independent medical student researchers were given this dictionary of SNOMED terms to refer to when screening the notes, working on two separate datasets, "A" and "B", respectively. Notes were screened to check that the correct terms had been picked up by the algorithm and that negated terms had not. Results: Implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020.
The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating their provenance and quality, including completeness and accuracy. The validation of the algorithm yielded a precision of 0.907, a recall of 0.416, and an F-score of 0.570. The percentage enhancement from NLP-extracted terms over regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher for complications (16.6%), presenting illness (29.53%), chronic procedures (30.3%), and acute procedures (45.1%). Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials, for example to assess study exclusion criteria for participants in vaccine development.
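The reported metrics follow the standard precision/recall definitions, so the F-score can be recomputed directly from the quoted values as a quick consistency check:

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision, recall = 0.907, 0.416          # reported validation results
print(f"F-score = {f_score(precision, recall):.3f}")   # matches the reported 0.570
```

The high precision with low recall indicates the algorithm rarely flags wrong terms but misses a substantial fraction of true mentions, which is consistent with the manual-validation findings.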

Keywords: automated, algorithm, NLP, COVID-19

Procedia PDF Downloads 102
190 Temporal and Spatio-Temporal Stability Analyses in Mixed Convection of a Viscoelastic Fluid in a Porous Medium

Authors: P. Naderi, M. N. Ouarzazi, S. C. Hirata, H. Ben Hamed, H. Beji

Abstract:

The stability of mixed convection in a Newtonian fluid layer heated from below and cooled from above, also known as the Poiseuille-Rayleigh-Bénard problem, has been extensively investigated over the past decades; to our knowledge, mixed convection in porous media has received much less attention in the published literature. The present paper extends the mixed-convection problem in porous media to the case of viscoelastic fluid flow, owing to its numerous environmental and industrial applications, such as the extrusion of polymer fluids, solidification of liquid crystals, suspension solutions, and petroleum activities. Without a superimposed through-flow, the natural convection of a viscoelastic fluid in a saturated porous medium has already been treated, as have the effects of the fluid's viscoelastic properties on the linear and nonlinear dynamics of the thermoconvective instabilities. The elasticity of the fluid can lead either to a Hopf bifurcation, giving rise to oscillatory structures in the strongly elastic regime, or to a stationary bifurcation in the weakly elastic regime. The objective of this work is to examine the influence of the main horizontal flow on the linear characteristics of these two types of instabilities. Under the Boussinesq approximation and Darcy's law extended to a viscoelastic fluid, a temporal stability approach shows that the conditions for the appearance of longitudinal rolls are identical to those found in the absence of through-flow. For general three-dimensional (3D) perturbations, a Squire transformation allows the complex frequencies associated with the 3D problem to be deduced from those obtained by solving the two-dimensional one.
Numerical resolution of the eigenvalue problem shows that the through-flow has a destabilizing effect and selects a convective configuration organized in purely transverse rolls that oscillate in time and propagate in the direction of the main flow. In addition, using the mathematical formalism of absolute and convective instabilities, we study the nature of unstable three-dimensional disturbances. It is shown that for a non-vanishing through-flow, general three-dimensional instabilities are convectively unstable, meaning that in the absence of a continuous noise source they are advected out of the porous medium and no long-term pattern is observed. In contrast, purely transverse rolls may undergo a transition to an absolutely unstable regime and therefore affect the porous medium everywhere, even in the absence of a noise source. The absolute instability threshold, and the frequency and wave number associated with purely transverse rolls, are determined as functions of the Péclet number and the viscoelastic parameters. Results are discussed and compared with those obtained from laboratory experiments on Newtonian fluids.

Keywords: instability, mixed convection, porous media, viscoelastic fluid

Procedia PDF Downloads 341
189 Mechanical Properties and Antibiotic Release Characteristics of Poly(methyl methacrylate)-based Bone Cement Formulated with Mesoporous Silica Nanoparticles

Authors: Kumaran Letchmanan, Shou-Cang Shen, Wai Kiong Ng

Abstract:

Postoperative implant-associated infections in soft tissue and bone remain a serious complication in orthopaedic surgery, leading to impaired healing, re-implantation, prolonged hospital stays, and increased costs. Drug-loaded implants that sustain antibiotic release at the local site are of current research interest for reducing the risk of postoperative infection and osteomyelitis, thereby minimizing the need for follow-up care and increasing patient comfort. However, improved drug release from drug-loaded bone cements is usually accompanied by a loss in mechanical strength, which is critical for weight-bearing bone cement. Recently, more attempts have been made to develop techniques that enhance antibiotic elution while preserving the mechanical properties of the cement. The present study investigates the influence of adding mesoporous silica nanoparticles (MSN) on the in vitro release kinetics of gentamicin (GTMC) and on the mechanical properties of bone cements. Simplex P was formulated with MSN and loaded with GTMC by direct impregnation; Simplex P with a water-soluble porogen (xylitol), Simplex P with a high loading of GTMC, and the commercial bone cement CMW Smartset GHV were used as controls. MSN-formulated bone cements increased the release of GTMC 3-fold, with a cumulative release of more than 46%, compared with the control groups, and a sustained release could be achieved over two months. The loaded nano-sized MSN, with uniform pore channels, builds an effective nano-network path in the bone cement that facilitates the diffusion and extended release of GTMC. Compared with the formulations using xylitol or high GTMC loading, the incorporation of MSN showed no detrimental effect on biomechanical properties, with no significant changes in the mechanical properties relative to the original bone cement.
After drug release for two months, the bending modulus of MSN-formulated bone cements is 4.49 ± 0.75 GPa and the compression strength is 92.7 ± 2.1 MPa (similar to the compression strength of Simplex P: 93.0 ± 1.2 MPa). The unaffected mechanical properties of MSN-formulated bone cements were due to the unchanged microstructure of the bone cement, whereby more than 98% of the MSN remains in the matrix and supports the bone cement structure. In contrast, large portions of extra voids were observed for the formulations using xylitol and high drug loading after the drug release study, causing compressive strengths below the ASTM F451 and ISO 5833 minimum of 70 MPa. These results demonstrate the potential applicability of MSN-functionalized poly(methyl methacrylate)-based bone cement as a highly efficient, sustained and local drug delivery system with good mechanical properties.

Keywords: antibiotics, biomechanical properties, bone cement, sustained release

Procedia PDF Downloads 257
188 Understanding the Diversity of Antimicrobial Resistance among Wild Animals, Livestock and Associated Environment in a Rural Ecosystem in Sri Lanka

Authors: B. M. Y. I. Basnayake, G. G. T. Nisansala, P. I. J. B. Wijewickrama, U. S. Weerathunga, K. W. M. Y. D. Gunasekara, N. K. Jayasekera, A. W. Kalupahana, R. S. Kalupahana, A. Silva- Fletcher, K. S. A. Kottawatta

Abstract:

Antimicrobial resistance (AMR) has attracted significant attention worldwide as an emerging threat to public health. Understanding the role of livestock and wildlife, together with the shared environment, in the maintenance and transmission of AMR is of utmost importance, due to their interactions with humans, for combating the issue through a One Health approach. This study aims to investigate the extent of AMR distribution among wild animals, livestock, and the environment cohabiting in a rural ecosystem in Sri Lanka: Hambegamuwa. A one square km area at Hambegamuwa was mapped using GPS as the sampling area. The study was conducted for a period of five months from November 2020. Voided fecal samples were collected from 130 wild animals and 123 livestock (buffalo, cattle, chicken, and turkey), along with 36 soil and 30 water samples associated with livestock and wildlife. From the samples, Escherichia coli (E. coli) was isolated, and their AMR profiles were investigated for 12 antimicrobials using the disk diffusion method following the CLSI standard. Seventy percent (91/130) of wild animal, 93% (115/123) of livestock, 89% (32/36) of soil, and 63% (19/30) of water samples were positive for E. coli. A maximum of two E. coli isolates from each sample, totaling 467, were tested for sensitivity, of which 157, 208, 62, and 40 were from wild animals, livestock, soil, and water, respectively. The highest resistance in E. coli from livestock (13.9%) and wild animals (13.3%) was to ampicillin, followed by streptomycin. Apart from that, E. coli from livestock and wild animals revealed resistance mainly against tetracycline, cefotaxime, trimethoprim/sulfamethoxazole, and nalidixic acid at levels below 10%. Ten cefotaxime-resistant E. coli were reported from wild animals, including four elephants, two land monitors, a pigeon, a spotted dove, and a monkey, which was a significant finding. E. coli from soil samples showed resistance primarily against ampicillin, streptomycin, and tetracycline, at levels lower than in livestock and wildlife. Two water samples had cefotaxime-resistant E. coli as the only resistant isolates out of the 30 water samples tested. Of the total E. coli isolates, 6.4% (30/467) were multi-drug resistant (MDR), comprising 18, 9, and 3 isolates from livestock, wild animals, and soil, respectively. Among the 18 livestock MDRs, the highest number (13/18) was from poultry. The nine wild animal MDRs were from spotted dove, pigeon, land monitor, and elephant. Based on CLSI criteria, 60 E. coli isolates, of which 40, 16, and 4 were from livestock, wild animals, and the environment, respectively, were screened for Extended Spectrum β-Lactamase (ESBL) production. Despite being a rural ecosystem, AMR and MDR are prevalent, even at low levels. E. coli from livestock, wild animals, and the environment reflected a similar spectrum of AMR, with ampicillin, streptomycin, tetracycline, and cefotaxime being the predominant antimicrobials of resistance. Wild animals may have acquired AMR via direct contact with livestock or via the environment, as antimicrobials are rarely used in wild animals. A source attribution study, including the effects of the natural environment on AMR, can be proposed, as the presence of AMR even in this less contaminated rural ecosystem is alarming.
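The MDR counts above follow the common working definition of multi-drug resistance as non-susceptibility to agents from three or more antimicrobial classes. A minimal sketch of that tallying logic (the drug-to-class mapping below is a simplified illustration, not the panel used in the study):

```python
# Hypothetical mapping from tested drugs to antimicrobial classes.
CLASS_OF = {
    "ampicillin": "penicillins",
    "cefotaxime": "cephalosporins",
    "streptomycin": "aminoglycosides",
    "tetracycline": "tetracyclines",
    "nalidixic acid": "quinolones",
    "trimethoprim/sulfamethoxazole": "folate pathway inhibitors",
}

def is_mdr(resistant_drugs):
    """True if an isolate resists agents from at least three distinct classes."""
    return len({CLASS_OF[d] for d in resistant_drugs}) >= 3

print(is_mdr(["ampicillin", "streptomycin", "tetracycline"]))  # True: 3 classes
print(is_mdr(["ampicillin", "cefotaxime"]))                    # False: 2 classes
```

Counting classes rather than drugs avoids inflating the tally when two drugs share a resistance mechanism.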

Keywords: AMR, Escherichia coli, livestock, wildlife

Procedia PDF Downloads 220
187 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning

Authors: Ali Kazemi

Abstract:

The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated vulnerabilities to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may have different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing transactions' temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (retail, groceries, online services, etc.), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods have varying risk levels associated with fraud. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies.
The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services.
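As a minimal, self-contained sketch of the reconstruction-error mechanism, the snippet below uses a linear (PCA-based) encoder/decoder as a stand-in for the neural autoencoder, on synthetic transaction features; the flagging rule (thresholding the error at a high quantile of the normal data) is the part that carries over:

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a k-component linear 'autoencoder' (PCA) on normal transactions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]          # decoder rows are the top-k principal axes

def reconstruction_error(X, mu, W):
    """Per-row error between a transaction and its encode/decode round trip."""
    Z = (X - mu) @ W.T         # encode to k dimensions
    Xh = Z @ W + mu            # decode back to feature space
    return np.linalg.norm(X - Xh, axis=1)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 4))   # synthetic legitimate transactions
fraud = np.full((5, 4), 500.0)                 # synthetic gross outliers
mu, W = fit_linear_autoencoder(normal, k=2)
threshold = np.quantile(reconstruction_error(normal, mu, W), 0.99)
flags = reconstruction_error(fraud, mu, W) > threshold
print(int(flags.sum()))                        # count of flagged outliers
```

In the study itself the autoencoder is a trained neural network and the threshold would be tuned on validation data; the isolation forest plays the complementary role of scoring points by how few random splits are needed to isolate them.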

Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis

Procedia PDF Downloads 59
186 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a process for managing energy consumption towards energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features. Appliance features are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on power demand, and then detecting the times at which each selected appliance changes state. In order to fit the capabilities of existing smart meters, we work on low sampling rate data with a frequency of (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software package that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect. It also facilitates the extraction of the specific features used for general appliance modeling.
In addition to this, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low sampling rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the operation of the selected appliance falls, along with a time vector of the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical and statistical features. Afterwards, these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
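The way DTW tolerates timing differences between appliance activations while remaining sensitive to power-level differences can be sketched with a short, self-contained example (the step-shaped signatures below are hypothetical, not taken from LPG or REDD):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D power signatures."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical 1/60 Hz signatures (watts): a step profile and a shifted copy.
sig = np.array([0, 0, 2000, 2000, 2000, 0, 0], dtype=float)
shifted = np.array([0, 2000, 2000, 2000, 0, 0, 0], dtype=float)
print(dtw_distance(sig, shifted))   # 0.0: warping absorbs the pure time shift
print(dtw_distance(sig, sig * 0.5)) # 3000.0: three samples differ by 1000 W each
```

This shift-invariance is why DTW suits event-aligned signatures from a low sampling rate, where the same appliance cycle rarely starts at the same sample index twice.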

Keywords: general appliance model, non intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 82
185 Impact of National Institutions on Corporate Social Performance

Authors: Debdatta Mukherjee, Abhiman Das, Amit Garg

Abstract:

In recent years, there has been growing interest in the corporate social responsibility of firms in both the academic literature and the business world. Since business forms a part of society, incorporating socio-environmental concerns into its value chain activities is vital for ensuring mutual sustainability and prosperity. Until now, however, most works have been either descriptive or normative rather than positivist in tone. Even the few with a positivist approach have mostly studied the link between corporate financial performance and corporate social performance. However, these studies have been severely criticized by many eminent authors on the grounds that they lack a theoretical basis for their findings. These authors have also argued that, apart from corporate financial performance, there must be certain other crucial influences that determine the corporate social performance of firms. In fact, several studies have indicated that firms operating under distinct national institutions show significant variations in the corporate social responsibility practices they undertake. This clearly suggests that the institutional context of the country in which firms operate is a key determinant of their corporate social performance. Therefore, this paper uses an institutional framework to understand why corporate social performance varies across countries. It examines the impact of country-level institutions on corporate social performance using a sample of 3240 global publicly-held firms across 33 countries, covering the period 2010-2015. The country-level institutions include public institutions, private institutions, markets, and the capacity to innovate. Econometric analysis has mainly been used to assess this impact. A three-way panel data analysis using fixed effects has been employed to test and validate the appropriate hypotheses.
Most of the empirical findings confirm our hypotheses, and the economic significance indicates the specific impact of each variable and its importance relative to the others. The results suggest that institutional determinants such as the ethical behavior of private institutions, the goods market, the labor market, and the innovation capacity of a country are significantly related to the corporate social performance of firms. Based on our findings, a few implications for policy makers from across the world have also been suggested. The institutions in a country should promote competition. The government should use policy levers for upgrading home demand, such as setting challenging yet flexible safety, quality and environmental standards, framing policies governing buyer information, providing recourse against low-quality goods and services, and promoting early adoption of new and technologically advanced products. Moreover, institution building in a country should facilitate and improve the capacity of firms to innovate. Therefore, the proposed study argues that country-level institutions impact the corporate social performance of firms, empirically validates this claim, suggests policy implications, and attempts to contribute to an extended understanding of corporate social responsibility and corporate social performance in a multinational context.
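The fixed-effects estimation underlying the analysis can be sketched with a minimal two-way example on synthetic data (firm and year effects with an assumed true slope of 2.0). Note that sequential within-demeaning is exact only for balanced panels, as constructed here:

```python
import numpy as np

def within_demean(x, groups):
    """Subtract group means (entity or year) from each observation."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        idx = groups == g
        out[idx] -= out[idx].mean(axis=0)
    return out

def fixed_effects_beta(y, X, firm, year):
    """Two-way fixed-effects slope via sequential demeaning + OLS (illustrative;
    exact for balanced panels only)."""
    yt = within_demean(within_demean(y, firm), year)
    Xt = within_demean(within_demean(X, firm), year)
    beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
    return beta

rng = np.random.default_rng(1)
n_firm, n_year = 30, 6
firm = np.repeat(np.arange(n_firm), n_year)    # balanced panel: 30 firms x 6 years
year = np.tile(np.arange(n_year), n_firm)
x = rng.normal(size=firm.size)
y = 2.0 * x + 0.5 * firm + 0.3 * year + rng.normal(0, 0.1, firm.size)
beta = fixed_effects_beta(y, x[:, None], firm, year)
print(round(float(beta[0]), 1))                # close to the assumed slope 2.0
```

Demeaning sweeps out the firm and year effects so that only the within variation identifies the slope, which is the logic of the fixed-effects panel estimator used in the paper.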

Keywords: corporate social performance, corporate social responsibility, institutions, markets

Procedia PDF Downloads 168
184 Alternate Optical Coherence Tomography Technologies in Use for Corneal Diseases Diagnosis in Dogs and Cats

Authors: U. E. Mochalova, A. V. Demeneva, A. G. Shilkin, J. Yu. Artiushina

Abstract:

Objective. In medical ophthalmology, OCT has been actively used over the last decade. It is a modern non-invasive method of high-precision hardware examination that gives a detailed cross-sectional image of eye tissue structure at a high level of resolution, providing in vivo morphological information at the microscopic level about corneal tissue, the structures of the anterior segment, the retina, and the optic nerve. The purpose of this study was to explore the possibility of using OCT technology in the complex ophthalmological examination of dogs and cats, and to characterize the revealed pathological structural changes in the corneal tissue of cats and dogs with some of the most common corneal diseases. Procedures. Optical coherence tomography of the cornea was performed in 112 animals: 68 dogs and 44 cats. In total, 224 eyes were examined. Pathologies of the organ of vision included: dystrophy and degeneration of the cornea, endothelial corneal dystrophy, dry eye syndrome, chronic superficial vascular keratitis, pigmented keratitis, corneal erosion, ulcerative stromal keratitis, corneal sequestration, chronic glaucoma, as well as the postoperative period after keratoplasty. When performing OCT, we used certified medical devices: "Huvitz HOCT-1/1F", "Optovue iVue 80" and "SOCT Copernicus Revo (60)". Results. We present the results of a clinical study on the use of optical coherence tomography (OCT) of the cornea in cats and dogs, performed by the authors of this article, in the complex diagnosis of keratopathies of various origins: endothelial corneal dystrophy, pigmented keratitis, chronic keratoconjunctivitis, chronic herpetic keratitis, ulcerative keratitis, traumatic corneal damage, feline corneal sequestration, and chronic keratitis complicating the course of glaucoma. The characteristics of OCT scans of the corneas of cats and dogs without corneal pathologies are also given.
OCT scans of various corneal pathologies in dogs and cats are presented, with a description of the revealed pathological changes. Of great clinical interest are the data obtained during OCT of the corneas of animals that had undergone keratoplasty operations using various forms of grafts. Conclusions. OCT makes it possible to assess the thickness of, and pathological structural changes in, the corneal surface epithelium, the corneal stroma and Descemet's membrane. We can measure them, determine their exact localization, and record pathological changes. Clinical observation of the dynamics of a pathological process in the cornea using OCT makes it possible to evaluate the effectiveness of drug treatment. In case of negative dynamics of corneal disease, it is necessary to determine the indications for surgical treatment (assessing the thickness of the cornea, the localization of its thinning zones, and the depth and area of pathological changes). Based on corneal OCT, it is possible to choose the optimal surgical treatment for the patient and the technique and depth of optically reconstructive surgery (penetrating or anterior lamellar keratoplasty), and to determine the depth and diameter of the planned microsurgical trepanation of corneal tissue, which will ensure good adaptation of the edges of the donor material.

Keywords: optical coherence tomography, corneal sequestration, optical coherence tomography of the cornea, corneal transplantation, cat, dog

Procedia PDF Downloads 71
183 Landslide Hazard Assessment Using Physically Based Mathematical Models in Agricultural Terraces at Douro Valley in North of Portugal

Authors: C. Bateira, J. Fernandes, A. Costa

Abstract:

The Douro Demarcated Region (DDR) is a Port wine production region. In the NE of Portugal, the strong incision of the Douro valley has developed very steep slopes, organized into agricultural terraces, which have experienced an intense and deep transformation in order to implement the mechanization of the work. The old terrace system, based on vertical stone wall support structures, was replaced by terraces with earth embankments that have experienced huge instability. This terrace instability has important economic and financial consequences for agricultural enterprises. This paper presents and develops cartographic tools to assess embankment instability and identify the areas prone to instability. The priority in this evaluation is the use of physically based mathematical models and the development of a validation process based on an inventory of past embankment instability. We used the shallow landslide stability model (SHALSTAB), based on physical parameters such as cohesion (c'), friction angle (φ), hydraulic conductivity, soil depth, soil specific weight (ϱ), slope angle (α) and contributing areas computed by the Multiple Flow Direction method (MFD). A terraced area can be analysed by these models only if we have very detailed information representative of the terrain morphology, on which the slope angle and the contributing areas depend. We achieved that using digital elevation models (DEM) with high resolution (40 cm pixels), resulting from a set of photographs taken by a flight at 100 m altitude with a pixel resolution of 12 cm. The slope angle results from this DEM. On the other hand, the MFD contributing area models the internal flow and is an important element to define the spatial variation of soil saturation. That internal flow is based on the DEM, supported by the statement that the interflow, although not coincident with the superficial flow, has important similarity with it.
Electrical resistivity monitoring values were related to the MFD contributing areas built from a DEM of 1 m resolution and revealed a consistent correlation. That analysis, performed on the area, showed a good correlation, with R² of 0.72 and 0.76 at 1.5 m and 2 m depth, respectively. Considering that, a DEM with 1 m resolution was the base to model the real internal flow. Thus, we assumed that the contributing area at 1 m resolution modelled by MFD is representative of the internal flow of the area. In order to solve this problem, we used a set of generalized DEMs to build the contributing areas used in SHALSTAB. Those DEMs, at several resolutions (1 m and 5 m), were built from a set of photographs with 50 cm resolution taken by a flight at 5 km altitude. Using this combination of maps, we modelled several final maps of terrace instability and performed a validation process with the contingency matrix. The best final instability map combines the slope map from a DEM of 40 cm resolution and an MFD map from a DEM of 1 m resolution, with a true positive rate (TPR) of 0.97, a false positive rate (FPR) of 0.47, accuracy (ACC) of 0.53, precision (PPV) of 0.0004, and a TPR/FPR ratio of 2.06.
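The SHALSTAB criterion referenced above can be sketched in a standard formulation that couples the infinite-slope stability equation with steady-state hydrology to give a critical recharge/transmissivity ratio q/T per cell (lower log values indicate less stable cells). The parameter values below are hypothetical illustrations, not the calibrated Douro values:

```python
import math

def shalstab_log_qt(a, b, theta, z, c, phi, rho_s, rho_w=1000.0, g=9.81):
    """log10 of the critical q/T (1/m) at which a cell reaches failure.
    a: contributing area (m^2), b: cell width (m), theta: slope angle (rad),
    z: soil depth (m), c: effective cohesion (Pa), phi: friction angle (rad),
    rho_s/rho_w: soil/water density (kg/m^3)."""
    bracket = (rho_s / rho_w) * (1.0 - math.tan(theta) / math.tan(phi)) \
              + c / (rho_w * g * z * math.cos(theta) ** 2 * math.tan(phi))
    qt = (b * math.sin(theta) / a) * bracket
    return math.log10(qt)

# Hypothetical cells: a steep, convergent terrace cell vs. a gentle one.
steep = shalstab_log_qt(a=400.0, b=1.0, theta=math.radians(35), z=1.0,
                        c=2000.0, phi=math.radians(33), rho_s=1800.0)
gentle = shalstab_log_qt(a=50.0, b=1.0, theta=math.radians(15), z=1.0,
                         c=2000.0, phi=math.radians(33), rho_s=1800.0)
print(steep < gentle)  # True: the steep, convergent cell fails at lower recharge
```

The slope angle enters through the DEM-derived θ and the saturation through the MFD contributing area a, which is why the resolutions of those two inputs are chosen separately in the mapping described above.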

Keywords: agricultural terraces, cartography, landslides, SHALSTAB, vineyards

Procedia PDF Downloads 179
182 Behavioral and EEG Reactions in Native Turkic-Speaking Inhabitants of Siberia and Siberian Russians during Recognition of Syntactic Errors in Sentences in Native and Foreign Languages

Authors: Tatiana N. Astakhova, Alexander E. Saprygin, Tatyana A. Golovko, Alexander N. Savostyanov, Mikhail S. Vlasov, Natalia V. Borisova, Alexandra G. Karpova, Urana N. Kavai-ool, Elena D. Mokur-ool, Nikolay A. Kolchanov, Lubomir I. Aftanas

Abstract:

The aim of the study is to compare behavioral and EEG reactions of Turkic-speaking inhabitants of Siberia (Tuvinians and Yakuts) and Russians during the recognition of syntax errors in native and foreign languages. 63 healthy aboriginals of the Tyva Republic, 29 inhabitants of the Sakha (Yakutia) Republic, and 55 Russians from Novosibirsk participated in the study. All participants completed a linguistic task in which they had to find a syntax error in written sentences. Russian participants completed the task in Russian and in English. Tuvinian and Yakut participants completed the task in Russian, English, and Tuvinian or Yakut, respectively. EEGs were recorded while the tasks were being solved. For Russian participants, EEGs were recorded using 128 channels. The electrodes were placed according to the extended International 10-10 system, and the signals were amplified using Neuroscan (USA) amplifiers. For Tuvinians and Yakuts, EEGs were recorded using 64 channels and Brain Products (Germany) amplifiers. In all groups, 0.3-100 Hz analog filtering and a 1000 Hz sampling rate were used. Response speed and the accuracy of error recognition were used as parameters of behavioral reactions. The event-related potential (ERP) responses P300 and P600 were used as indicators of brain activity. The accuracy of solving the tasks and the response speed of Russians were higher for Russian than for English. The P300 amplitudes in Russians were higher for English, while the P600 amplitudes in the left temporal cortex were higher for the Russian language. Both Tuvinians and Yakuts showed no difference in the accuracy of solving tasks in Russian and in their respective national languages (Tuvinian and Yakut). However, their response speed was faster for tasks in Russian than for tasks in their national language. Tuvinians and Yakuts showed poor accuracy in English, but their response speed was higher for English than for Russian and the national languages.
In Tuvinians, there were no differences in the P300 and P600 amplitudes or in cortical topology between Russian and Tuvinian, but there was a difference for English. In Yakuts, the P300 and P600 amplitudes and the topology of the ERPs for Russian were the same as those Russians had for Russian. In Yakuts, brain reactions during Yakut and English comprehension showed no difference and reflected foreign-language comprehension, while Russian comprehension reflected native-language comprehension. We found that Tuvinians recognized both Russian and Tuvinian as native languages, and English as a foreign language. Yakuts recognized both English and Yakut as foreign languages, and only Russian as a native language. According to the questionnaire, both Tuvinians and Yakuts use the national language as a spoken language, whereas they do not use it for writing. This may well be the reason that Yakuts perceive written Yakut as a foreign language while perceiving written Russian as native.
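The P300/P600 amplitudes discussed above are obtained by averaging many stimulus-locked epochs so that the component rises above the background EEG. A minimal sketch on synthetic single-channel data (the 1000 Hz sampling rate matches the recordings; the signal shape and latency window are illustrative assumptions):

```python
import numpy as np

def erp_average(eeg, onsets, pre, post):
    """Average stimulus-locked epochs with pre-stimulus baseline correction."""
    epochs = np.stack([eeg[t - pre:t + post] for t in onsets])
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)
    return (epochs - baseline).mean(axis=0)

def window_amplitude(erp, pre, fs, t0, t1):
    """Mean ERP amplitude in a latency window, e.g. around 600 ms for a P600."""
    i0, i1 = pre + int(t0 * fs), pre + int(t1 * fs)
    return erp[i0:i1].mean()

fs = 1000                                   # Hz, as in the recordings above
rng = np.random.default_rng(2)
eeg = rng.normal(0.0, 1.0, 60_000)          # synthetic single-channel signal
onsets = np.arange(2_000, 57_000, 2_000)    # synthetic stimulus onsets
t = np.arange(1_000)
p600_like = 5.0 * np.exp(-((t - 600) / 50.0) ** 2)   # positivity near 600 ms
for o in onsets:
    eeg[o:o + 1_000] += p600_like
erp = erp_average(eeg, onsets, pre=200, post=1_000)
amp = window_amplitude(erp, pre=200, fs=fs, t0=0.55, t1=0.65)
print(amp > 2.0)  # True: the averaged positivity survives the background noise
```

Averaging N epochs reduces the noise by roughly the square root of N, which is what lets a fixed-latency component such as the P600 emerge from single-trial EEG.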

Keywords: EEG, language comprehension, native and foreign languages, Siberian inhabitants

Procedia PDF Downloads 534
181 Urban Growth and Its Impact on Natural Environment: A Geospatial Analysis of North Part of the UAE

Authors: Mohamed Bualhamam

Abstract:

Due to the complex nature of the tourism resources of the Northern part of the United Arab Emirates (UAE), the potential of Geographical Information Systems (GIS) and Remote Sensing (RS) to resolve the associated planning issues was explored. The study was an attempt to use existing GIS data layers to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth, and to give some specific recommendations to protect the area. By identifying sensitive natural environment and archaeological heritage resources, public agencies and citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas. The paper concludes that applying GIS and RS to the study of the impact of urban growth on tourism resources is a strong and effective tool that can aid tourism planning and decision-making. The study area is one of the fastest growing regions in the country. The increase in population across the region, as well as the rapid growth of towns, has increased the threat to natural resources and archaeological sites. Satellite remote sensing data have proven useful in assessing natural resources and in monitoring changes. The study used GIS and RS to identify sensitive natural environment and archaeological heritage resources that may be threatened by increased urban growth. The results of the GIS analyses show that the Northern part of the UAE has a variety of tourism resources, which can be used for future tourism development. Rapid urban development, in the form of small towns and different economic activities, is appearing in different places in the study area. This urban development has extended beyond the old towns and has negatively affected sensitive tourism resources in some areas.
The tourism resources of the Northern part of the UAE are highly complex, and thus require tools that aid effective decision-making to come to terms with the competing economic, social, and environmental demands of sustainable development. The UAE government should prepare tourism databases and a GIS system so that archaeological heritage information can be accessed by planners as part of development planning processes. Applications of GIS in urban planning and in tourism and recreation planning illustrate that GIS is a strong and effective tool that can aid tourism planning and decision-making. The power of GIS lies not only in the ability to visualize spatial relationships, but also, beyond the space, in a holistic view of the world with its many interconnected components and complex relationships. The worst of the damage could have been avoided by recognizing suitable limits, and adhering to some simple environmental guidelines and standards will allow tourism to develop successfully in a sustainable manner. By identifying the sensitive natural environment and archaeological heritage resources of the Northern part of the UAE, public agencies and private citizens are in a better position to successfully protect important natural lands and direct growth away from environmentally sensitive areas.

Keywords: GIS, natural environment, UAE, urban growth

Procedia PDF Downloads 265
180 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans

Authors: Jian Wu

Abstract:

For some years now, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to ‘restore a good working atmosphere’ and ‘preserve the natural environment’. These CSR criteria are based on concerns over environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was issued jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach in our study. First, we examine the debate between the ‘orthodox’ point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain ‘invisible hand’ which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no ‘invisible hand’ that can arrange everything in good order. This means that we cannot count on any ‘divine force’ to make corporations responsible towards society. Something more needs to be done in addition to firms’ economic and legal obligations. We then rely on financial theories and empirical evidence to examine the soundness of the foundations of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit ‘social contracts’ between a company and society itself.
A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to ‘legitimize’ their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between a firm’s CSR performance and its financial performance. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works conducted in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans – which is practiced so far in big companies – should be extended to others. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies that are under pressure in terms of returns in order to deal with international competition.

Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory

Procedia PDF Downloads 187
179 E-Business Role in the Development of the Economy of Sultanate of Oman

Authors: Mairaj Salim, Asma Zaheer

Abstract:

Oman has accomplished as much as or more than its fellow Gulf monarchies, despite starting from scratch considerably later, having less oil income to utilize, dealing with a larger and more rugged geography, and resolving a bitter civil war along the way. Of course, Oman's progress in the past 30-plus years has not been without problems and missteps, but the balance is squarely on the positive side of the ledger. Oil has been the driving force of the Omani economy since commercial production began in 1967. The oil industry supports the country’s high standard of living and is primarily responsible for its modern and expansive infrastructure, including electrical utilities, telephone services, roads, public education and medical services. In addition to extensive oil reserves, Oman also has substantial natural gas reserves, which are expected to play a leading role in the Omani economy in the twenty-first century. To reduce the country’s dependence on oil revenues, the government is restructuring the economy by directing investment to non-oil activities. Since the start of the 21st century, IT has changed the way tasks are performed. To manage affairs for the benefit of organizations and the economy, the Omani government has adopted e-business technologies for development. 
E-Business is important because it allows:
• Transformation of old economy relationships (vertical/linear relationships) into new economy relationships characterized by end-to-end relationship management solutions (integrated or extended relationships)
• Facilitation and organization of networks, where small firms depend on ‘partner’ firms for supplies and product distribution to meet customer demands
• SMEs to outsource back-end processes or cost centers, enabling the SME to focus on its core competence
• ICT to connect, manage and integrate processes internally and externally
• SMEs to join networks and enter new markets through shortened supply chains to increase market share, customers and suppliers
• SMEs to take up the benefits of e-business to reduce costs, increase customer satisfaction, improve client referral and attract quality partners
• New business models of collaboration for SMEs to increase their skill base
• SMEs to enter the virtual trading arena and increase their market reach
A national strategy for the advancement of information and communication technology (ICT) has been worked out, mainly to introduce e-government, e-commerce, and a digital society. An information technology complex, KOM (Knowledge Oasis Muscat), has been established, consisting of sections for information technology, incubator services, a shopping center for technology software and hardware, ICT colleges, e-government services and other relevant services. All these efforts play a vital role in the development of the Omani economy.

Keywords: ICT, ITA, CRM, SCM, ERP, KOM, SMEs, e-commerce and e-business

Procedia PDF Downloads 252
178 Intensive Care Nursing Experience of a Lung Cancer Patient Receiving Palliative Care

Authors: Huang Wei-Yi

Abstract:

Objective: This article explores the intensive care nursing experience of a terminal lung cancer patient who received palliative care after tracheal intubation. The patient was nearing death, and the family experienced sadness and grief as they faced the patient’s deteriorating condition and impending death. Methods: The patient was diagnosed with lung cancer in 2018 and received chemotherapy and radiation therapy with regular outpatient follow-ups. Due to brain metastasis and recent poor pain control and treatment outcomes, the patient was admitted to the intensive care unit (ICU), where the tracheal tube was removed and palliative care was initiated. During the care period, a holistic assessment was conducted, addressing the physical, psychological, social, and spiritual aspects of care. Medical records were reviewed, interviews and family meetings were held, and a comprehensive assessment was carried out by the critical care team in collaboration with the palliative care team. The primary nursing issues identified included pain, ineffective breathing patterns, fear of death, and altered tissue perfusion. Results: Throughout the care process, the palliative care nurse, along with the family, utilized listening, caring, companionship, pain management, essential oil massage, distraction, and comfortable positioning to alleviate the patient’s pain and breathing difficulties. The use of morphine 6 mg in 0.9% N/S 50 mL IV drip q6h reduced the FLACC pain score from 6 to 3. The patient’s respiratory rate improved from 28 breaths/min to 18-22 breaths/min, and sleep duration increased from 4 to 7 uninterrupted hours. The holistic palliative care approach, coupled with the involvement of the palliative care team, facilitated expressions of gratitude, apologies, and love between the patient and family. Visiting hours were extended, and with the nurse’s assistance, these moments were recorded and shared with the patient’s consent, providing cherished memories for the family. 
The patient’s end-of-life experience was thus improved, and the family was able to find peace. This case also served to promote the concept of palliative care, ensuring that more patients and families receive high-quality nursing care. Conclusion: When caring for terminal patients, collaboration with the palliative care team, including social workers, clergy, psychologists, and nutritionists, is essential. Involving the family in decision-making and providing opportunities for closeness and expressions of gratitude improves personalized care and enhances the patient's quality of life. Upon transfer to the ward, the patient maintained hemodynamic stability, including SBP 110-130 mmHg, a respiratory rate of 20-22 breaths/min, and a pain score <3. The patient was later discharged and transitioned to home hospice care for ongoing support.

Keywords: intensive care, lung cancer, palliative care, ICU

Procedia PDF Downloads 30
177 Synthesis and Characterization of pH-Sensitive Graphene Quantum Dot-Loaded Metal-Organic Frameworks for Targeted Drug Delivery and Fluorescent Imaging

Authors: Sayed Maeen Badshah, Kuen-Song Lin, Abrar Hussain, Jamshid Hussain

Abstract:

Liver cancer is a significant global health issue, ranking fifth in incidence and second in mortality. Effective therapeutic strategies are urgently needed to combat this disease, particularly in regions with high prevalence. This study focuses on developing and characterizing fluorescent organometallic frameworks as distinct drug delivery carriers with potential applications in both the treatment and biological imaging of liver cancer. This work introduces two distinct organometallic frameworks: the cake-shaped GQD@NH₂-MIL-125 and the cross-shaped M8U6/FM8U6. The GQD@NH₂-MIL-125 framework is particularly noteworthy for its high fluorescence, making it an effective tool for biological imaging. X-ray diffraction (XRD) analysis revealed specific diffraction peaks at 6.81° (011), 9.76° (002), and 11.69° (121), with an additional significant peak at 26° (2θ) corresponding to the carbon material. Morphological analysis using Field Emission Scanning Electron Microscopy (FE-SEM) and Transmission Electron Microscopy (TEM) demonstrated that the framework has a front particle size of 680 nm and a side particle size of 55±5 nm. High-resolution TEM (HR-TEM) images confirmed the successful attachment of graphene quantum dots (GQDs) onto the NH₂-MIL-125 framework. Fourier-Transform Infrared (FT-IR) spectroscopy identified crucial functional groups within the GQD@NH₂-MIL-125 structure, including O-Ti-O metal bonds within the 500 to 700 cm⁻¹ range, and N-H and C-N bonds at 1,646 cm⁻¹ and 1,164 cm⁻¹, respectively. BET isotherm analysis further revealed a specific surface area of 338.1 m²/g and an average pore size of 46.86 nm. This framework also demonstrated UV-active properties, as identified by UV-visible light spectra, and its photoluminescence (PL) spectra showed an emission peak around 430 nm when excited at 350 nm, indicating its potential as a fluorescent drug delivery carrier. 
In parallel, the cross-shaped M8U6/FM8U6 frameworks were synthesized and characterized using X-ray diffraction, which identified distinct peaks at 2θ = 7.4° (111), 8.5° (200), 9.2° (002), 10.8° (002), 12.1° (220), 16.7° (103), and 17.1° (400). FE-SEM, HR-TEM, and TEM analyses revealed particle sizes of 350±50 nm for M8U6 and 200±50 nm for FM8U6. These frameworks, synthesized from terephthalic acid (H₂BDC), displayed notable vibrational bonds, such as C=O at 1,650 cm⁻¹, Fe-O in MIL-88 at 520 cm⁻¹, and Zr-O in UiO-66 at 482 cm⁻¹. BET analysis showed specific surface areas of 740.1 m²/g with a pore size of 22.92 nm for M8U6 and 493.9 m²/g with a pore size of 35.44 nm for FM8U6. Extended X-ray Absorption Fine Structure (EXAFS) spectra confirmed the stability of Ti-O bonds in the frameworks, with bond lengths of 2.026 Å for MIL-125, 1.962 Å for NH₂-MIL-125, and 1.817 Å for GQD@NH₂-MIL-125. These findings highlight the potential of these organometallic frameworks for enhanced liver cancer therapy through precise drug delivery and imaging, representing a significant advancement in nanomaterial applications in biomedical science.
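The d-spacings behind XRD peak positions like those above follow directly from Bragg's law, d = λ / (2 sin θ). The sketch below converts the reported 2θ values for GQD@NH₂-MIL-125 into d-spacings; note that the abstract does not state the radiation source, so Cu Kα (λ = 1.5406 Å) and first-order diffraction are assumed here purely for illustration:

```python
import math

def d_spacing(two_theta_deg: float, wavelength_angstrom: float = 1.5406) -> float:
    """Bragg's law with n = 1: d = lambda / (2 * sin(theta))."""
    theta = math.radians(two_theta_deg / 2)  # 2-theta is the diffraction angle
    return wavelength_angstrom / (2 * math.sin(theta))

# d-spacings for the reported GQD@NH2-MIL-125 reflections (Cu K-alpha assumed)
for two_theta, hkl in [(6.81, "(011)"), (9.76, "(002)"), (11.69, "(121)")]:
    print(f"2θ = {two_theta}°  {hkl}: d ≈ {d_spacing(two_theta):.2f} Å")
```

Larger d-spacings correspond to smaller angles, as expected for the low-angle peaks of a porous framework.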

Keywords: liver cancer cells, metal-organic frameworks, doxorubicin (DOX), drug release

Procedia PDF Downloads 15
176 Genome-Wide Homozygosity Analysis of the Longevous Phenotype in the Amish Population

Authors: Sandra Smieszek, Jonathan Haines

Abstract:

Introduction: Numerous research efforts have focused on searching for ‘longevity genes’. However, attempts to decipher the genetic component of the longevous phenotype have met with limited success, and the mechanisms governing longevity remain to be explained. We conducted a genome-wide homozygosity analysis (GWHA) of the founder population of the Amish community in central Ohio. While genome-wide association studies using unrelated individuals have revealed many interesting longevity-associated variants, these variants are typically of small effect and cannot explain the observed patterns of heritability for this complex trait. The Amish, as a large cohort of extended kinships, are an excellent population for in-depth analysis via a family-based approach. Heritability of longevity increases with age, with a significant genetic contribution being seen in individuals living beyond 60 years of age. In our present analysis, we show that the estimated heritability of longevity increases with age, particularly on the paternal side. Methods: The present analysis integrated both phenotypic and genotypic data and led to the discovery of a series of variants, distinct for populations stratified across ages and distinct for paternal and maternal cohorts. Specifically, 5,437 subjects were analyzed, and a subset of 893 successfully genotyped individuals was used to assess CHIP heritability. We conducted the homozygosity analysis to examine whether homozygosity is associated with an increased chance of living beyond 90. We analyzed the Amish cohort genotyped for 614,957 SNPs. Results: We delineated 10 significant regions of homozygosity (ROH) specific for the age group of interest (>90). Of particular interest was an ROH on chromosome 13, P < 0.0001. The lead SNPs rs7318486 and rs9645914 point to COL4A2 and COL25A1. 
COL4A2 encodes one of the six subunits of type IV collagen; the C-terminal portion of the protein, known as canstatin, is an inhibitor of angiogenesis and tumor growth. COL4A2 mutations have been reported with a broader spectrum of cerebrovascular, renal, ophthalmological, cardiac, and muscular abnormalities. The second region of interest points to IRS2. Furthermore, we built a classifier using the SNPs obtained from the significant ROH regions, with an AUC of 0.945, giving the ability to discriminate individuals living beyond 90 years of age. Conclusion: In conclusion, our results suggest that a family history of longevity does indeed increase the odds of individual longevity. Preliminary results are consistent with the conjecture that the heritability of longevity is substantial when we look at the oldest fifth and smaller percentiles of survival, specifically in males. We will validate all the candidate variants in independent cohorts of centenarians to test whether they are robustly associated with human longevity. The identified regions of interest from the ROH analysis could be of profound importance for understanding the genetic underpinnings of longevity.
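At its core, a homozygosity scan of the kind described above amounts to finding long runs of consecutive homozygous genotype calls along a chromosome. The sketch below is not the authors' pipeline (dedicated tools such as PLINK are normally used, and the minimum-length threshold here is an illustrative assumption); it merely shows the idea on a toy genotype vector coded as alternate-allele counts (0 = homozygous reference, 1 = heterozygous, 2 = homozygous alternate):

```python
def homozygosity_runs(genotypes, min_len=50):
    """Return (start, end) index pairs of runs of homozygous calls
    (0 or 2) spanning at least min_len consecutive SNPs."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g != 1:                      # homozygous call extends the run
            if start is None:
                start = i
        else:                           # heterozygous call ends the run
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(genotypes) - start >= min_len:
        runs.append((start, len(genotypes)))
    return runs

# toy example: a 10-SNP homozygous stretch inside heterozygous flanks
toy = [1, 1] + [0] * 10 + [1, 2, 1]
print(homozygosity_runs(toy, min_len=10))  # → [(2, 12)]
```

Real analyses additionally allow a small number of heterozygous or missing calls per window and convert SNP indices to physical positions in base pairs.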

Keywords: regions of homozygosity, longevity, SNP, Amish

Procedia PDF Downloads 235
175 Advancing Equitable Healthcare for Trans and Gender-Diverse Students: A Community-Based Participatory Action Project

Authors: Al Huuskonen, Clio Lake, K. M. Naude, Polina Petlitsyna, Sorsha Henning, Julia Wimmers-Klick

Abstract:

This project presents the outcomes of a community-based participatory action initiative aimed at advocating for equitable healthcare and human rights for trans, two-spirit, and gender-diverse individuals, building upon the University of British Columbia (UBC) Trans Coalition's ongoing efforts. Participatory Action Research (PAR) was chosen as the research method with the goal of improving trans rights on the UBC campus, particularly regarding equitable access to healthcare. PAR involves active community contribution throughout the research process, which in this case was done by way of liaising with student resource groups and advocacy leaders. The goals of this project were as follows: a) identify gaps in gender-affirming healthcare for UBC students by consulting the community and collaborating with UBC services, b) develop an information package outlining provincial and university-based health insurance for gender-affirming care (including hormone therapy and surgeries), FAQs, and resources for UBC's trans students, c) make this package available to UBC students and other national transgender advocacy organizations. The initiative successfully expanded the UBC AMS Student Health and Dental Plan to include gender-affirming procedural coverage, developed a care access guide for students, and advocated for improved health records inclusivity, mechanisms for trans students to report negative care experiences, and increased access to gender-affirming primary care through the on-campus health clinic. Collaboration with other universities' pride organizations and Trans Care BC yielded positive outcomes through broader coalition building and resource sharing. Ongoing efforts are underway to update provincial policies, particularly through expanding coverage under Fair PharmaCare and addressing the compounding effects of the primary care crisis for trans individuals. 
The project's tangible results include improved trans rights on campus, especially in terms of healthcare access. Expanding healthcare coverage through student care benefits thousands of students, making the ability to undergo important affirming procedures more affordable. Providing students with information on extended coverage options and communication with their doctors further removes barriers to care and positively impacts student wellbeing. This initiative demonstrates the effectiveness of community-based participatory action in advancing equitable healthcare for trans and gender-diverse individuals and serves as a model for other institutions and organizations striving to promote inclusivity and advocate for marginalized populations' rights.

Keywords: equitable healthcare, trans and gender-diverse individuals, inclusivity, participatory action research project

Procedia PDF Downloads 94
174 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are classified, after separation, by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years. 
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
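Savitzky-Golay smoothing, one of the preprocessing steps named above, fits a low-order polynomial in a sliding window and keeps the fitted value at the window center. In practice one would use `scipy.signal.savgol_filter`; the numpy-only sketch below (the window length and polynomial order are illustrative choices, not the authors' settings) shows the principle on a synthetic noisy absorbance band:

```python
import numpy as np

def savgol(y, window=11, order=2):
    """Naive Savitzky-Golay smoother: fit a polynomial of the given
    order in each centered window and keep the fitted center value."""
    half = window // 2
    ypad = np.pad(y, half, mode="edge")         # edge-pad so output matches input length
    x = np.arange(window) - half                # local abscissa centered on 0
    out = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        coeffs = np.polyfit(x, ypad[i:i + window], order)
        out[i] = np.polyval(coeffs, 0)          # polynomial value at window center
    return out

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200)
clean = np.exp(-((grid - 0.5) ** 2) / 0.01)     # a smooth synthetic "absorbance band"
noisy = clean + rng.normal(0, 0.05, grid.size)
smoothed = savgol(noisy)
print(np.abs(smoothed - clean).mean() < np.abs(noisy - clean).mean())  # → True
```

The production implementation precomputes the convolution coefficients once, which is what makes SG filtering fast enough for routine spectra.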

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 133
173 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as well as in other parts of the world, is highly stressed due to factors such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue, leading to farmer suicides. The lack of an integrated farm advisory system in India adds to farmers’ problems. Farmers need the right information during the early stages of a crop’s lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach for crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights learnt on the ImageNet dataset to crop diseases, which reduces the number of epochs needed to learn. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy with images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases. 
The main contribution of our research is to create a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can be easily extended to other crops. Advances in computing technology and the availability of large data sets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.
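The data augmentation step described above (rotation, shift, flips) can be sketched without any deep learning framework; in practice one would use a library utility such as Keras's image augmentation. A minimal numpy-only illustration of the idea (zoom and blur omitted), where each training image yields several geometric variants:

```python
import numpy as np

def augment(img, shift=2):
    """Yield simple augmented variants of an H x W x C image array:
    a rotation, two mirror flips, and a crude horizontal translation."""
    yield np.rot90(img)                 # 90° rotation in the image plane
    yield np.flipud(img)                # vertical flip
    yield np.fliplr(img)                # horizontal flip
    yield np.roll(img, shift, axis=1)   # wrap-around horizontal shift

img = np.arange(2 * 2 * 1).reshape(2, 2, 1)  # tiny 2x2 single-channel "image"
variants = list(augment(img))
print(len(variants))  # → 4
```

Because leaf disease labels are invariant under these transforms, each labeled photo effectively multiplies into several training examples, which is what makes the resulting model more robust to field conditions.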

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 120
172 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field

Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy

Abstract:

Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used in cooking (for flavor and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), health (e.g., thyme essential oil is used against arthritis, to normalize blood pressure, reduce stress on the heart, and treat chest infections and cough), and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered among the most eminent herbs based on their history and medicinal properties. They possess a wide range of activity against different types of bacteria and fungi compared with other oils, in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic at high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under a rotating magnetic field (RMF) at f = 25 Hz. A self-constructed reactor (MAP) was applied for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) against E. coli were determined for the essential oils. The oils were tested at very low concentrations (from 1 to 3 drops of essential oil per 3 mL of working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF, as compared to unexposed controls, causes an increase in the antibacterial efficacy of the tested oils. 
Extended exposure to the RMF at a frequency of f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes in thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL of working suspension). A rapid decrease (>3 log) in bacteria numbers was observed with rosemary oil within 100 minutes (at a concentration of 3 drops of essential oil per 3 mL of working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, it still remains to be investigated how bacteria are killed by EOs treated with an electromagnetic field. A possible mechanism, relying on alteration of the permeability of the ionic channels in the bacterial cell walls that control transport into the cells, was proposed. For further studies, it is proposed to examine other types of essential oils and other antibiotic-resistant bacteria (ARB), which are causing serious concern throughout the world.
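The ">3 log decrease" reported above is the standard log₁₀ reduction in viable counts. A small worked example with hypothetical CFU/mL values (the actual counts are not given in the abstract):

```python
import math

def log_reduction(n0, n):
    """Log10 reduction in viable counts (e.g., CFU/mL):
    log10(initial count) - log10(surviving count)."""
    return math.log10(n0) - math.log10(n)

# hypothetical counts: a drop from 1e6 to 5e2 CFU/mL is a >3 log decrease
print(round(log_reduction(1e6, 5e2), 2))  # → 3.3
```

A 3-log reduction thus corresponds to killing 99.9% of the starting population, which is why it is a common benchmark in disinfection studies.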

Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli

Procedia PDF Downloads 158
171 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories

Authors: Oibar Martinez, Clara Oliver

Abstract:

The Cherenkov Telescope Array (CTA) project aims to build two different observatories of Cherenkov telescopes, located at Cerro Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply the standard Directive on Electromagnetic Compatibility to astronomical observatories. Cherenkov telescopes are able to provide valuable information on both Galactic and extragalactic sources by measuring Cherenkov radiation, which is produced by particles that travel faster than light does in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, the Large-Sized Telescopes (LSTs), are high-precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface. This surface focuses the radiation on a camera composed of an array of high-speed photosensors which are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and the lowest weight, cost and power consumption. Each pixel incorporates a photosensor able to discriminate single photons, together with the corresponding readout electronics. The first LST is already commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must carry a Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were devised for industrial products, whereas no clear protocols have been defined for scientific installations. 
In this paper, we aim to answer the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in both optics and electromagnetism were needed to make these kinds of decisions and to adapt tests, which were designed to be carried out on equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interferences, and of those most likely to cause the maximum disturbances, was also performed. Obtaining the Conformité Européenne mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, one needs to adapt and develop the tests to be carried out. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.

Keywords: CE marking, electromagnetic compatibility, european directive, scientific installations

Procedia PDF Downloads 110
170 Carbon Capture and Storage by Continuous Production of CO₂ Hydrates Using a Network Mixing Technology

Authors: João Costa, Francisco Albuquerque, Ricardo J. Santos, Madalena M. Dias, José Carlos B. Lopes, Marcelo Costa

Abstract:

Nowadays, it is well recognized that carbon dioxide emissions, together with other greenhouse gases, are responsible for the dramatic climate changes that have been occurring over the past decades. Gas hydrates are currently seen as a promising and disruptive set of materials that can be used as a basis for developing new technologies for CO₂ capture and storage (CCS). Their potential as a clean and safe pathway for CCS is tremendous, since they require only water and gas to be mixed under favorable temperatures and moderately high pressures. However, the hydrate formation process is highly exothermic; it releases about 2 MJ per kilogram of CO₂, and it only occurs in a narrow window of operational temperatures (0 - 10 °C) and pressures (15 to 40 bar). Efficient continuous hydrate production within a specific temperature range necessitates high heat transfer rates in the mixing process. Past technologies often struggled to meet this requirement, resulting in low productivity or extended mixing/contact times due to inadequate heat transfer rates, which consistently posed a limitation. Consequently, more effective continuous hydrate production technologies are needed for industrial applications. In this work, a network mixing continuous production technology has been shown to be viable for producing CO₂ hydrates. The structured mixer used throughout this work consists of a network of unit cells comprising mixing chambers interconnected by transport channels. These mixing features result in enhanced heat and mass transfer rates and a high interfacial surface area. The mixer capacity emerges from the fact that, under proper hydrodynamic conditions, the flow inside the mixing chambers becomes fully chaotic and self-sustained oscillatory, inducing intense local laminar mixing. The device presents specific heat transfer rates ranging from 10⁷ to 10⁸ W⋅m⁻³⋅K⁻¹. 
A laboratory-scale pilot installation was built using a device capable of continuously capturing 1 kg⋅h⁻¹ of CO₂ in an aqueous slurry of up to 20% by mass. The strong mixing intensity has proven sufficient to enhance dissolution and initiate hydrate crystallization without the need for external seeding mechanisms, and to achieve CO₂ conversions of 99% at the device outlet. CO₂ dissolution experiments revealed that the overall liquid mass transfer coefficient, ranging from 1,000 to 12,000 h⁻¹, is orders of magnitude larger than in similar devices with the same purpose. The present technology has shown itself capable of continuously producing CO₂ hydrates. Furthermore, the modular characteristics of the technology, where scalability is straightforward, underline the potential development of a modular hydrate-based CO₂ capture process for large-scale applications.
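The figures quoted above allow a quick back-of-envelope check of the heat-removal problem. Assuming a 5 K driving temperature difference (an illustrative value, not stated in the abstract) and the lower end of the reported volumetric heat transfer coefficient range, the mixer volume needed to reject the crystallization heat at 1 kg⋅h⁻¹ of CO₂ is tiny:

```python
# Back-of-envelope heat-removal sizing for continuous hydrate formation,
# using the figures quoted above (2 MJ/kg CO2 released, 1 kg/h captured).
capture_rate_kg_per_h = 1.0
enthalpy_J_per_kg = 2e6                     # ~2 MJ released per kg of CO2
heat_duty_W = capture_rate_kg_per_h * enthalpy_J_per_kg / 3600  # J/h -> W
print(f"heat duty ≈ {heat_duty_W:.0f} W")   # → ≈ 556 W

h_volumetric = 1e7                          # W·m⁻³·K⁻¹, lower end of reported range
delta_T = 5.0                               # K, assumed driving force (illustrative)
volume_m3 = heat_duty_W / (h_volumetric * delta_T)
print(f"required mixer volume ≈ {volume_m3 * 1e6:.0f} mL")  # → ≈ 11 mL
```

The milliliter-scale answer illustrates why such high volumetric heat transfer rates make a compact, continuously operating device plausible.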

Keywords: network, mixing, hydrates, continuous process, carbon dioxide

Procedia PDF Downloads 52
169 Enhancing Students’ Academic Engagement in Mathematics through a “Concept+Language Mapping” Approach

Authors: Jodie Lee, Lorena Chan, Esther Tong

Abstract:

Hong Kong students face a unique learning environment. Starting from the 2010/2011 school year, the Education Bureau (EDB) of the Government of the Hong Kong Special Administrative Region implemented fine-tuned Medium of Instruction (MOI) arrangements for secondary schools. Since then, secondary schools in Hong Kong have been given the flexibility to decide the most appropriate MOI arrangements for their schools under the new academic structure for senior secondary education, particularly for the compulsory part of the mathematics curriculum. In the 2019 Hong Kong Diploma of Secondary Education Examination (HKDSE), over 40% of school-day candidates attempted the Mathematics Compulsory Part examination in the Chinese version, while the rest took the English version. Moreover, only 14.38% of candidates sat for one of the extended mathematics modules. This results in a series of intricate issues for students’ learning in post-secondary education programmes. It is worth noting that when students pursue higher education in Hong Kong or even overseas, they may face substantial difficulties in transitioning from learning mathematics in their mother tongue in Chinese-medium instruction (CMI) secondary schools to an English-medium learning environment. Some students who understood the mathematical concepts were found to fail to fulfill the course requirements at college or university due to their CMI secondary learning experience. They are particularly weak in comprehending mathematics questions in assessments, tests, and examinations. A government-funded project was conducted with the aim of providing an integrated learning context and language support to students with a lower level of numeracy and/or with CMI learning experience. 
By introducing this integrated ‘Concept + Language Mapping’ approach, students can cope with the learning challenges of the compulsory English-medium mathematics and statistics subjects in their tertiary education. The ultimate hope is that students can enhance their mathematical ability, analytical skills, and numerical sense for lifelong learning. The ‘Concept + Language Mapping’ (CLM) approach was adopted and tried out in the bridging courses for students with a lower level of numeracy and/or with CMI learning experiences. At the beginning of each class, a pre-test was conducted, and class time was then devoted to introducing the concepts via the CLM approach. For each concept, the key thematic items and their different semantic relations are presented using graphics and animations. At the end of each class, a post-test was conducted. Quantitative data analysis was performed to study the effect of the CLM approach on students’ learning. Stakeholder feedback was collected to assess the effectiveness of the CLM approach in facilitating both content and language learning. The results, based on both students’ and lecturers’ feedback, indicated positive outcomes of adopting the CLM approach to enhance the mathematical ability and analytical skills of CMI students.
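One common way to quantify the pre-test/post-test effect mentioned above is Hake's normalized gain; the abstract does not state which statistic was used, so the metric and the scores below are purely illustrative:

```python
from statistics import mean

def normalized_gain(pre, post, max_score=100):
    """Hake's normalized gain for paired pre/post test scores:
    (post - pre) / (max_score - pre), averaged over students."""
    return mean((b - a) / (max_score - a) for a, b in zip(pre, post))

# hypothetical class scores before and after a CLM bridging course
pre = [40, 55, 60, 30]
post = [70, 75, 80, 55]
print(round(normalized_gain(pre, post), 2))  # → 0.45
```

Normalizing by the room left for improvement (max_score - pre) makes gains comparable between students who started from very different baselines.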

Keywords: mathematics, Concept+Language Mapping, level of numeracy, medium of instruction

Procedia PDF Downloads 83