Search results for: ground tests

884 Inflation and Deflation of Aircraft's Tire with Intelligent Tire Pressure Regulation System

Authors: Masoud Mirzaee, Ghobad Behzadi Pour

Abstract:

An aircraft tire is designed to tolerate extremely heavy loads for a short duration. The number of tires increases with the weight of the aircraft, as the load needs to be distributed more evenly. Generally, aircraft tires work at high pressure, up to 200 psi (14 bar; 1,400 kPa) for airliners and higher for business jets. For most aircraft categories, tire assemblies are serviced with compressed nitrogen that supports the aircraft’s weight on the ground and provides control of the aircraft during taxi, takeoff, and landing, as well as traction for braking. Accurate tire pressure is a key factor that enables tire assemblies to perform reliably under high static and dynamic loads. With regard to ambient temperature change, when the temperatures at the origin and destination airports differ, tire pressure should be adjusted so that the tire is inflated to the specified operating pressure at the colder airport. This adjustment, which may exceed the normal over-inflation limit of 5 percent that applies at constant ambient temperature, is required so that the inflation pressure remains sufficient to support the load of the specified aircraft configuration. Without this adjustment, a tire assembly would be significantly under- or over-inflated at the destination. Human error in the aviation industry imposes exorbitant costs on airlines for consumable parts such as aircraft tires. An intelligent system that adjusts aircraft tire pressure based on the weight, load, temperature, and weather conditions of the origin and destination airports could therefore significantly reduce aircraft maintenance costs and fuel consumption and mitigate the environmental problems related to air pollution. The intelligent tire pressure regulation system (ITPRS) contains a processing computer, a nitrogen bottle charged to 1,800 psi, and distribution lines. The nitrogen bottle’s inlet and outlet valves are installed in the main landing gear area and are connected through nitrogen lines to the main-wheel and nose-wheel assemblies. Nitrogen control and monitoring are performed by the computer, which adjusts the pressure according to calculations based on the received parameters, including the temperatures of the origin and destination airports, the weight of cargo and passengers, the fuel quantity, and the wind direction. Correct tire inflation and deflation are essential in assuring that tires can withstand the centrifugal forces and heat of normal operations, with an adequate margin of safety for unusual operating conditions such as rejected takeoffs and hard landings. ITPRS will increase the performance of the aircraft in all phases of takeoff, landing, and taxi. Moreover, this system will reduce human errors, consumable material usage, and the stresses imposed on the aircraft body.
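
As an illustration of the temperature-driven correction described above, the following minimal sketch (not the authors' ITPRS implementation; function and parameter names are assumed) applies Gay-Lussac's law at constant volume to estimate the service pressure to set at the origin so that the tire reads its rated pressure at a colder destination:

```python
# Illustrative sketch only: temperature correction of tire pressure at constant
# volume (P/T = const for the enclosed nitrogen). Not the authors' ITPRS code.
def service_pressure_psi(rated_psi: float, t_origin_c: float, t_dest_c: float) -> float:
    """Pressure to set at the origin so the tire reads rated_psi at the destination."""
    t_origin_k = t_origin_c + 273.15
    t_dest_k = t_dest_c + 273.15
    return rated_psi * t_origin_k / t_dest_k

rated = 200.0                                        # psi, typical airliner tire
p_set = service_pressure_psi(rated, 30.0, -10.0)     # warm origin, cold destination
print(f"set {p_set:.1f} psi at origin")              # ~230.4 psi
print(f"over-inflation {100 * (p_set / rated - 1):.1f}%")  # ~15.2%, beyond the 5% limit
```

In practice, ITPRS would also fold in the weight, fuel, and wind parameters listed above; this sketch isolates only the temperature term.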

Keywords: avionic system, improve efficiency, ITPRS, human error, reduced cost, tire pressure

Procedia PDF Downloads 225
883 The Harmonious Blend of Digitalization and 3D Printing: Advancing Aerospace Jet Pump Development

Authors: Subrata Sarkar

Abstract:

The aerospace industry is experiencing a profound product development transformation driven by the powerful integration of digitalization and 3D printing technologies. This paper delves into the significant impact of this convergence on aerospace innovation, specifically focusing on the development of jet pumps for fuel systems. This case study is a compelling example of the immense potential of these technologies. In response to the industry's increasing demand for lighter, more efficient, and customized components, the combined capabilities of digitalization and 3D printing are reshaping how we envision, design, and manufacture critical aircraft parts, offering a distinct paradigm in aerospace engineering. Consider the development of a jet pump for a fuel system, a task that presents unique and complex challenges. Despite its seemingly simple design, the jet pump's development is hindered by many demanding operating conditions. The qualification process for these pumps involves many analyses and tests, leading to substantial delays and increased costs in fuel system development. However, by harnessing the power of automated simulations and integrating legacy design, manufacturing, and test data through digitalization, we can optimize the jet pump's design and performance, thereby revolutionizing product development. Furthermore, 3D printing's ability to create intricate structures using various materials, from lightweight polymers to high-strength alloys, holds the promise of highly efficient and durable jet pumps. The combined impact of digitalization and 3D printing extends beyond design, as it also reduces material waste and advances sustainability goals, aligning with the industry's increasing commitment to environmental responsibility. In conclusion, the convergence of digitalization and 3D printing is not just a technological advancement but a gateway to a new era in aerospace product development, particularly in the design of jet pumps. This revolution promises to redefine how we create aerospace components, making them safer, more efficient, and environmentally responsible. As we stand at the forefront of this technological revolution, aerospace companies must embrace these technologies not merely as a choice but as a strategic imperative for those striving to lead in innovation and sustainability in the 21st century.

Keywords: jet pump, digitalization, 3D printing, aircraft fuel system

Procedia PDF Downloads 35
882 Investigating Visual Statistical Learning during Aging Using the Eye-Tracking Method

Authors: Zahra Kazemi Saleh, Bénédicte Poulin-Charronnat, Annie Vinter

Abstract:

This study examines the effects of aging on visual statistical learning, using eye-tracking techniques to investigate this cognitive phenomenon. Visual statistical learning is a fundamental brain function that enables the automatic and implicit recognition, processing, and internalization of environmental patterns over time. Some previous research has suggested the robustness of this learning mechanism throughout the aging process, underscoring its importance in the context of education and rehabilitation for the elderly. The study included three distinct groups of participants: 21 young adults (Mage: 19.73), 20 young-old adults (Mage: 67.22), and 17 old-old adults (Mage: 79.34). Participants were exposed to a series of 12 arbitrary black shapes organized into 6 pairs, each with different spatial configurations and orientations (horizontal, vertical, and oblique). These pairs were not explicitly revealed to the participants, who were instructed to passively observe 144 grids presented sequentially on the screen for a total duration of 7 min. In the subsequent test phase, participants performed a two-alternative forced-choice task in which they had to identify the more familiar pair in each of 48 trials, each trial consisting of a base pair and a non-base pair. Behavioral analysis using t-tests revealed notable findings. The mean score of the first group was significantly above chance, indicating the presence of visual statistical learning. Similarly, the second group also performed significantly above chance, confirming the persistence of visual statistical learning in young-old adults. Conversely, the third group, consisting of old-old adults, showed a mean score that was not significantly above chance. This lack of statistical learning in the old-old adult group suggests a decline in this cognitive ability with age. Preliminary eye-tracking results showed a decrease in the number and duration of fixations during the exposure phase for all groups. The main difference was that older participants fixated empty grid cells more often than younger participants, likely due to a decline in the ability to ignore irrelevant information, resulting in a decrease in statistical learning performance.
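
The chance-level comparison described above is typically a one-sample t-test of the two-alternative forced-choice scores against 50%. The sketch below illustrates that step only; the score arrays are placeholders, not the study's data:

```python
# One-sample t-tests of 2AFC familiarity scores against the 50% chance level.
# Placeholder scores; not the data reported in the abstract.
import numpy as np
from scipy import stats

groups = {
    "young adults":     np.array([0.60, 0.71, 0.55, 0.65, 0.58]),
    "young-old adults": np.array([0.56, 0.63, 0.52, 0.60, 0.57]),
    "old-old adults":   np.array([0.48, 0.52, 0.50, 0.47, 0.53]),
}
for name, scores in groups.items():
    t, p = stats.ttest_1samp(scores, popmean=0.5)  # chance = 0.5 in a 2AFC task
    print(f"{name}: mean={scores.mean():.2f}, t={t:.2f}, p={p:.3f}")
```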

Keywords: aging, eye tracking, implicit learning, visual statistical learning

Procedia PDF Downloads 60
881 Globalisation, Growth and Sustainability in Sub-Saharan Africa

Authors: Ourvashi Bissoon

Abstract:

Sub-Saharan Africa, in addition to being resource rich, is increasingly seen as having huge growth potential and, as a result, is attracting a growing number of MNEs to its soil. To empirically assess the effectiveness of GDP in tracking sustainable resource use and the role played by MNEs in Sub-Saharan Africa, a panel data analysis has been undertaken for 32 countries over thirty-five years. The time horizon spans the period 1980-2014 to reflect the evolution from before the publication of the pioneering Brundtland report on sustainable development to date. Multinationals’ presence is proxied by the level of FDI stocks. The empirical investigation first focuses on the impact of trade openness and MNE presence on the traditional measure of economic growth, namely the GDP growth rate; then on the genuine savings (GS) rate, a measure of weak sustainability developed by the World Bank which assumes substitutability between different forms of capital; and finally on the adjusted net national income (aNNI), a measure of green growth that accounts for the depletion of natural resources. For countries with significant exhaustible natural resources and important foreign investor presence, the adjusted net national income (aNNI) can be a better indicator of economic performance than GDP growth (World Bank, 2010). The issues of potential endogeneity and reverse causality are also addressed, in addition to robustness tests. The findings indicate that FDI and openness contribute significantly and positively to the GDP growth of the countries in the sample; however, there is a threshold level of institutional quality below which FDI has a negative impact on growth. When the GDP growth rate is replaced by the GS rate, a natural resource curse becomes evident. The rents being generated from the exploitation of natural resources are not being re-invested into other forms of capital, namely human and physical capital. FDI and trade patterns may be setting the economies in the sample on an unsustainable path of resource depletion. The resource curse is confirmed when utilising the aNNI as well, thus implying that the GDP growth measure may not be reliable for capturing sustainable development.
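
A hedged sketch of the kind of fixed-effects panel specification implied by the threshold finding is shown below. Column names, the data file, and the FDI x institutional-quality interaction are assumptions for illustration; the abstract does not state the authors' exact specification, instruments, or estimator:

```python
# Illustrative fixed-effects growth regression with an FDI x institutions
# interaction (threshold reading). Hypothetical column and file names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ssa_panel_1980_2014.csv")  # hypothetical: one row per country-year

model = smf.ols(
    "gdp_growth ~ fdi_stock + openness + fdi_stock:institutions + C(country) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": pd.factorize(df["country"])[0]})
print(model.summary())

# Institutional-quality threshold where the marginal effect of FDI turns positive:
# solve b_fdi + b_interaction * institutions = 0.
b = model.params
print("threshold:", -b["fdi_stock"] / b["fdi_stock:institutions"])
```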

Keywords: FDI, sustainable development, genuine savings, sub-Saharan Africa

Procedia PDF Downloads 200
880 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change

Authors: Damian Islas

Abstract:

Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that, since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that, since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. One way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of the human mind. Semantically, scientific realism assumes that theoretical claims about reality have truth values and thus should be construed literally. Epistemologically, scientific realism holds that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. Structural realism represents a middle-ground position between the two, according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of SR: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist, but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply from the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtly different versions of the ESR and OSR stances, which show a progression, from eliminativist OSR to moderate OSR, of gradually increasing ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to asserting that those relations between unobserved entities exist.

Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism

Procedia PDF Downloads 103
879 Absence of Malignancy in Oral Epithelial Cells from Individuals Occupationally Exposed to Organic Solvents Working in the Shoe Industry

Authors: B. González-Yebra, B. Flores-Nieto, P. Aguilar-Salinas, M. Preciado Puga, A. L. González Yebra

Abstract:

The monitoring of populations occupationally exposed to organic solvents has been an important issue for several shoe factories for years, since the International Agency for Research on Cancer (IARC) has advised on the potential carcinogenic risk of occupation-related chemicals. In order to detect whether exposure to the organic solvents used in some Mexican shoe factories contributes to oral carcinogenesis, we performed monitoring in three factories. Occupational exposure was determined using 3M monitors. Organic solvents were assessed by gas chromatography. We then recruited 30 shoe workers (30.2 ± 8.4 years) and 10 unexposed subjects (43.3 ± 11.2 years) for the micronuclei (MN) test and immunodetection of some cancer biomarkers (Ki-67, p16, caspase-3) in scraped oral epithelial cells. The monitored solvents detected were acetone, benzene, hexane, methyl ethyl ketone, and toluene, at acceptable levels according to the Official Mexican Norm. The MN test showed a higher incidence of nuclear abnormalities (karyorrhexis, pycnosis, karyolysis, condensed chromatin, and macronuclei) in the exposed group than in the non-exposed group. On the other hand, we found negative expression of Ki-67 and p16 in exfoliated epithelial cells from both subjects exposed and not exposed to organic solvents. Only caspase-3 showed a positive pattern of expression, in 9/30 (30%) exposed subjects, and we detected a high incidence of karyolysis in caspase-3-positive subjects (p = 0.021). The absence of expression of the proliferation markers p16 and Ki-67 and the presence of the apoptosis marker caspase-3 indicate the absence of malignancy in oral epithelial cells and a low risk of oral cancer. The MN test is a very effective method to detect nuclear abnormalities in exfoliated buccal cells from subjects who have been exposed to organic solvents in the shoe industry. However, in order to improve this tool and predict cancer risk, it is necessary to implement complementary tests, such as other biomarkers, that can help detect malignancy in occupationally exposed individuals.

Keywords: biomarkers, oral cancer, organic solvents, shoe industries

Procedia PDF Downloads 117
878 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has a very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. There are also other beneficial effects of biomass energy use, such as employment creation and pollutant reduction. However, most biomass materials are not capable of competing with fossil fuels in terms of energy content. The high moisture content and high volatile matter yield of biomass make it a low-calorific fuel, which is a significant drawback compared with fossil fuels. Besides, the density of biomass is generally low, which brings difficulty in transportation and storage. These negative aspects of biomass can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a thermal process, in which biomass is heated up to 300°C under non-oxidizing conditions to avoid burning of the material. The treated biomass is called biochar and has considerably lower contents of moisture, volatile matter, and oxygen compared to the parent biomass. Accordingly, the carbon content and the calorific value of biochar increase to a level comparable with that of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay of the structure, is mostly eliminated, and the surface properties of biochar turn hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species, such as olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree), were chosen. The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size lower than 250 µm. Then, the samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300°C at a heating rate of 10°C/min. The biochars obtained from this process were also tested by the methods applied to the parent biomass species, and the improvement in fuel properties was interpreted. Increasing torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300°C, whereas torrefaction at 250°C was found to be optimum for Rhododendron and ash tree, since torrefaction at 300°C had a detrimental effect on their HHV. The increase in carbon content and the reduction in oxygen content were also determined. The burning characteristics of the biochars were studied using thermal analysis. For this purpose, a TA Instruments SDT Q600 thermal analyzer was used, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass and that the resulting biochars have superior characteristics compared to the parent biomasses.

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 360
877 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4-20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) was measured (77-450 K) and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened, with a FWHM of 28 meV (33 nm), and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized after traveling over 1 mm through the WG, at about 3 cm⁻¹. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
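
A back-of-envelope check (ours, not the authors') shows how the quoted ~7% collection efficiency follows from the numbers stated above, ignoring the photodiode quantum efficiency, which is not quoted:

```python
# Collection efficiency estimate using only figures stated in the abstract.
alpha_energy_mev = 5.5
light_yield_per_mev = 240_000          # photons/MeV quoted for the QD medium
collected_electrons = 8.4e4

photons_generated = alpha_energy_mev * light_yield_per_mev   # ~1.32e6 photons
efficiency = collected_electrons / photons_generated
print(f"{efficiency:.1%}")             # ~6.4%, consistent with the stated ~7%
```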

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 243
876 Exercise in Extreme Conditions: Leg Cooling and Fat/Carbohydrate Utilization

Authors: Anastasios Rodis

Abstract:

Background: Case studies of walkers, climbers, and campers exposed to cold and wet conditions without water/windproof limb protection revealed experiences of muscle weakness and fatigue. It is reasonable to assume that part of the fatigue could be due to an alteration in substrate utilization, since the reduction of performance in extreme cold conditions may partially be explained by higher anaerobic glycolysis, reflected in higher carbohydrate oxidation and an increased accumulation rate of blood lactate. The aim of this study was to assess the effects of pre-exercise lower limb cooling on the rate of substrate utilization during sub-maximal exercise. Method: Six male university students (mean (SD): age, 21.3 (1.0) yr; maximal oxygen uptake (VO₂ max), 49.6 (3.6) ml.min⁻¹; and percentage of body fat, 13.6 (2.5) %) were examined in random order after either 30 min of cold water (12°C) immersion up to the gluteal fold, utilized as the cooling strategy, or under control conditions (no precooling), with tests separated by a minimum of 7 days. Exercise consisted of 60 min of cycling at 50% VO₂ max in a thermoneutral environment of 20°C. Subjects were also required to record a diet diary over the 24 hrs prior to each trial. The means (SD) for the three macronutrients during the day prior to each trial (expressed as a percentage of total energy) were 52 (3)% carbohydrate, 31 (4)% fat, and 17 (2)% protein. Results: The responses to lower limb cooling relative to the control trial during exercise were as follows: 1) carbohydrate (CHO) oxidation and blood lactate (Bₗₐc) concentration were significantly higher (P < 0.05); 2) rectal temperature (Tᵣₑc) was significantly higher (P < 0.05), but skin temperature was significantly lower (P < 0.05); no significant differences were found in blood glucose (Bg), heart rate (HR), or oxygen consumption (VO₂). Discussion: These data suggest that lower limb cooling prior to submaximal exercise shifts metabolic processes from fat oxidation to CHO oxidation. This shift from fat to CHO oxidation probably has important implications in a survival scenario, since people facing accidental localized cooling of their limbs, either through wading or falling into cold water or snow, have to rely on CHO availability even if they do not perform high-intensity activity.
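
Substrate oxidation rates of the kind reported above are commonly estimated from expired-gas measurements using Frayn's (1983) indirect-calorimetry equations; the abstract does not state the exact calculation used, so the sketch below is an assumed illustration with made-up gas-exchange values:

```python
# Frayn (1983) stoichiometric equations (protein oxidation neglected),
# VO2 and VCO2 in litres/min. Illustrative values only; not the study's data.
def substrate_oxidation(vo2_l_min: float, vco2_l_min: float) -> tuple[float, float]:
    cho_g_min = 4.55 * vco2_l_min - 3.21 * vo2_l_min   # carbohydrate oxidation, g/min
    fat_g_min = 1.67 * vo2_l_min - 1.67 * vco2_l_min   # fat oxidation, g/min
    return cho_g_min, fat_g_min

# Cycling at ~50% VO2 max: lower RER (control) vs higher RER (leg cooling).
for label, vo2, vco2 in [("control", 1.75, 1.54), ("leg cooling", 1.75, 1.63)]:
    cho, fat = substrate_oxidation(vo2, vco2)
    print(f"{label}: CHO {cho:.2f} g/min, fat {fat:.2f} g/min")
```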

Keywords: exercise in wet conditions, leg cooling, outdoors exercise, substrate utilization

Procedia PDF Downloads 427
875 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder

Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg

Abstract:

Car accidents are a problem worldwide. Adolescents’ involvement in car accidents is higher in comparison to the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver’s license. Participants’ mean age was 16.1 (SD=.23). As part of the experiment, they all completed a driving simulation session while their eye movements were monitored. Data were recorded by an eye tracker: the entire driving session was recorded, registering the participant’s exact gaze position directly on the screen. Eye movements and simulator data were analyzed using Matlab (Mathworks, USA). Participants’ cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement-disengagement of spatial attention compared to that of non-ADHD participants, as well as lower attention flexibility, which likely affects driving. Additionally, the lower the results on the cognitive tests, the worse the driving performance was. None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.

Keywords: ADHD, attentional shifting, driving simulator, eye movements

Procedia PDF Downloads 305
874 Intriguing Modulations in the Excited State Intramolecular Proton Transfer Process of Chrysazine Governed by Host-Guest Interactions with Macrocyclic Molecules

Authors: Poojan Gharat, Haridas Pal, Sharmistha Dutta Choudhury

Abstract:

Tuning the photophysical properties of guest dyes through host-guest interactions involving macrocyclic hosts has been an attractive research area for the past few decades, as the resulting changes can be directly implemented in chemical sensing, molecular recognition, fluorescence imaging, and dye laser applications. Excited-state intramolecular proton transfer (ESIPT) is an intramolecular prototautomerization process displayed by some specific dyes. The process is quite amenable to tuning by the presence of different macrocyclic hosts. The present study explores the interesting effects of p-sulfonatocalix[n]arene (SCXn) and cyclodextrin (CD) hosts on the excited-state prototautomeric equilibrium of Chrysazine (CZ), a model antitumour drug. CZ exists exclusively in its normal form (N) in the ground state. However, in the excited state, the excited N* form undergoes ESIPT along its pre-existing intramolecular hydrogen bonds, giving the excited-state prototautomer (T*). Accordingly, CZ shows a single absorption band due to the N form but two emission bands due to the N* and T* forms. Facile prototautomerization of CZ is considerably inhibited when the dye is bound to SCXn hosts. However, in spite of the lower binding affinity, the inhibition is more profound with the SCX6 host than with the SCX4 host. For the CD-CZ system, while the prototautomerization process is hindered by the presence of βCD, it remains unaffected in the presence of γCD. The reduction in the prototautomerization of CZ by the SCXn and βCD hosts is unusual, because the T* form is less dipolar in nature than the N* form; hence, binding of CZ within the relatively hydrophobic host cavities should have enhanced the prototautomerization process. At the same time, considering the similar chemical nature of the two CD hosts, their effects on the prototautomerization of CZ would also have been expected to be similar. The atypical effects of the studied hosts on the prototautomerization of CZ are suggested to arise from the partial inclusion or external binding of CZ with the hosts. As a result, there is a strong possibility of intermolecular H-bonding interactions between the CZ dye and the functional groups present at the portals of the SCXn and βCD hosts. Formation of these intermolecular H-bonds effectively weakens the pre-existing intramolecular H-bonding network within the CZ molecule, and this consequently reduces the prototautomerization of the dye. Our results suggest that, rather than the binding affinity between the dye and the host, it is the orientation of CZ in the case of the SCXn-CZ complexes and the binding stoichiometry in the case of the CD-CZ complexes that play the predominant role in influencing the prototautomeric equilibrium of the dye CZ. In the case of the SCXn-CZ complexes, the experimental findings are well supported by quantum chemical calculations. Similarly, for the CD-CZ systems, the binding stoichiometries obtained through geometry optimization studies of the complexes between CZ and the CD hosts correlate nicely with the experimental results. Geometry optimization studies reveal the formation of βCD-CZ complexes with 1:1 stoichiometry and of γCD-CZ complexes with 1:1, 1:2, and 2:2 stoichiometries, and these results are in good accordance with the observed effects of the βCD and γCD hosts on the ESIPT process of the CZ dye.

Keywords: intermolecular proton transfer, macrocyclic hosts, quantum chemical studies, photophysical studies

Procedia PDF Downloads 103
873 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close to the Upper Limit of Its Growth

Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Woody vegetation at the upper limit of its habitat is a sensitive indicator of the biota's reaction to regional climate change. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon selected data from ground-level measurements and ultra-high-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow obtaining probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40, and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed the distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs made by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as 16980, 51740, and 19889 trees, respectively. The results demonstrate that two processes can be observed over recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within patches of wood, woodlands, and sparse stands; and second, expansion into the mountain tundra. The current expansion of the Siberian larch in the region replaced the depopulation process that occurred in the course of the Little Ice Age, from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using data from field measurements of Siberian larch biometric parameters (including height, diameter at the root collar and at 1.3 meters, and the maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model including two layers with three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection. Tree height and crown diameters can in turn be assessed quantitatively using data from aerial photographs and lidar scans. The resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundaries in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
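
A hedged sketch of the small network described above, with two hidden layers of three and two neurons mapping height and maximum crown projection to age, is shown below. scikit-learn is used here as a stand-in for the authors' tooling, and the training arrays are placeholders for the field measurements:

```python
# Two-hidden-layer (3, 2) regressor estimating larch age from height and crown
# projection. Placeholder data; not the authors' implementation or measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[0.4, 0.3], [1.2, 0.8], [3.5, 1.6], [6.0, 2.4], [9.5, 3.1]])  # [height m, crown m]
y = np.array([5, 12, 30, 55, 80])                                           # age, years

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3, 2), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[2.0, 1.0]]))  # estimated age for a 2 m tree with a 1 m crown
```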

Keywords: treeline, dynamic, climate, modeling

Procedia PDF Downloads 45
872 Evaluation of the Efficacy of Surface Hydrophobisation and Properties of Composite Based on Lime Binder with Flax Fillers

Authors: Stanisław Fic, Danuta Barnat-Hunek, Przemysław Brzyski

Abstract:

The aim of the study was to evaluate the possibility of applying a modified lime binder together with natural flax fibers and straw to the production of wall blocks for use in the energy-efficient construction industry, and to develop proposals for technological solutions. The following laboratory tests were performed: analysis of the physical characteristics of the tested materials (bulk density, total porosity, and thermal conductivity), compressive strength, a water droplet absorption test, water absorption of samples, diffusion of water vapor, and analysis of the structure by SEM. In addition, the process of surface hydrophobisation was analyzed. The effectiveness of two formulations differing in the degree of hydrolytic polycondensation, viscosity, and concentration was examined, as these are the factors that determine the final impregnation effect. Four composites, differing in composition, were produced. Thanks to the presence of flax straw and fibers, the composites showed low bulk density, in the range from 0.44 to 1.29 g/cm³, and thermal conductivity between 0.13 W/mK and 0.22 W/mK. Compressive strength ranged from 0.45 MPa to 0.65 MPa. The analysis of the results allowed observing the relationship between the formulations and the physical properties of the composites. The results on the effectiveness of hydrophobisation of the composites after 2 days showed a decrease in water absorption. Depending on the formulation, after 2 days the water absorption ratio WH of the composites was from 15 to 92% (the effectiveness of hydrophobisation was correspondingly from 8 to 85%). In practice, preparations based on organic solvents often seal the surface, hindering the diffusion of water vapor from the material, but the studies showed good water vapor permeability of the hydrophobic silicone coating. The conducted pilot study demonstrated the possibility of applying flax composites. The article shows that the reduction of CO2 produced in the building process can be achieved by using natural materials for building components whose quality is not inferior to that of commonly used materials.

Keywords: ecological construction, flax fibers, hydrophobisation, lime

Procedia PDF Downloads 319
871 Poly(N-Vinylcaprolactam-Co-Itaconic Acid-Co-Ethylene Glycol Dimethacrylate)-Based Microgels Embedded in Chitosan Matrix for Controlled Release of Ketoprofen

Authors: Simone F. Medeiros, Jessica M. Fonseca, Gizelda M. Alves, Danilo M. Santos, Sérgio P. Campana-Filho, Amilton M. Santos

Abstract:

Stimuli-responsive and biocompatible hydrogel nanoparticles have gained special attention as systems for potential application in the controlled release of drugs, to improve their therapeutic efficacy while minimizing side effects. In this work, novel solid dispersions based on thermo- and pH-responsive poly(N-vinylcaprolactam-co-itaconic acid-co-ethylene glycol dimethacrylate) hydrogel nanoparticles embedded in chitosan matrices were prepared via spray drying for the controlled release of ketoprofen. Firstly, the hydrogel nanoparticles containing ketoprofen were prepared via precipitation polymerization, and their stimuli-responsive behavior, thermal properties, chemical composition, encapsulation efficiency, and morphology were characterized. Then, hydrogel nanoparticles with different particle sizes were embedded into chitosan matrices via spray-drying. Scanning electron microscopy (SEM) analyses were performed to investigate particle size, dispersity, and morphology. Finally, ketoprofen release profiles were studied as a function of pH and temperature. Chitosan/poly(NVCL-co-IA-co-EGDMA)-ketoprofen microparticles presented a spherical shape, a rough surface, and pronounced agglomeration, indicating that the hydrogel nanoparticles loaded with ketoprofen modified the surface of the chitosan matrix. The maximum encapsulation efficiency of ketoprofen in the hydrogel nanoparticles was 57.8%, and the electrostatic interactions between amino groups from chitosan and carboxylic groups from the hydrogel nanoparticles were able to control ketoprofen release. The hydrogel nanoparticles themselves were capable of retarding the release of ketoprofen for up to 48 h in in vitro release tests, while their incorporation into the chitosan matrix achieved a maximum percentage of drug release of 45%, using a chitosan:poly(NVCL-co-IA-co-EGDMA) mass ratio of 10:7, and 69%, using a chitosan:poly(NVCL-co-IA-co-EGDMA) mass ratio of 5:2.

Keywords: hydrogel nanoparticles, poly(N-vinylcaprolactam-co-itaconic acid-co-ethylene glycol dimethacrylate), chitosan, ketoprofen, spray-drying

Procedia PDF Downloads 245
870 Reliability and Availability Analysis of a Satellite Data Reception System Using Reliability Modeling

Authors: Ch. Sridevi, S. P. Shailender Kumar, B. Gurudayal, A. Chalapathi Rao, K. Koteswara Rao, P. Srinivasulu

Abstract:

System reliability and system availability evaluation play a crucial role in ensuring the seamless operation of a complex satellite data reception system with consistent performance over long periods. This paper presents a novel approach to this evaluation using a case study on one of the antenna systems at a satellite data reception ground station in India. The methodology involves analyzing the system's components and their failure rates and the system's architecture, generating a logical reliability block diagram model, and estimating the reliability of the system from the component-level mean time between failures, assuming an exponential distribution, to derive a baseline estimate of the system's reliability. The model is then validated against system-level field failure data collected from the operational satellite data reception systems, which include the failures that occurred, failure times, the criticality of each failure, and repair times, using statistical techniques such as median rank, regression, and Weibull analysis to extract meaningful insights regarding failure patterns and the practical reliability of the system and to assess the accuracy of the developed reliability model. The study mainly focused on the identification of critical units within the system, which are prone to failures and have a significant impact on overall performance, and produced a reliability model of the identified critical unit. This model takes into account the interdependencies among system components and their impact on overall system reliability and provides valuable insights into the performance of the system, making it possible to understand the improvement or degradation of the system over a period of time; it will be a vital input for arriving at an optimized design for future development. It also provides a plug-and-play framework to understand the effect on system performance of any upgrades or new unit designs. It helps in effective planning and in formulating contingency plans to address potential system failures, ensuring the continuity of operations. Furthermore, to instill confidence in system users, the duration for which the system can operate continuously with the desired level of 3-sigma reliability was estimated, which turned out to be a vital input to the maintenance plan. System availability and station availability were also assessed by considering clash and non-clash scenarios to determine the overall system performance and potential bottlenecks. Overall, this paper establishes a comprehensive methodology for the reliability and availability analysis of complex satellite data reception systems. The results derived from this approach facilitate effective planning of contingency measures, provide users with confidence in system performance, and enable decision-makers to make informed choices about system maintenance, upgrades, and replacements. The approach also aids in identifying critical units and assessing system availability in various scenarios and helps in minimizing downtime and optimizing resource allocation.
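
A minimal sketch of the baseline estimate described above is given below: exponential component reliabilities R_i(t) = exp(-t / MTBF_i) combined through a series reliability block diagram, plus steady-state availability MTBF / (MTBF + MTTR). The component names and MTBF/MTTR values are hypothetical placeholders, and the series structure is an assumed simplification of the actual block diagram:

```python
# Series RBD with exponential failure times; hypothetical component data.
import math

components = {            # name: (MTBF hours, MTTR hours)
    "antenna servo":  (20_000, 8),
    "feed/LNA":       (50_000, 4),
    "down-converter": (40_000, 6),
    "demodulator":    (30_000, 5),
}

mission_time_h = 8_760    # one year of continuous operation

system_reliability = math.prod(
    math.exp(-mission_time_h / mtbf) for mtbf, _ in components.values()
)
system_availability = math.prod(
    mtbf / (mtbf + mttr) for mtbf, mttr in components.values()
)
print(f"R(1 year) = {system_reliability:.3f}")
print(f"steady-state availability = {system_availability:.5f}")
```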

Keywords: exponential distribution, reliability modeling, reliability block diagram, satellite data reception system, system availability, weibull analysis

Procedia PDF Downloads 69
869 Geological, Engineering Geological, and Hydrogeological Characteristics of the Knowledge Economic City, Al Madinah Al Munawarah, KSA

Authors: Mutasim A. M. Ez Eldin, Tareq Saeid Al Zahrani, Gabel Zamil Al-Barakati, Ibrahim Mohamed AlHarthi, Marwan Mohamed Al Saikhan, Waleed Abdel Aziz Al Aklouk, Waheed Mohamed Saeid Ba Amer

Abstract:

The Knowledge Economic City (KEC) of Al Madinah Al Munawarah is one of the major projects and represents a cornerstone of the new development activities for Al Madinah. The study area contains different geological units dominated by basalt and overlain by surface deposits. The surface soils vary in thickness and can be classified into well-graded SAND with silt and gravel (SW-SM), silty SAND with gravel (SM), silty GRAVEL with sand (GM), and sandy SILTY clay (CL-ML). The subsurface soil obtained from the drilled boreholes can be classified into poorly graded GRAVEL (GP), well-graded GRAVEL with sand (GW), poorly graded GRAVEL with silt (GP-GM), silty CLAYEY gravel with sand (GC-GM), silty SAND with gravel (SM), SILT with sand (ML), silty CLAY with sand (CL-ML), sandy lean CLAY (CL), and lean CLAY (CL). The relative density of the deposit and the different gravel sizes intercalated with the soil influenced the Standard Penetration Test (SPT) values. The SPT N values are high and approach refusal even at shallow depths. Shallow refusal depths (0.10 to 0.90 m) were observed in the Dynamic Cone Penetration Tests (DCPT). Generally, the soil can be described as inactive, with low plasticity and dense to very dense consistency. The basalt of the KEC site is characterized by slight (W2) to moderate (W3) weathering, its strength ranges from moderate (S4) to very strong (S2), and the Rock Quality Designation (RQD) ranges from very poor (R5) to excellent (R1). The engineering geological map of the KEC characterized the geoengineering properties of the soil and rock materials and classified them into many zones. The high sulphate (SO₄²⁻) and chloride (Cl⁻) contents in the groundwater call for protective measures for the foundation concrete. The current study revealed that geohazard mitigation measures concerning floods, volcanic eruptions, and earthquakes should be taken into consideration.

Keywords: engineering geology, KEC, petrographic description, rock and soil investigations

Procedia PDF Downloads 64
868 Supercritical Hydrothermal and Subcritical Glycolysis Conversion of Biomass Waste to Produce Biofuel and High-Value Products

Authors: Chiu-Hsuan Lee, Min-Hao Yuan, Kun-Cheng Lin, Qiao-Yin Tsai, Yun-Jie Lu, Yi-Jhen Wang, Hsin-Yi Lin, Chih-Hua Hsu, Jia-Rong Jhou, Si-Ying Li, Yi-Hung Chen, Je-Lueng Shie

Abstract:

Raw food waste has a high water content; if it is incinerated, the cost of treatment increases. Therefore, composting or energy recovery is usually used. Although there are mature technologies for composting food waste, odor, wastewater, and other problems are serious, and the output of compost products is limited. Bakelite, in turn, is mainly used in the manufacture of integrated circuit boards. It is hard to recycle and reuse directly because of its hard structure, and it is also difficult to incinerate, producing air pollutants due to incomplete incineration. In this study, supercritical hydrothermal and subcritical glycolysis thermal conversion technology is used to convert biomass wastes, namely bakelite and raw kitchen waste, into carbon materials and biofuels. Batch carbonization tests are performed under the high-temperature and high-pressure conditions of the solvents and under different operating conditions, including wet- and dry-based mixed biomass. This study can be divided into two parts. In the first part, bakelite waste is used as the dry-based industrial waste, and in the second part, raw kitchen wastes (lemon, banana, watermelon, and pineapple peel) are used as the wet-based biomass. The parameters include reaction temperature, reaction time, mass-to-solvent ratio, and volume filling rate. The yields, conversions, and recovery rates of the products (solid, gas, and liquid) are evaluated and discussed. The results explore the benefits of the synergistic effects of thermal glycolysis dehydration and carbonization on the yield and recovery rate of the solid products. The purpose is to obtain the optimum operating conditions. This technology is a biomass-negative carbon technology (BNCT); if it is combined with carbon capture and storage (BECCS), it can provide a new direction for 2050 net zero carbon dioxide emissions (NZCDE).

Keywords: biochar, raw food waste, bakelite, supercritical hydrothermal, subcritical glycolysis, biofuels

Procedia PDF Downloads 160
867 Whole Exome Sequencing in Characterizing Mysterious Crippling Disorder in India

Authors: Swarkar Sharma, Ekta Rai, Ankit Mahajan, Parvinder Kumar, Manoj K Dhar, Sushil Razdan, Kumarasamy Thangaraj, Carol Wise, Shiro Ikegawa M.D., K.K. Pandita M.D.

Abstract:

Rare disorders are poorly understood; hence, they remain uncharacterized, or patients are misdiagnosed and receive poor medical attention. A rare, mysterious skeletal disorder that remained unidentified for decades and rendered many people physically challenged and disabled for life has been reported in ‘Arai’, an isolated remote village of the Poonch district of Jammu and Kashmir. This village is located deep in the mountains, and the population residing in the region is highly consanguineous. In our survey of the region, 70 affected people showing a similar phenotype were reported in the village, which has a population of approximately 5000 individuals. We were able to collect samples from two multigenerational extended families from the village. Through whole exome sequencing (WES), we identified a rare variation, NM_003880.3:c.156C>A (NP_003871.1:p.Cys52Ter), which results in the introduction of a premature stop codon in the WISP3 gene. We found this variation perfectly segregating with the disease in one of the families. However, this variation was absent in the other family. Interestingly, a novel splice-site mutation at position c.643+1G>A of the WISP3 gene, perfectly segregating with the disease, was observed in the second family. Thus, exploiting WES and putting different lines of evidence together (familial histories and genetic data, clinical features, and radiological and biochemical tests and findings), the disease has finally been diagnosed as a very rare recessive hereditary skeletal disease, “Progressive Pseudorheumatoid Arthropathy of Childhood” (PPAC), also known as “Spondyloepiphyseal Dysplasia Tarda with Progressive Arthropathy” (SEDT-PA). This genetic characterization and the identification of the disease-causing mutations will aid in the genetic counseling critically required to curb this rare disorder and to prevent its appearance in future generations of the population. Further, understanding the role of the WISP3 gene in the relevant biological pathways should help in developing a treatment for the disorder.

Keywords: whole exome sequencing, Next Generation Sequencing, rare disorders

Procedia PDF Downloads 398
866 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary, stochastic search process are driven through the application of both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in the application of an evolutionary process as a design tool is the ability of the simulation to maintain variation among the design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the ‘golden rule’ of balancing exploration and exploitation over time; the difficulty of achieving this balance in the simulation is due to the tendency of either variation or optimization to be favored as the simulation progresses. In such cases, the generated population of candidate solutions has either optimized very early in the simulation or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the ‘golden rule’ by incorporating a mathematical fitness criterion for the development of an urban tissue comprising the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation factor. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally employed to measure the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented will aim to clarify the extent to which utilizing the standard deviation factor as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe when compared to conventional simulations that incorporate only architectural and environmental parameters.
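
The toy sketch below illustrates one plausible reading of using the standard deviation generatively rather than analytically, and is not the authors' algorithm: selection rewards both an individual's objective value and its contribution to population spread (its deviation from the population mean, the quantity the standard deviation aggregates), so the run resists premature convergence. The encoding and objective are placeholders for the urban-tissue parameters discussed above:

```python
# Toy diversity-aware evolutionary loop; placeholder objective and encoding.
import random
import statistics

def objective(x: float) -> float:          # placeholder design objective (maximize)
    return -(x - 3.0) ** 2

def evolve(pop_size=40, generations=60, spread_weight=0.5):
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        mean = statistics.fmean(pop)
        # selection score = exploitation (objective) + exploration (deviation
        # from the population mean, which the standard deviation summarizes)
        parents = sorted(pop,
                         key=lambda x: objective(x) + spread_weight * abs(x - mean),
                         reverse=True)[: pop_size // 2]
        pop = [p + random.gauss(0.0, 0.5) for p in parents for _ in range(2)]
    return max(pop, key=objective), statistics.stdev(pop)

best, spread = evolve()
print(f"best ~ {best:.2f}, remaining population spread ~ {spread:.2f}")
```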

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 119
865 Parametric Study of a Washing Machine to Develop an Energy Efficient Program Regarding the Enhanced Washing Efficiency Index and Microorganism Removal Performance

Authors: Pelin Yilmaz, Gizemnur Yildiz Uysal, Emine Birci, Berk Özcan, Burak Koca, Ehsan Tuzcuoğlu, Fatih Kasap

Abstract:

The development of Energy Efficient Programs (EEP) is one of the most significant trends in the wet appliance industry in recent years. Thanks to EEPs, the energy consumption of a washing machine, one of the most energy-consuming home appliances, can shrink considerably, while its washing performance and textile hygiene remain almost unchanged. The goal of the present study is to achieve an optimum EEP algorithm providing excellent textile hygiene as well as cleaning performance in a domestic washing machine. In this regard, a steam-pretreated cold wash approach combined with an innovative algorithm solution and a relatively short washing cycle duration was implemented. For the parametric study, steam exposure time, washing load, total water consumption, main-wash time, and spinning rpm, as the significant parameters affecting textile hygiene and cleaning performance, were investigated within a Design of Experiments study using the Minitab 2021 statistical program. For the textile hygiene studies, specific loads containing cotton carriers contaminated with Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa bacteria were washed. The microbial removal performance of the designed programs was then expressed as the log reduction, calculated as the difference of the microbial counts per ml of the liquids in which the cotton carriers were placed before and after washing. For the cleaning performance studies, tests were carried out with various types of detergents and the EMPA Standard Stain Strip. According to the results, the optimum EEP program provided an excellent hygiene performance of more than a 2-log reduction of microorganisms and a perfect Washing Efficiency Index (Iw) of 1.035, which is greater than the value specified by EU ecodesign regulation 2019/2023.
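
A tiny sketch of the hygiene metric described above follows: log reduction as the difference of log10 microbial counts (per ml) before and after washing, checked against the 2-log target. The counts below are placeholders, not measured values:

```python
# Log-reduction calculation and pass/fail check against the >= 2-log target.
import math

def log_reduction(cfu_per_ml_before: float, cfu_per_ml_after: float) -> float:
    return math.log10(cfu_per_ml_before) - math.log10(cfu_per_ml_after)

for organism, before, after in [
    ("Escherichia coli",       1.0e6, 4.0e3),
    ("Staphylococcus aureus",  1.0e6, 8.0e3),
    ("Pseudomonas aeruginosa", 1.0e6, 2.0e3),
]:
    lr = log_reduction(before, after)
    print(f"{organism}: {lr:.2f} log reduction, pass={lr >= 2.0}")
```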

Keywords: washing machine, energy efficient programs, hygiene, washing efficiency index, microorganism, escherichia coli, staphylococcus aureus, pseudomonas aeruginosa, laundry

Procedia PDF Downloads 111
864 Effect of the Food Distribution on Household Food Security Status in Iran

Authors: Delaram Ghodsi, Nasrin Omidvar, Hassan Eini-Zinab, Arash Rashidian, Hossein Raghfar

Abstract:

Food supplementary programs are policy approaches that aim to reduce financial barriers to healthy diets and tackle food insecurity. This study aimed to evaluate the effect of the supportive section of the Multidisciplinary Supplementary Program for Improvement of Nutritional Status of Children (MuPINSC) on households’ food security status and the nutritional status of mothers. MuPINSC is a national integrative program in Iran that distributes supplementary food baskets to malnourished or growth-retarded children living in low-income families, in addition to providing health services, including sanitation, growth monitoring, and empowerment of families. This longitudinal study is part of a comprehensive evaluation of the program. The study participants included 359 mothers of children aged 6 to 72 months covered by the supportive section of the program in two provinces of Iran (Semnan and Qazvin). Demographic and economic characteristics of families were assessed by a questionnaire. Data on family food security were collected with the locally adapted Household Food Insecurity Access Scale (HFIAS) at baseline and six months thereafter. Weight and height of mothers were measured at baseline and at the end of the study, and maternal BMI was calculated. Data were analysed using paired t-tests, GEE (Generalized Estimating Equations), and Chi-square tests. Based on the findings, at baseline only 4.7% of families were food-secure, while 13.1%, 38.7%, and 43.5% were categorized as mildly, moderately, and severely food insecure, respectively. After six months of follow-up, the distribution of the different levels of food security changed significantly (P<0.001) to 7.9%, 11.6%, 42.6%, and 38%, respectively. At the end of the study, the odds of food insecurity were significantly lower, by about 20%, than at the beginning (OR=0.796; 0.653-0.971). No significant difference was observed in maternal BMI based on food security (P>0.05). The findings show that the food supplementary program for children improved household food security status in the studied households. Further research is needed to assess other factors that affect the effectiveness of this large-scale program on nutritional status and household food security.
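
As a quick, hedged sketch of the arithmetic behind the roughly 20% figure, the snippet below converts the reported odds ratio into the percent change in odds it implies; it assumes only the OR value quoted above.

```python
def percent_change_in_odds(odds_ratio: float) -> float:
    """Percent change in odds implied by an odds ratio relative to baseline."""
    return (odds_ratio - 1.0) * 100.0

# Follow-up vs. baseline odds ratio for household food insecurity reported above
print(f"{percent_change_in_odds(0.796):.1f}%")  # -20.4%, i.e. roughly 20% lower odds
```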

Keywords: food security, food supplementary program, household, malnourished children

Procedia PDF Downloads 387
863 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling

Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis

Abstract:

The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for the preparation of nanocrystalline materials is to reduce the particle size by mechanical grinding. High-energy ball-milling was used in several works to produce nano lithium niobate. Previously, it was reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in the partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding a gray color, together with oxygen release and Li2O segregation on the open surfaces. In the present work, we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm down to 0.1 mm), using wet grinding with water and grinding times of less than an hour. Gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in ground samples, as proven by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained based on the hardness properties of the materials involved in the ball-milling process. It can be understood by taking into account the lithium hydroxide formed from the segregated lithium oxide and water during the ball-milling process, which acts through chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet milling process in the planetary mill, it was found that the lithium oxide loss increases linearly in the early phase of the milling process, after which a saturation of the Li2O loss can be seen. This change goes along with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accord with the dynamic light scattering measurements. With a 3 mm ball size and a 1100 rpm rotation rate, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been done to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface area of the particles, the influence of the ball-milling parameters on its quantity has also been studied.

Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals

Procedia PDF Downloads 119
862 Typification and Determination of Antibiotic Resistance Rates of Stenotrophomonas Maltophilia Strains Isolated from Intensive Care Unit Patients in a University Practice and Research Hospital

Authors: Recep Kesli, Gulsah Asik, Cengiz Demir, Onur Turkyilmaz

Abstract:

Objective: Stenotrophomonas maltophilia (S. maltophilia) has recently emerged as an important nosocomial microorganism. Treatment of invasive infections caused by this organism is problematic because it is usually resistant to a wide range of commonly used antimicrobials. We aimed to evaluate clinical isolates of S. maltophilia with respect to sampling sites and antimicrobial resistance. Method: During a two-year period (October 2013 to September 2015), eighteen samples were collected from intensive care unit (ICU) patients hospitalized in Afyon Kocatepe University ANS Practice and Research Hospital. Identification of the bacteria was performed by conventional methods and the automated identification system VITEK 2 (bioMérieux, Marcy-l'Étoile, France). Antibacterial resistance tests were performed by the Kirby-Bauer disc diffusion method (Oxoid, England) following the recommendations of CLSI. Results: Eighteen S. maltophilia strains were identified as the causative agents of different infections. The main type of infection was lower respiratory tract infection (83.4%); three patients (16.6%) had bloodstream infection. While none of the 18 S. maltophilia strains were found to be resistant to trimethoprim-sulfamethoxazole (TMP-SXT) or levofloxacin, eight strains (66.6%) were found to be resistant to ceftazidime. Conclusion: The isolation of S. maltophilia strains resistant to TMP-SXT is of vital concern. In order to prevent or minimize infections due to S. maltophilia, the following precautions should be taken: avoidance of inappropriate antibiotic use and of the prolonged use of foreign devices, reinforcement of hand hygiene practices, and the application of appropriate infection control practices. Microbiology laboratories may also play important roles in controlling S. maltophilia infections by continuously monitoring its prevalence; the provision of local antibiotic resistance pattern data and the performance of synergy studies may also help to guide appropriate antimicrobial therapy choices.

Keywords: Stenotrophomonas maltophilia, trimethoprim-sulfamethoxazole, antimicrobial resistance, Stenotrophomonas spp.

Procedia PDF Downloads 235
861 Analyses of Copper Nanoparticles Impregnated Wood and Its Fungal Degradation Performance

Authors: María Graciela Aguayo, Laura Reyes, Claudia Oviedo, José Navarrete, Liset Gómez, Hugo Torres

Abstract:

Most wood species used in construction deteriorate when exposed to environmental conditions that favor the growth of wood-degrading organisms. Therefore, chemical protection by impregnation allows more efficient use of forest resources by extending the useful life of the wood. A wood protection treatment that has attracted considerable interest in the scientific community during the last decade is wood impregnation with nanocompounds. Radiata pine is the main wood species used in the Chilean construction industry, with a total availability of 8 million m³ of sawn timber. According to the requirements of the American Wood Protection Association (AWPA) and the Chilean Standards (NCh), radiata pine timber used in construction must be protected due to its low natural durability. In this work, impregnation with copper nanoparticles (CuNP) was studied in terms of penetration and its protective effect against wood rot fungi. Two CuNP concentrations, 1 and 3 g/L, were applied by impregnation to radiata pine sapwood. Penetration tests were carried out under the AWPA A3-91 standard, and wood decay tests were performed according to EN 113, with slight modifications. The penetration results for 1 g/L CuNP showed irregular total penetration, while the samples impregnated with 3 g/L showed total penetration with a uniform concentration (blue color in all cross sections). Wood mass losses due to fungal exposure were significantly reduced by impregnation, regardless of the concentration of the solution or the fungus. In impregnated wood samples, exposure to G. trabeum resulted in mass loss (ML) values of 2.70% and 1.19% for 1 g/L and 3 g/L CuNP, respectively, and exposure to P. placenta resulted in ML values of 4.02% and 0.70% for 1 g/L and 3 g/L CuNP, respectively. In this study, the penetration analysis confirmed a uniform distribution inside the wood, and both concentrations were effective against the tested fungi, giving mass loss values lower than 5%. Therefore, future research in wood preservatives should focus on new nanomaterials that are more efficient and environmentally friendly. Acknowledgments: CONICYT FONDEF IDeA I+D 2019, grant number ID19I10122.

Keywords: copper nanoparticles, fungal degradation, radiata pine wood, wood preservation

Procedia PDF Downloads 181
860 Arginase Enzyme Activity in Human Serum as a Marker of Cognitive Function: The Role of Inositol in Combination with Arginine Silicate

Authors: Katie Emerson, Sara Perez-Ojalvo, Jim Komorowski, Danielle Greenberg

Abstract:

The purpose of this study was to evaluate arginase activity levels in response to combinations of an inositol-stabilized arginine silicate (ASI; Nitrosigine®), L-arginine, and inositol. Arginine acts as a vasodilator that promotes increased blood flow, resulting in enhanced delivery of oxygen and nutrients to the brain and other tissues. ASI alone has been shown to improve performance on cognitive tasks. Arginase, found in human serum, catalyzes the conversion of arginine to ornithine and urea, completing the last step in the urea cycle. Decreasing arginase levels maintains arginine and results in increased nitric oxide production. This study aimed to determine the most effective combination of ASI, L-arginine, and inositol for minimizing arginase levels and thereby maximizing ASI’s effect on cognition. Serum was taken from untreated healthy donors by separation from clotted factors. Arginase activity of serum in the presence or absence of the test products was determined (QuantiChrom™, DARG-100, Bioassay Systems, Hayward, CA). The remaining ultrafiltered serum units were harvested and used as the source of the arginase enzyme. ASI alone or combined with varied levels of inositol was tested as follows: ASI + inositol at 0.25 g, 0.5 g, 0.75 g, or 1.00 g. L-arginine was also tested as a positive control. All tests elicited changes in arginase activity, demonstrating the efficacy of the method used. Adding L-arginine to serum from untreated subjects, with or without inositol, had only a mild effect. Adding inositol at all levels reduced arginase activity. Adding 0.5 g of inositol to the standardized amount of ASI led to the lowest arginase activity compared to the 0.25 g, 0.75 g, or 1.00 g doses of inositol or to L-arginine alone. The outcome of this study demonstrates an interaction between inositol and ASI on the activity of the enzyme arginase. We found that neither the maximum nor the minimum amount of inositol tested in this study led to maximal arginase inhibition. Since inhibition of arginase activity is desirable for product formulations seeking to maintain arginine levels, the most effective amount of inositol was deemed preferable. Subsequent studies suggest that this moderate level of inositol in combination with ASI leads to cognitive improvements, including in reaction time, executive function, and concentration.

Keywords: arginine, inositol, arginase, cognitive benefits

Procedia PDF Downloads 90
859 Establishing a Drug Discovery Platform to Progress Compounds into the Clinic

Authors: Sheraz Gul

Abstract:

The requirements for progressing a compound to clinical trials are well established and rely on the results of in-vitro and in-vivo animal tests to indicate that it is likely to be safe and efficacious when tested in humans. The typical data package required will include demonstrating compound safety, toxicity, bioavailability, pharmacodynamics (potential effects of the compound on body systems) and pharmacokinetics (how the compound is absorbed, distributed, metabolised and eliminated after dosing in humans). If the compound meets the clinical Candidate criteria and is deemed worthy of further development, a submission to regulatory bodies such as the US Food & Drug Administration for an exploratory Investigational New Drug study can be made. The purpose of such a study is to collect data establishing that the compound will not expose humans to unreasonable risks when used in limited, early-stage clinical studies in patients or normal volunteer subjects (Phase I). These studies are also designed to determine the metabolism and pharmacologic actions of the drug in humans, the side effects associated with increasing doses, and, if possible, to gain early evidence of effectiveness. In order to reach the above goals, we have developed a pre-clinical high-throughput Absorption, Distribution, Metabolism and Excretion–Toxicity (ADME–Toxicity) panel of assays to identify compounds that are likely to meet the Lead and Candidate compound acceptance criteria. This panel includes solubility studies in a range of biological fluids, cell viability studies in cancer and primary cell lines, mitochondrial toxicity, off-target effects (across the kinase, protease, histone deacetylase, phosphodiesterase and GPCR protein families), CYP450 inhibition (5 different CYP450 enzymes), CYP450 induction, cardiotoxicity (hERG) and genotoxicity. This panel of assays has been applied to multiple compound series developed in a number of projects delivering Lead and clinical Candidate compounds, and examples from these will be presented.

Keywords: absorption, distribution, metabolism and excretion–toxicity, drug discovery, food and drug administration, pharmacodynamics

Procedia PDF Downloads 159
858 Clinical Evidence of the Efficacy of ArtiCovid (Artemisia Annua Extract) on Covid-19 Patients in DRC

Authors: Munyangi Wa Nkola Jerome, MD, MCS, MPH

Abstract:

COVID-19 is a pandemic of a recently discovered contagious respiratory disease caused by SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2). The majority of people infected with SARS-CoV-2 are asymptomatic or mildly ill; about 14% of patients develop severe illness requiring hospitalization and oxygen support, and 5% of these are transferred to an intensive care unit. There is an urgent need for new treatments that can be used quickly to avoid the transfer of patients to intensive care and death. Objective: to evaluate the clinical activity (efficacy) of ArtiCovid. Hypothesis: administration of a teaspoon three times per day to COVID patients (symptomatic, mild, or moderate forms) results in the disappearance of symptoms and improvement of biological parameters (including viral suppression). Clinical efficacy was defined as the disappearance of clinical signs after seven days of treatment, a reduction in the rate of patients transferred to intensive care units for mechanical ventilation, and a decrease in mortality related to this infection; paraclinical efficacy as improvement of biological parameters (mainly D-dimer and CRP); and virological efficacy as suppression of the viral load after seven days of treatment (a negative control test on the seventh day). This pilot study used a standardized solution based on Artemisia annua (ArtiCovid). It involved obtaining authorization from the health authorities of the province of Central Kongo, recruiting volunteer patients, mainly at Kinkanda Hospital, and carrying out tests and analyses before and after treatment. The protocol obtained the approval of the ethics committee. The 50 patients who completed the treatment were aged between 2 and 70 years, with an average age of 36 years. More than half were male (56%). One in four patients was a health professional (25%); of the 12 health professionals, 4 were physicians. For those who reported the date of onset of the disease, the average duration between the appearance of the first symptoms and the medical consultation was 5 days. The 50 patients put on ArtiCovid were discharged alive, with CRP levels substantially normalized. After seven to eight days, the control test came back negative. This pilot study suggests that ArtiCovid may be effective against COVID-19 infection.

Keywords: ArtiCovid, DRC, COVID-19, SARS-CoV-2

Procedia PDF Downloads 104
857 Roles of Tester in Automated World

Authors: Sagar Mahendrakar

Abstract:

Testers' roles have changed dramatically as automation continues to revolutionise the software development lifecycle. There's a general belief that manual testing is becoming outdated with the introduction of advanced testing frameworks and tools. This abstract, however, disproves that notion by examining the complex and dynamic role that testers play in automated environments. In this work, we explore the complex duties that testers have when everything is automated. We contend that although automation increases productivity and simplifies monotonous tasks, it cannot completely replace the cognitive abilities and subject-matter knowledge of human testers. Rather, testers shift their focus to higher-value tasks like creating test strategies, designing test cases, and delving into intricate scenarios that are difficult to automate. We also emphasise the critical role that testers play in guaranteeing the precision, thoroughness, and dependability of automated testing. Testers verify the efficacy of automated scripts and pinpoint areas for improvement through rigorous test planning, execution, and result analysis. They play the role of quality defenders, using their analytical and problem-solving abilities to find minute flaws that computerised tests might miss. Furthermore, the abstract emphasises how testing in automated environments is a collaborative process. In order to match testing efforts with business objectives, improve test automation frameworks, and rank testing tasks according to risk, testers work closely with developers, automation engineers, and other stakeholders. Finally, we discuss how testers in the era of automation need to possess a growing skill set. To stay current, testers need to develop skills in scripting languages, test automation tools, and emerging technologies in addition to traditional testing competencies. Soft skills like teamwork, communication, and flexibility are also essential for productive cooperation in cross-functional teams. This abstract clarifies the ongoing importance of testers in automated settings. Testers can use automation to improve software quality and provide outstanding user experiences by accepting their changing role as strategic partners and advocates for quality.

Keywords: testing, QA, automation, leadership

Procedia PDF Downloads 19
856 Evaluation of the Inhibitory Activity of Natural Extracts from Spontaneous Plants on α-Amylase and α-Glucosidase and Their Antioxidant Activities

Authors: Ihcen Khacheba, Amar Djeridane, Abdelkarim Kamli, Mohamed Yousfi

Abstract:

Plant materials constitute an important source of natural bioactive molecules. Thus, plants have been used since antiquity as sources of medicaments against various diseases. These properties are usually attributed to secondary metabolites, which are the subject of a great deal of research in this field. This is particularly the case for plant phenolic compounds, which are widely renowned in therapeutics as anti-inflammatories, enzyme inhibitors, and antioxidants, particularly flavonoids. With the aim of acquiring a better knowledge of the secondary metabolism of the vegetable kingdom in the region of Laghouat and of discovering new natural therapeutics, 10 extracts from 5 Saharan plant species were submitted to chemical screening. The analysis of the preceding biological targets led to the evaluation of the biological activity of the extracts of the species Genista corsica. The first step consists of extracting and quantifying phenolic compounds. The second step was devoted to studying the effects of phenolic compounds on the kinetics of reactions catalyzed by two hydrolase enzymes (α-amylase and α-glucosidase) responsible for the digestion of sugars; finally, the antioxidant potential was evaluated. The analysis results clearly show a low content of phenolic compounds in the investigated plants. Average total phenolics ranged from 0.0017 to 11.35 mg gallic acid equivalent/g of crude extract, whereas the total flavonoid content lay between 0.0015 and 10.96 mg rutin equivalent/g. The results of the kinetic study of the enzymatic reactions show that the extracts have inhibitory effects on both enzymes, with IC50 values ranging from 95.03 µg/ml to 1033.53 µg/ml for α-amylase and from 279.99 µg/ml to 1215.43 µg/ml for α-glucosidase; the greatest inhibition was found for the acetone extract of June (IC50 = 95.03 µg/ml). The results of the antioxidant activity determined by ABTS, DPPH, and phosphomolybdenum tests clearly showed a good antioxidant capacity compared to the antioxidants taken as reference. These results point to the biological potential of these plants, which could find use in medicine to replace synthetic products.
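
As a hedged illustration of how IC50 values such as those reported above are commonly estimated, the sketch below fits a simple Hill-type inhibition model to hypothetical percent-inhibition measurements with SciPy; the data points, the hill_inhibition function, and the initial guesses are illustrative assumptions rather than values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_inhibition(conc, ic50, hill):
    """Percent inhibition as a function of extract concentration (µg/ml)."""
    return 100.0 * conc**hill / (ic50**hill + conc**hill)

# Hypothetical inhibition measurements (concentration in µg/ml, % inhibition) -- not study data
conc = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
inhibition = np.array([8.0, 22.0, 51.0, 74.0, 90.0])

# Fit IC50 and the Hill coefficient; p0 provides rough initial guesses
(ic50, hill), _ = curve_fit(hill_inhibition, conc, inhibition, p0=[100.0, 1.0])
print(f"Estimated IC50 ≈ {ic50:.1f} µg/ml, Hill coefficient ≈ {hill:.2f}")
```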

Keywords: phenolic extracts, inhibition effect, α-amylase, α-glucosidase, antioxidant activity

Procedia PDF Downloads 372
855 Predictors of Clinical Failure After Endoscopic Lumbar Spine Surgery During the Initial Learning Curve

Authors: Daniel Scherman, Daniel Madani, Shanu Gambhir, Marcus Ling Zhixing, Yingda Li

Abstract:

Objective: This study aims to identify clinical factors that may predict failed endoscopic lumbar spine surgery, to guide surgeons with patient selection during the initial learning curve. Methods: This is an Australasian prospective analysis of the first 105 patients to undergo lumbar endoscopic spine decompression by 3 surgeons. Modified MacNab outcomes, Oswestry Disability Index (ODI), and Visual Analogue Scale (VAS) scores were utilized to evaluate clinical outcomes at 6 months postoperatively. Descriptive statistics and ANOVA and t-tests were performed to identify statistically significant (p<0.05) associations between variables using GraphPad Prism v10. Results: Patients undergoing endoscopic lumbar surgery via an interlaminar or transforaminal approach had overall good/excellent modified MacNab outcomes and a significant reduction in post-operative VAS and ODI scores. Regardless of the anatomical location of disc herniations, good/excellent modified MacNab outcomes and significant reductions in VAS and ODI were reported post-operatively; however, this was not the case in patients with calcified disc herniations. Patients with central and foraminal stenosis overall reported poor/fair modified MacNab outcomes; however, there were significant reductions in VAS and ODI scores post-operatively. Patients with subarticular stenosis or an associated spondylolisthesis reported good/excellent modified MacNab outcomes and significant reductions in VAS and ODI scores post-operatively. Patients with disc herniation and concurrent degenerative stenosis had generally poor/fair modified MacNab outcomes. Conclusion: The outcomes of endoscopic spine surgery are encouraging, with low complication and reoperation rates. However, patients with calcified disc herniations, central canal stenosis, or a disc herniation with concurrent degenerative stenosis present challenges during the initial learning curve and may benefit from traditional open or other minimally invasive techniques.

Keywords: complications, lumbar disc herniation, lumbar endoscopic spine surgery, predictors of failed endoscopic spine surgery

Procedia PDF Downloads 125