Search results for: electrical machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4624


364 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of an energy amplifier, or accelerator driven system (ADS), involves coupling a particle accelerator with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and consequently safe exploitation. The harder neutron spectrum also ensures better incineration of the actinides. The prevailing opinion is that the optimal beam for ADS is protons with an energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and ion beams from deuteron to neon with energies between 0.25 and 2 AGeV. The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant. The rods are 150 cm long. A beryllium converter 110 cm long is used to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25·10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam, values planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent to accelerate the beam. The energy released is obtained through simulation with the code Geant4. The energy spent is calculated by scaling from the accelerator-efficiency data for the reference particle (proton). The analysis considers the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV. At this energy, G reaches a plateau around a value of 8, with a net power production of 120 MW (megawatt). Starting with alpha particles, ion beams achieve a higher G than 1.5 GeV protons.
A beam of 0.25 AGeV (gigaelectronvolt per nucleon) ⁷Li achieves the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, making it the best solution for ADS. Beams of ¹⁶O or ²⁰Ne with an energy of 0.75 AGeV, accelerated in an accelerator of the same length as that for 1.5 GeV protons, produce approximately 900 MW of net power, with a gain of 23-25. The study of the evolution of the isotopic composition during irradiation shows that increasing the power production shortens the period between refuelings. For a net power of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
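The power bookkeeping behind these figures can be sketched in a few lines. The following is a minimal Python illustration, assuming (as the abstract implies) that the net power is the energy released in the target minus the energy spent on acceleration; it is illustrative bookkeeping, not the Geant4 calculation itself.

```python
E_CHARGE = 1.602176634e-19  # J per eV

def ads_power_balance(energy_gev, intensity_pps, efficiency, gain):
    """Simplified ADS power bookkeeping.

    gain G = energy released in target / energy spent to accelerate the beam,
    so the net power is (G - 1) times the accelerator power draw.
    """
    beam_power = energy_gev * 1e9 * E_CHARGE * intensity_pps   # W delivered on target
    spent_power = beam_power / efficiency                      # W drawn by the accelerator
    released_power = gain * spent_power                        # W released in the target
    net_power = released_power - spent_power                   # W net production
    return beam_power, spent_power, net_power

# Figures quoted in the abstract: 1.5 GeV protons, 1.25e16 p/s, efficiency 0.18, G ~ 8
beam, spent, net = ads_power_balance(1.5, 1.25e16, 0.18, 8.0)
print(f"beam {beam/1e6:.1f} MW, accelerator {spent/1e6:.1f} MW, net {net/1e6:.0f} MW")
```

With the abstract's numbers this reproduces a net production of roughly 120 MW, consistent with the quoted plateau.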

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 120
363 Contribution to the Hydrogeochemical Investigations on the Wajid Aquifer System, Southwestern Part of Saudi Arabia

Authors: Mohamed Ahmed, Ezat Korany, Abdelaziz Al Basam, Osama Kasem

Abstract:

The arid climate, low precipitation, and growing population make groundwater the main water source in Saudi Arabia. The Wajid Aquifer System is a regional groundwater aquifer system along the edge of the crystalline Arabian Shield near the southwestern tip of the Arabian Peninsula. The aquifer extends across the border of Saudi Arabia and Yemen from the Asir-Yemen Highlands to the Rub al Khali Depression and possibly to the Gulf coast. The present work is a hydrogeochemical investigation of the Wajid Aquifer System. The study area is divided into three zones: the 1st zone is west of Wadi Ad Dawasir (northern part of the study area), the 2nd is the Najran-Asir zone (southern part of the study area), and the 3rd is the intermediate central zone between the other two. Groundwater samples were collected and chemically analyzed for physicochemical properties such as pH, electrical conductivity, total hardness (TH), alkalinity, total dissolved solids (TDS), major ions (Ca²⁺, Mg²⁺, Na⁺, K⁺, HCO₃⁻, SO₄²⁻ and Cl⁻), and trace elements. Parameters such as the sodium adsorption ratio (SAR), soluble sodium percentage (Na%), potential salinity, residual sodium carbonate, Kelly's ratio, permeability index, Gibbs ratio, hydrochemical coefficients, hydrochemical formula, ion dominance, salt combinations, and water types were also calculated in order to evaluate the quality of the groundwater resources in the selected areas for different purposes. The distribution of the chemical constituents and their interrelationships are illustrated by different hydrochemical graphs. Groundwater depths and the depth to water were measured to study the effect of discharge on both the water level and the salinity of the studied groundwater wells.
A detailed comparison between the three studied zones, according to the variations shown by the chemical and field investigations, is discussed in detail within the work.
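The index calculations mentioned above follow standard hydrochemical formulas. A minimal Python sketch of three of them (SAR, soluble sodium percentage, and Kelly's ratio) is shown below, using a hypothetical sample rather than a measured Wajid value.

```python
from math import sqrt

def irrigation_indices(ca, mg, na, k):
    """Standard irrigation-water indices; all inputs in meq/L.

    SAR = Na / sqrt((Ca + Mg) / 2)          (sodium adsorption ratio)
    Na% = 100 * (Na + K) / (Ca + Mg + Na + K)  (soluble sodium percentage)
    KR  = Na / (Ca + Mg)                    (Kelly's ratio)
    """
    sar = na / sqrt((ca + mg) / 2)
    na_pct = 100 * (na + k) / (ca + mg + na + k)
    kelly = na / (ca + mg)
    return sar, na_pct, kelly

# hypothetical sample (meq/L), not a measured Wajid value
sar, na_pct, kelly = irrigation_indices(ca=4.0, mg=2.0, na=6.0, k=0.5)
```

All three indices are dimensionless; higher values indicate greater sodium hazard for irrigation use.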

Keywords: Najran-Asir, Wadi Ad Dawasir, Wajid Aquifer System, effect of discharge

Procedia PDF Downloads 106
362 Synthesis of Ultra-Small Platinum, Palladium and Gold Nanoparticles by Electrochemically Active Biofilms and Their Enhanced Catalytic Activities

Authors: Elaf Ahmed, Shahid Rasul, Ohoud Alharbi, Peng Wang

Abstract:

Ultra-small nanoparticles (USNPs) of metals have attracted attention, from the perspective of both basic and developmental science, in a wide range of fields. These NPs exhibit distinctive electrical, optical, magnetic, and catalytic phenomena. In addition, they are considered effective catalysts because of their enormously large surface area. Many chemical methods of synthesising USNPs have been reported; however, their drawback is the use of capping agents and ligands such as polyvinylpyrrolidone, thiols, and ethylene glycol during production. In this research, ultra-small nanoparticles of gold, palladium, and platinum have been successfully produced using an electrochemically active biofilm (EAB) after optimising the pH of the media. The production was conducted in a reactor using a simple two-step method. First, a biofilm was grown on the surface of carbon paper for 7 days using Shewanella loihica bacteria. Then, the biofilm was employed to synthesise platinum, palladium, and gold nanoparticles in water, using sodium lactate as the electron donor, without any toxic chemicals and under mild operating conditions. The electrochemically active biofilm oxidises the electron donor and releases electrons into the solution. Since these electrons are a strong reducing agent, they can reduce metal precursors quite effectively and quickly. The as-synthesized ultra-small nanoparticles have a size range of 2-7 nm and showed excellent catalytic activity in the degradation of methyl orange. The growth of metal USNPs is strongly related to the condition of the EAB: synthesis at low pH was unsuccessful, probably because acidic conditions damage or destroy the bacterial cells, whereas increasing the pH to 7 and 9 led to the successful formation of USNPs. Changing the pH value also changed the size range of the produced NPs.
The EAB thus acts as a nanofactory for the synthesis of metal nanoparticles, offering a green, sustainable, and toxin-free synthetic route that relies only on the bacterial respiration pathway, without any capping agents or ligands.

Keywords: electrochemically active biofilm, electron donor, shewanella loihica, ultra-small nanoparticles

Procedia PDF Downloads 176
361 Bio-Medical Equipment Technicians: Crucial Workforce to Improve Quality of Health Services in Rural Remote Hospitals in Nepal

Authors: C. M. Sapkota, B. P. Sapkota

Abstract:

Background: Continuous developments in science and technology are increasing the availability of thousands of medical devices, all of which should be of good quality and used appropriately to address global health challenges. Biomedical devices are becoming ever more indispensable in health service delivery, and among the key workforce responsible for their design, development, regulation, evaluation, and training in their use, the biomedical equipment technician (BMET) is crucial. As a pivotal member of the health workforce, biomedical technicians are an essential component of the quality health service delivery mechanism supporting the attainment of the Sustainable Development Goals. Methods: The study was based on a cross-sectional descriptive design. Indicators measuring the quality of health services were assessed in Mechi Zonal Hospital (MZH) and Sagarmatha Zonal Hospital (SZH). Indicators were calculated from the data on hospital utilization and performance for 2018 available in the medical record sections of both hospitals. MZH employed a BMET during 2018, but SZH had no BMET in that year. Focus group discussions with health workers in both hospitals were conducted to validate the hospital records, and client exit interviews were conducted to assess the level of client satisfaction in both hospitals. Results: In MZH, radiodiagnostic and laboratory equipment were available and utilized around the clock, and the operation theatre was functional throughout the year. The bed occupancy rate in MZH was 97%, but in SZH it was only 63%. In SZH, the operation theatre was functional on only 54% of the days in 2018; the CT scan machine had just been installed but was not functional, and the computerized X-ray was functional on only 72% of the days. The level of client satisfaction was 87% in MZH but just 43% in SZH. MZH performed all 256 Caesarean sections, whereas SZH performed only 36% of 210 Caesarean sections in 2018.
In the 2018 annual performance ranking of government hospitals, MZH was placed 1st while SZH was placed 19th out of 32 referral hospitals nationwide. Conclusion: Biomedical technicians are crucial members of the human resources for health team, with a pivotal role. Trained and qualified BMET professionals are required within health-care systems to design, evaluate, regulate, acquire, maintain, manage, and train on safe medical technologies. They apply knowledge of engineering and technology to health-care systems to ensure the availability, affordability, accessibility, acceptability, and utilization of safer, higher-quality, effective, appropriate, and socially acceptable biomedical technology for preventive, promotive, curative, rehabilitative, and palliative care across all levels of health service delivery.

Keywords: biomedical equipment technicians, BMET, human resources for health, HRH, quality health service, rural hospitals

Procedia PDF Downloads 108
360 The Application of Transcranial Direct Current Stimulation (tDCS) Combined with Traditional Physical Therapy to Address Upper Limb Function in Chronic Stroke: A Case Study

Authors: Najmeh Hoseini

Abstract:

Stroke recovery happens through neuroplasticity, which is highly influenced by the environment, including neuro-rehabilitation. Transcranial direct current stimulation (tDCS) may enhance recovery by modulating neuroplasticity. With tDCS, weak direct currents are applied noninvasively to modify excitability in the cortical areas under its electrodes. Combined with functional activities, this may facilitate motor recovery in neurologic disorders such as stroke. The purpose of this case study was to examine the effect of tDCS combined with 30 minutes of traditional physical therapy (PT) on arm function following a stroke. A 29-year-old male with chronic stroke involving the left middle cerebral artery territory went through the treatment protocol. The design included 5 weeks of treatment: 1 week of traditional PT, 2 weeks of sham tDCS combined with traditional PT, and 2 weeks of tDCS combined with traditional PT. PT included functional electrical stimulation (FES) of the wrist extensors followed by task-specific functional training. Dual-hemispheric tDCS at 1 mA intensity was applied over the sensorimotor cortices for the first 20 min of the treatment, combined with FES. Assessments before and after each treatment block included the Modified Ashworth Scale, the Chedoke-McMaster Arm and Hand Inventory, the Action Research Arm Test (ARAT), and the Box and Blocks Test. Results showed reduced spasticity in the elbow and wrist flexors only after the tDCS combination weeks (+1 to 0). The patient demonstrated clinically meaningful improvements in gross and fine motor control over the duration of the study; however, the components of the ARAT that require fine motor control improved the most during the experimental block. The average time improvement compared to baseline was 26.29 s for the tDCS combination weeks, 18.48 s for sham tDCS, and 6.83 s for the standard-of-care PT weeks.
Combining dual-hemispheric tDCS with standard-of-care PT produced greater improvements in hand dexterity than PT alone in this patient case.

Keywords: tDCS, stroke, case study, physical therapy

Procedia PDF Downloads 72
359 Influence of Crystal Orientation on Electromechanical Behaviors of Relaxor Ferroelectric P(VDF-TRFE-CTFE) Terpolymer

Authors: Qing Liu, Jean-fabien Capsal, Claude Richard

Abstract:

In this contribution, the authors investigate the influence of crystal lamellae orientation on the electromechanical behavior of relaxor ferroelectric poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films by controlling the polymer microstructure, aiming to map the full structure-property relationship. To define their crystal orientation, terpolymer films were fabricated by solution-casting, stretching, and hot-pressing processes. Differential scanning calorimetry, an impedance analyzer, and tensile testing were employed to characterize the crystallographic parameters, dielectric permittivity, and elastic Young's modulus, respectively. In addition, a large electrically induced out-of-plane electrostrictive strain was obtained in cantilever beam mode. The as-cast pristine film exhibited a surprisingly high electrostrictive strain of 0.1774%, owing to its considerably small Young's modulus, despite its relatively low dielectric permittivity; these factors gave it a large mechanical elastic energy density. By contrast, owing to a two-fold increase in Young's modulus and a less than 50% increase in dielectric constant, the fully crystallized film showed weak electrostrictive behavior and low mechanical energy density. After the mechanical stretching process, Film C exhibited a higher dielectric constant and outperformed Film B in electrostrictive strain because of the edge-on crystal lamellae orientation induced by uniaxial stretching. Hot-pressed films were compared in terms of cooling rate. A rather large electrostrictive strain of 0.2788% was observed for the quenched hot-pressed Film D, although its dielectric permittivity was only equivalent to that of the pristine as-cast Film A, giving the highest mechanical elastic energy density of 359.5 J/m³.
For the slowly cooled hot-pressed Film E, the dielectric permittivity reached 48.8, concomitant with a ca. 100% increase in Young's modulus, yielding films with intermediate mechanical energy density.
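The mechanical elastic energy density quoted for Film D follows from the usual u = ½·Y·S² relation. A small illustrative Python check is shown below; the Young's modulus value used is back-calculated from the quoted 359.5 J/m³ and the 0.2788% strain, not a figure stated in the abstract.

```python
def elastic_energy_density(youngs_modulus_pa, strain):
    """Mechanical elastic energy density u = 1/2 * Y * S^2, in J/m^3.

    youngs_modulus_pa: elastic Young's modulus in Pa
    strain: dimensionless strain (e.g. 0.2788% -> 0.002788)
    """
    return 0.5 * youngs_modulus_pa * strain**2

# Film D strain from the abstract; the modulus below (~92.5 MPa) is
# back-calculated from the quoted 359.5 J/m^3, not a measured value.
u = elastic_energy_density(9.25e7, 0.2788e-2)
print(f"energy density ~ {u:.1f} J/m^3")
```

The quadratic dependence on strain explains why the soft, high-strain films dominate the energy-density comparison despite modest permittivity.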

Keywords: crystal orientation, electrostrictive strain, mechanical energy density, permittivity, relaxor ferroelectric

Procedia PDF Downloads 354
358 PWM Harmonic Injection and Frequency-Modulated Triangular Carrier to Improve the Lives of the Transformers

Authors: Mario J. Meco-Gutierrez, Francisco Perez-Hidalgo, Juan R. Heredia-Larrubia, Antonio Ruiz-Gonzalez, Francisco Vargas-Merino

Abstract:

More and more applications connect power inverters to transformers, for example, facilities connecting renewable generation to the power grid. It is well known that the output signal of power inverters is not a pure sine wave. The harmonic content produces negative effects, one of which is the heating of electrical machines, and therefore affects the life of the machines. The decrease in transformer life can be calculated with the Arrhenius or Montsinger equation; according to these expressions, any long-term decrease of a transformer's temperature by 6-7 °C doubles its life expectancy. Methodology: This work presents a pulse-width modulation (PWM) technique with harmonic injection and a frequency-modulated triangular carrier. The technique is used to improve the quality of the output voltage signal of PWM-controlled power inverters. It increases the fundamental term and significantly reduces the low-order harmonics with the same number of commutations per period as standard sine PWM control. To achieve this, the modulating wave is compared to a triangular carrier whose frequency varies over the period of the modulator. It is therefore advantageous for the modulating signal to carry a large amount of sinusoidal information in the areas of denser sampling; the variable-frequency triangular carrier yields more samples in the area of greatest slope. A power inverter controlled by the proposed PWM technique is connected to a transformer. Results: In order to verify the derived thermal parameters under different operating conditions, a further ambient and loading scenario, sampled from the same power transformer, is used for verification. The temperatures of different parts of the transformer are reported for each PWM control technique analyzed.
The temperature is assessed for each PWM control technique, and the transformer life is then calculated for each. Conclusion: This paper analyzes the transformer heating produced by the proposed technique and compares it with other forms of PWM control. The results show that the reduced harmonic content produces less transformer heating and, therefore, an increase in the life of the transformer.
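The Montsinger rule cited above can be expressed as a simple doubling law: life halves for every fixed temperature rise and doubles for the same drop. A minimal sketch, assuming a 6.5 °C doubling interval (the abstract gives 6-7 °C):

```python
def relative_life(delta_t_celsius, doubling_interval=6.5):
    """Montsinger rule of thumb for transformer insulation life.

    Life halves for every ~6.5 degC of sustained temperature rise
    (and doubles for the same sustained drop), relative to a baseline.
    Returns the life multiplier for a temperature change delta_t_celsius.
    """
    return 2.0 ** (-delta_t_celsius / doubling_interval)

# A long-term 6.5 degC reduction (delta_t = -6.5) doubles life expectancy.
factor = relative_life(-6.5)
print(f"life multiplier: {factor:.2f}x")
```

This is why even a few degrees of reduced harmonic heating translates into a substantial extension of transformer life.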

Keywords: heating, power-inverter, PWM, transformer

Procedia PDF Downloads 394
357 Ground Water Pollution Investigation around Çorum Stream Basin in Turkey

Authors: Halil Bas, Unal Demiray, Sukru Dursun

Abstract:

Water and groundwater pollution is an important problem in most countries. The sources of water pollution must be investigated in order to protect fresh water, because fresh-water resources are very limited and current sources are not enough for the world's growing population. In this study, the factors affecting the quality of the groundwater in the Çorum Stream Basin in Turkey were investigated. The effect of the geological structure of the region and the interaction between the stream and the groundwater were examined. Stream and groundwater samples were taken in the rainy and dry seasons to see whether the quality parameters change. The results were evaluated with computer programs, and graphics and distribution maps were prepared; thus, the degree of quality and pollution could be assessed. According to the analysis results, because the results for the streams and the groundwater are not close to each other, there appears to be no interaction between the stream and the groundwater. As irrigation water, the stream waters generally fall in the C3S1 region of the US Salinity Laboratory diagram, while the groundwaters generally fall between the C3S1 and C4S2 regions. According to the Wilcox diagram, the stream waters are generally good to permissible, while the groundwaters range from good to permissible through doubtful to unsuitable, with the doubtful-to-unsuitable and unsuitable types occurring especially in the dry season. This may be attributed to a relative increase in the concentration of salt minerals. In particular, samples from groundwater wells bored close to gypsum-bearing units have high hardness, electrical conductivity, and salinity values; these waters are therefore determined to be unsuitable for drinking and irrigation.
As a result of these studies, it is understood that the groundwater is affected by lithological contamination rather than anthropogenic or other types of pollution. Because the alluvium is covered by silt and clay lithology, it is not affected by anthropogenic or other external factors. The results for the solid-waste disposal site leachate indicate that this site poses a potential pollution risk for the future: although the parameters did not exceed the maximum dangerous values, this does not mean they will not become dangerous, and this case must be taken into account.
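The C/S labels above come from the US Salinity Laboratory diagram, which crosses a conductivity class (C1-C4) with a sodium-hazard class (S1-S4). A simplified Python sketch of that classification follows, with the caveat that the real diagram's S-class boundaries slope with conductivity, whereas fixed low-EC thresholds are used here as an approximation.

```python
def usda_salinity_class(ec_us_cm, sar):
    """Approximate US Salinity Laboratory classification.

    C class from electrical conductivity (uS/cm): <250 C1, 250-750 C2,
    750-2250 C3, >2250 C4. S class from SAR using the fixed low-EC
    thresholds (<10 S1, 10-18 S2, 18-26 S3, >26 S4); the actual diagram
    uses EC-dependent sloping S boundaries, so this is only a sketch.
    """
    c = 1 + sum(ec_us_cm > t for t in (250, 750, 2250))
    s = 1 + sum(sar > t for t in (10, 18, 26))
    return f"C{c}S{s}"

# e.g. a hypothetical stream sample with EC 1500 uS/cm and SAR 4
label = usda_salinity_class(1500, 4)
print(label)
```

A C3S1 label, as found for the stream waters here, means high salinity but low sodium hazard.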

Keywords: Çorum, environment, groundwater, hydrogeology, geology, pollution, quality, stream

Procedia PDF Downloads 476
356 Social and Educational AI for Diversity: Research on Democratic Values to Develop Artificial Intelligence Tools to Guarantee Access for all to Educational Tools and Public Services

Authors: Roberto Feltrero, Sara Osuna-Acedo

Abstract:

Responsible research and innovation has to accomplish one fundamental aim: everybody has to share in the benefits of innovation, but innovation also has to be democratic; that is to say, everybody should have the possibility to participate in the decisions of the innovation process. In particular, a democratic and inclusive model of social participation and innovation includes persons with disabilities and people at risk of discrimination. Innovations in artificial intelligence for social development have to accomplish the same dual goal: improving equality of access to fields of public interest like education, training, and public services, as well as improving civic and democratic participation in the process of developing such innovations for all. This research aims to develop innovations, policies, and policy recommendations to apply and disseminate such an artificial intelligence and social model for making educational and administrative processes more accessible. First, a citizen participation process is designed to engage citizens in the design and use of artificial intelligence tools for public services. This will improve trust in democratic institutions by enhancing the transparency, effectiveness, accountability, and legitimacy of public policy-making and by allowing people to participate in the development of ethical standards for the use of such technologies. Second, educational tools for lifelong learning are improved with AI models to strengthen accountability and educational data management. Dissemination, education, and social participation will be integrated, measured, and evaluated in innovative educational processes to make accessible all the educational technologies and content developed on AI for responsible and social innovation. A particular case is presented regarding access for all to educational tools and public services.
This accessibility requires cognitive adaptability because legal and administrative language is often very complex, not only for people with cognitive disabilities but also for older people and citizens at risk of educational or social discrimination. Artificial intelligence natural language processing technologies can provide tools to translate legal, administrative, or educational texts into simpler language accessible to everybody. Despite technological advances in language processing and machine learning, this becomes a huge project if ethical and legal consequences are to be respected, because such consequences can only be addressed with civil and democratic engagement in two realms: 1) democratically selecting the texts that need to be, and can be, translated, and 2) involving citizens, experts and non-experts alike, to produce and validate real examples of legal texts with cognitive adaptations, in order to feed the artificial intelligence algorithms that learn to translate those texts into a simpler, more accessible language adapted to any kind of population.

Keywords: responsible research and innovation, AI social innovations, cognitive accessibility, public participation

Procedia PDF Downloads 69
355 Determination of Cyclic Citrullinated Peptide Antibodies on Quartz Crystal Microbalance Based Nanosensors

Authors: Y. Saylan, F. Yılmaz, A. Denizli

Abstract:

Rheumatoid arthritis (RA) is the most common autoimmune disorder, in which the body's own immune system attacks healthy cells. RA has both articular and systemic effects. Until now, the rheumatoid factor (RF) assay has been the most commonly used test for diagnosing RA, but it is not specific. Anti-cyclic citrullinated peptide (anti-CCP) antibodies are IgG autoantibodies that recognize citrullinated peptides and offer improved specificity in the early diagnosis of RA compared to RF: they have a specificity of 91-98% and a sensitivity of 41-68% for the diagnosis of RA. Molecularly imprinted polymers (MIPs) are materials that are easy to prepare, inexpensive, stable, and capable of molecular recognition, and they can be manufactured in large quantities with good reproducibility. Molecular-recognition-based adsorption techniques have received much attention in several fields because of their high selectivity for target molecules. The quartz crystal microbalance (QCM) is an effective, simple, and inexpensive approach in which mass changes are converted into an electrical signal. For the specific determination of chemical substances or biomolecules, the crystal electrodes are covered with thin films that bind or adsorb the target molecules. In this study, we have focused on combining molecular imprinting in nanofilms with the QCM nanosensor approach, producing a QCM nanosensor for anti-CCP, chosen as a model protein, using anti-CCP-imprinted nanofilms. To this end, the anti-CCP-imprinted QCM nanosensor was characterized by Fourier-transform infrared spectroscopy, atomic force microscopy, contact angle measurements, and ellipsometry. A non-imprinted nanosensor was also prepared to evaluate the selectivity of the imprinted nanosensor. The anti-CCP-imprinted QCM nanosensor was tested for real-time detection of anti-CCP from aqueous solution, and the kinetics and affinity were studied using anti-CCP solutions of different concentrations.
The responses, expressed as mass shifts (Δm) and frequency shifts (Δf), were used to evaluate the adsorption properties and to calculate the binding (Ka) and dissociation (Kd) constants. To show the selectivity of the anti-CCP-imprinted QCM nanosensor, the competitive adsorption of anti-CCP and IgM was investigated. The results indicate that the anti-CCP-imprinted QCM nanosensor has a higher adsorption capability for anti-CCP than for IgM, owing to the selective cavities in the polymer structure.
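The Δf-to-Δm conversion mentioned above is conventionally done with the Sauerbrey equation, which relates the frequency shift of a rigid thin film to the adsorbed mass per unit area. A minimal sketch follows, assuming a 5 MHz AT-cut crystal (its sensitivity constant is ~17.7 ng/(cm²·Hz)); the abstract does not state the crystal frequency, so this value is an assumption.

```python
def sauerbrey_mass_shift(delta_f_hz, c_ng_per_cm2_hz=17.7):
    """Sauerbrey equation: adsorbed mass per unit area from frequency shift.

    delta_m = -C * delta_f, with C ~ 17.7 ng/(cm^2 Hz) for a 5 MHz AT-cut
    quartz crystal. Valid only for thin, rigid, evenly distributed films.
    Returns the areal mass change in ng/cm^2.
    """
    return -c_ng_per_cm2_hz * delta_f_hz

# a hypothetical 50 Hz frequency decrease upon anti-CCP binding
dm = sauerbrey_mass_shift(-50.0)
print(f"adsorbed mass ~ {dm:.0f} ng/cm^2")
```

A frequency decrease therefore maps linearly onto adsorbed mass, which is what makes the QCM response directly usable for the kinetic and affinity analysis described.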

Keywords: anti-CCP, molecular imprinting, nanosensor, rheumatoid arthritis, QCM

Procedia PDF Downloads 347
354 Electrospun Nanofibers from Amphiphlic Block Copolymers and Their Graphene Nanocomposites

Authors: Hussein M. Etmimi, Peter E. Mallon

Abstract:

Electrospinning uses an electrical charge to draw very fine fibers (typically on the micro- or nanoscale) from a liquid or molten precursor. Over the years, this method has become a widely used and successful technique for processing polymer materials and their composites into nanofibers. The main focus of this work is the electrospinning of multi-phase amphiphilic copolymers and their nanocomposites, which contain graphene as the nanofiller. In such amphiphilic materials, the constituent segments are incompatible, so the solid-state morphology is determined by the composition of the various constituents as well as the method of preparation. In this study, amphiphilic block copolymers of poly(dimethyl siloxane) and poly(methyl methacrylate) (PDMS-b-PMMA) with well-defined structures were synthesized, and the solution electrospinning of these materials and their properties were investigated. Atom transfer radical polymerization (ATRP) was used to obtain controlled block copolymers with relatively high molar masses and narrow dispersity. First, PDMS macroinitiators with chain lengths of 1000, 5000, and 10000 g/mol were synthesized by the reaction of monocarbinol-terminated PDMS with an α-bromoisobutyryl bromide initiator. The obtained macroinitiators were used for the polymerization of methyl methacrylate monomer to obtain the desired block copolymers via the ATRP process. Graphene oxide (GO) at different loadings was then added to the copolymer solution, and the resultant nanocomposites were successfully electrospun into nanofibers. The electrospinning was carried out using a dimethylformamide/chloroform mixture (60:40 vol%) as the solution medium. Scanning electron microscopy (SEM) confirmed the successful formation of electrospun fibers with dimensions in the nanometer range. X-ray diffraction indicated that the GO nanosheets had an exfoliated structure, irrespective of the filler loading.
Thermogravimetric analysis also showed that the thermal stability of the nanofibers was improved in the presence of GO, independently of the filler loading. Differential scanning calorimetry showed that the glass transition temperature of the nanofibers increased significantly in the presence of GO, as a function of the filler loading.

Keywords: electrospinning, graphene oxide, nanofibers, poly(methyl methacrylate) (PMMA)

Procedia PDF Downloads 289
353 Deep Groundwater Potential and Chemical Analysis Based on Well Logging Analysis at Kapuk-Cengkareng, West Jakarta, DKI Jakarta, Indonesia

Authors: Josua Sihotang

Abstract:

Jakarta Capital Special Region is a densely populated province with rapidly growing infrastructure but little attention paid to environmental conditions. This has created social problems such as a lack of clean water supply. Because the shallow groundwater and river water are contaminated, the deep water-bearing layers (aquifers) must be exploited. This research aims to give people insight into the deep groundwater potential and to determine the depth, location, and quality at which aquifers can be found in the Jakarta area, particularly for the people of Kapuk-Cengkareng. The research was conducted with a geophysical method, namely well logging analysis. Well logging is a geophysical method for determining subsurface lithology from physical measurements. The observations in the research area were made with several well-logging tools: the spontaneous potential log (SP log), the resistivity log, and the gamma ray log (GR log). The SP log works by comparing the electrical potential difference between electrodes on the surface and electrodes placed in the borehole and rock formations. The resistivity log is used to distinguish hydrocarbon and water zones based on their porosity and permeability. The GR log identifies the radioactivity level of rocks containing thorium, uranium, or potassium. The observations yield curves that describe the type of lithological layering in the subsurface. The results can be interpreted as four deep groundwater layer zones of differing quality. Good groundwater layers are found in layers with good porosity and permeability. By analyzing the curves, it can be seen that most of the layers found in this wellbore are claystone, with low resistivity and high gamma radiation: resistivity values of about 2-4 ohm-meter with gamma radiation of 65-80 cps.
There are several layers with high resistivity and low gamma radiation (sandstone) that are potential aquifers. This is reinforced by a right-leaning SP log curve over the sand layers, indicating that they are permeable. These layers have resistivity values of 4-9 ohm-meter with gamma radiation of 40-65 cps, and are mostly fresh water aquifers.
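As a rough illustration, the interpretation rules above can be encoded as a simple threshold classifier. The cutoff values come from the abstract; the function name and labels are invented for this sketch.

```python
# Hypothetical sketch of the threshold-based lithology interpretation described
# above: claystone reads as low resistivity / high gamma, sandstone (potential
# aquifer) as high resistivity / low gamma.

def classify_layer(resistivity_ohm_m, gamma_cps):
    """Classify a logged interval as claystone, potential sand aquifer, or indeterminate."""
    if 2 <= resistivity_ohm_m <= 4 and 65 <= gamma_cps <= 80:
        return "claystone"                       # low resistivity, high gamma radiation
    if 4 < resistivity_ohm_m <= 9 and 40 <= gamma_cps < 65:
        return "sandstone (potential aquifer)"   # high resistivity, low gamma radiation
    return "indeterminate"

print(classify_layer(3.0, 72))   # claystone
print(classify_layer(7.5, 50))   # sandstone (potential aquifer)
```

In practice the classification would be run per depth sample along the log, together with the SP curve check for permeability.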

Keywords: aquifer, deep groundwater potential, well devices, well logging analysis

Procedia PDF Downloads 226
352 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple Sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of detail it provides, is the gold standard exam for diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for subsequent analysis of the occurrence of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to errors and time consuming due to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation have been extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. The purpose of this work was therefore to evaluate brain volume in MRI of MS patients. We used MRI scans with 30 slices from five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was brain extraction by skull stripping of the original image. In the skull stripper for brain MRI, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (evolving the brain-skull border with dedicated expansion, curvature, and advection terms).
In the second step, brain volume quantification was performed by counting the voxels belonging to the segmentation mask and converting the count to cubic centimeters (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and for brain lesion quantification, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
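The volume quantification step described above reduces to a voxel count scaled by the scan geometry. A minimal sketch, with a toy mask and hypothetical voxel spacing (the study's actual spacing is not given):

```python
import numpy as np

# Count voxels in a binary brain mask and convert to cubic centimetres using
# the scan's voxel spacing. The mask and spacing here are placeholders.

def brain_volume_cc(mask, voxel_spacing_mm):
    voxel_mm3 = float(np.prod(voxel_spacing_mm))   # volume of one voxel in mm^3
    return mask.sum() * voxel_mm3 / 1000.0         # 1 cc = 1000 mm^3

mask = np.zeros((30, 256, 256), dtype=bool)        # 30-slice toy volume
mask[10:20, 50:200, 50:200] = True                 # stand-in segmentation mask
print(round(brain_volume_cc(mask, (5.0, 1.0, 1.0)), 1))
```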

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 118
351 The Effects on Hand Function with Robot-Assisted Rehabilitation for Children with Cerebral Palsy: A Pilot Study

Authors: Fen-Ling Kuo, Hsin-Chieh Lee, Han-Yun Hsiao, Jui-Chi Lin

Abstract:

Background: Children with cerebral palsy (CP) usually suffer from mild to severe upper limb dysfunction, such as difficulty in reaching and picking up objects, which profoundly affects their participation in activities of daily living (ADLs). Robot-assisted rehabilitation provides intensive physical training for improving sensorimotor function of the hand. Many researchers have extensively studied the effects of robot-assisted therapy (RT) on the paretic upper limb in patients with stroke in recent years. However, few studies have examined the effect of RT on hand function in children with CP. The purpose of this study is to investigate the effectiveness of Gloreha Sinfonia, a robotic device with a dynamic arm support system focused mainly on distal upper-limb training, in improving hand function and ADLs in children with CP. Methods: Seven children with moderate CP were recruited for this case series study. RT using Gloreha Sinfonia was performed in 2 sessions per week, 60 min per session, for 6 consecutive weeks (12 sessions in total). Outcome measures, including the Fugl-Meyer Assessment-upper extremity (FMA-UE), the Box and Block Test, the electromyographic activity of the extensor digitorum communis (EDC) and brachioradialis (BR) muscles, a grip dynamometer for motor evaluation, and the ABILHAND-Kids for measuring manual ability in daily activities, were administered at baseline, after 12 sessions (end of treatment), and at the 1-month follow-up. Results: After 6 weeks of robot-assisted treatment of hand function, there were significant increases in FMA-UE shoulder/elbow scores (p=0.002), FMA-UE wrist/hand scores (p=0.002), and FMA-UE total scores (p=0.002). There were also significant improvements in the BR mean value (p=0.015) and the electrical agonist-antagonist muscle ratio (p=0.041) in a 1-inch cube grasping task. These gains were maintained for a month after the end of the intervention.
Conclusion: RT using Gloreha Sinfonia for hand function training may contribute to improved upper extremity function and more efficient recruitment of the BR muscle in children with CP. The results were maintained one month after the intervention.

Keywords: activities of daily living, cerebral palsy, hand function, robotic rehabilitation

Procedia PDF Downloads 100
350 Enhanced Furfural Extraction from Aqueous Media Using Neoteric Hydrophobic Solvents

Authors: Ahmad S. Darwish, Tarek Lemaoui, Hanifa Taher, Inas M. AlNashef, Fawzi Banat

Abstract:

This research reports a systematic top-down approach for designing neoteric hydrophobic solvents –particularly deep eutectic solvents (DESs) and ionic liquids (ILs)– as furfural extractants from aqueous media for sustainable biomass conversion. The first stage of the framework entailed screening 32 neoteric solvents to determine their efficacy against toluene, the application's conventional benchmark. The selection criteria for the best solvents encompassed not only their efficiency in extracting furfural but also low viscosity and minimal toxicity; for the DESs, their natural origins, availability, and biodegradability were also taken into account. From the screening pool, two neoteric solvents were selected: thymol:decanoic acid 1:1 (Thy:DecA) and trihexyltetradecylphosphonium bis(trifluoromethylsulfonyl)imide [P₁₄,₆,₆,₆][NTf₂]. These solvents outperformed the toluene benchmark, achieving efficiencies of 94.1% and 97.1%, respectively, compared to toluene's 81.2%, while also possessing the desired properties. They were then characterized thoroughly in terms of their physical, thermal, and critical properties and cross-contamination solubilities. The selected neoteric solvents were then extensively tested under various operating conditions and exhibited exceptionally stable performance, maintaining high efficiency across a broad range of temperatures (15–100 °C), pH levels (1–13), and furfural concentrations (0.1–2.0 wt%), with a remarkable equilibrium time of only 2 minutes and, most notably, high efficiencies even at low solvent-to-feed ratios. The durability of the neoteric solvents was also validated as stable over multiple extraction-regeneration cycles, with limited leachability to the aqueous phase (≈0.1%).
Moreover, the extraction performance of the solvents was modeled through machine learning, specifically multiple non-linear regression (MNLR) and artificial neural networks (ANN). The models demonstrated high accuracy, indicated by their low absolute average relative deviations: 2.74% and 2.28% for Thy:DecA and [P₁₄,₆,₆,₆][NTf₂], respectively, using MNLR, and 0.10% for Thy:DecA and 0.41% for [P₁₄,₆,₆,₆][NTf₂] using ANN, highlighting the significantly enhanced predictive accuracy of the ANN. The neoteric solvents presented herein offer noteworthy advantages over traditional organic solvents, including high efficiency in both extraction and regeneration, stability, and minimal leachability, making them particularly suitable for applications involving aqueous media. Moreover, these solvents are more environmentally friendly, incorporating renewable and sustainable components like thymol and decanoic acid. The exceptional efficacy of the newly developed neoteric solvents signifies a significant advancement, providing a green and sustainable alternative for furfural production from biowaste.
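The model-accuracy figures quoted above are absolute average relative deviations (AARD) between measured and predicted efficiencies. A minimal sketch of the metric, with illustrative numbers rather than the study's data:

```python
# Absolute average relative deviation (AARD, in %) between measured and
# model-predicted extraction efficiencies. Sample values are invented.

def aard_percent(measured, predicted):
    n = len(measured)
    return 100.0 / n * sum(abs((p - m) / m) for m, p in zip(measured, predicted))

measured  = [94.1, 95.0, 96.2]   # stand-in experimental efficiencies (%)
predicted = [93.0, 95.5, 96.0]   # stand-in model predictions (%)
print(round(aard_percent(measured, predicted), 2))
```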

Keywords: sustainable biomass conversion, furfural extraction, ionic liquids, deep eutectic solvents

Procedia PDF Downloads 47
349 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger data sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We decided to focus on acute lymphocytic leukemia, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results of this work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 images in total, 8491 of which show abnormal cells and 5398 of which are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches. The proposed diagnostic system detects and classifies leukemia. Different from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50.
Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture that employs transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be treated as auxiliary features and lead to a further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to improve the discriminative capability of intermediate features and to mitigate the problem of vanishing or exploding gradients. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 168
348 Self-Supervised Learning for Hate-Speech Identification

Authors: Shrabani Ghosh

Abstract:

Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only practical alternative. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised setting has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers such as BERT and RoBERTa are fine-tuned to perform text classification, and can be further pre-trained on masked language modeling (MLM) tasks in an unsupervised manner. In previous work, hate speech detection was explored on Gab.ai, a free-speech platform described as hosting extremist content in varying degrees. In the domain adaptation process, Twitter data is used as the source domain and Gab data as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures, such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL, have been used to estimate domain similarity. Naturally, in-domain distances are small, while between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned on a mixture of source- and target-domain posts gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while its out-of-domain accuracy on Gab data drops to 56.53%. Recently, self-supervised learning has received much attention, as it is more applicable when labeled data are scarce.
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words during training. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. The framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and with optimized outcomes obtained from different optimization techniques.
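One of the domain-similarity measures named above, Maximum Mean Discrepancy (MMD), can be sketched in a few lines. The RBF kernel, bandwidth, and random "embeddings" below are placeholders standing in for real Twitter/Gab text representations:

```python
import numpy as np

# Biased MMD^2 estimate with an RBF kernel: in-domain distance should be
# small, between-domain distance larger. Data are random stand-ins.

def rbf(a, b, gamma=1.0):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * (d ** 2).sum(-1))

def mmd2(x, y, gamma=1.0):
    return rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean() - 2 * rbf(x, y, gamma).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (50, 16))   # stand-in source-domain (Twitter) embeddings
target = rng.normal(0.5, 1.0, (50, 16))   # shifted target-domain (Gab) embeddings
print(mmd2(source, source) < mmd2(source, target))   # True: in-domain distance smaller
```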

Keywords: attention learning, language model, offensive language detection, self-supervised learning

Procedia PDF Downloads 87
347 Preparation and Characterization of CO-Tolerant Electrocatalyst for PEM Fuel Cell

Authors: Ádám Vass, István Bakos, Irina Borbáth, Zoltán Pászti, István Sajó, András Tompos

Abstract:

Important requirements for the anode-side electrocatalysts of polymer electrolyte membrane (PEM) fuel cells are CO-tolerance, stability, and corrosion resistance. Carbon is still the most common material for electrocatalyst supports due to its low cost, high electrical conductivity, and high surface area, which can ensure good dispersion of the Pt. However, carbon degrades at higher potentials, which causes problems during application. It is therefore important to explore alternative support materials with improved stability. Molybdenum oxide can improve the CO-tolerance of Pt/C catalysts, but it is prone to leaching in acidic electrolyte. The Mo was stabilized by isovalent substitution of molybdenum into the rutile-phase titanium dioxide lattice, achieved by a modified multistep sol-gel synthesis method optimized for the preparation of a Ti0.7Mo0.3O2-C composite. A high degree of Mo incorporation into the rutile lattice was achieved. The conductivity and corrosion resistance across the anticipated potential/pH window were ensured by the mixed oxide – activated carbon composite. Platinum loading was carried out using NaBH4 and ethylene glycol; the platinum content was 40 wt%. The electrocatalyst was characterized both by materials investigation methods (XRD, TEM, EDS, and XPS) and by electrochemical methods (cyclic voltammetry, COads stripping voltammetry, and hydrogen oxidation on a rotating disc electrode). The electrochemical activity of the sample was compared to commercial 40 wt% Pt/C (Quintech) and PtRu/C (Quintech, Pt = 20 wt%, Ru = 10 wt%) references. Enhanced CO tolerance of the electrocatalyst prepared using the Ti0.7Mo0.3O2-C composite was evidenced by the appearance of a CO-oxidation-related 'pre-peak' and by the pronounced shift of the maximum of the main CO oxidation peak towards less positive potential compared to Pt/C. Fuel cell polarization measurements were also carried out using a Bio-Logic and Paxitech FCT-150S test device.
All details on the design, preparation, characterization, and testing, by both electrochemical measurements and the fuel cell test device, of the electrocatalyst supported on the Ti0.7Mo0.3O2-C composite material will be presented and discussed.

Keywords: anode electrocatalyst, composite material, CO-tolerance, TiMoOx

Procedia PDF Downloads 271
346 Breast Cancer Sensing and Imaging Utilized Printed Ultra Wide Band Spherical Sensor Array

Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah

Abstract:

A high-precision printed microwave sensor for sensing and monitoring potential breast cancer in women's breast tissue was numerically optimized. The single UWB printed sensor element, successfully modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a woman's bra to form a spherical sensor array. One sample of the UWB microwave sensor obtained through numerical computation and optimization was chosen for fabrication. In total, the spherical sensor array consists of twelve stair patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss. The comparison of the S11 profiles of all UWB sensor elements is discussed. The constructed UWB sensor was verified using HFSS simulation, CST simulation, and experimental measurement. Numerically, both HFSS and CST confirmed a potential operating bandwidth of roughly 4.5 GHz. However, the measured bandwidth was about 1.2 GHz due to technical difficulties in the manufacturing step. The implemented UWB microwave sensing and monitoring system consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) serving as the transceiver and signal processing part, and a desktop PC or laptop acting as the image processing and display unit. In practice, all the reflected power collected from the whole surface of an artificial breast model is grouped into a number of pixel color classes positioned at the corresponding rows and columns (pixel numbers). The total number of power pixels in the 2D imaging process was set to 100 (a 10x10 power distribution). This was determined by considering the total area of a breast phantom of average Asian breast size and the physical dimensions of a single UWB sensor.
The resulting microwave images are plotted, and some technical problems that arose in developing the breast sensing and monitoring system are examined in the paper.
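The imaging step described above, binning reflected-power readings into a 10x10 grid of pixel color classes, can be sketched as follows. The power values, units, and number of classes are invented for illustration; only the 10x10 grid comes from the abstract.

```python
import numpy as np

# Hypothetical sketch: map a 10x10 grid of reflected-power readings to a small
# number of pixel intensity classes for 2D display.

rng = np.random.default_rng(1)
power_dbm = rng.uniform(-60, -30, (10, 10))    # stand-in reflected-power map (dBm)

# Bin the powers into 5 intensity classes between the observed min and max.
levels = np.linspace(power_dbm.min(), power_dbm.max(), 6)
pixel_classes = np.digitize(power_dbm, levels[1:-1])   # class indices 0..4

print(pixel_classes.shape, pixel_classes.min(), pixel_classes.max())
```

Each class would then be mapped to a color for display; anomalously high reflection in a region would flag tissue of interest.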

Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging

Procedia PDF Downloads 172
345 Importance of an E-Learning Program in Stress Field for Postgraduate Courses of Doctors

Authors: Ramona-Niculina Jurcau, Ioana-Marieta Jurcau

Abstract:

Background: Preparation in the stress field (SF) is increasingly a concern for doctors of different specialties. Aims: The aim was to evaluate the importance of an e-learning program in SF for doctors' postgraduate courses. Methods: Doctors (n = 40 male, 40 female) of different specialties and ages (31-71 years), who attended postgraduate courses in SF, voluntarily responded to a questionnaire that included the following themes: the importance of SF courses for the specialty practiced by each respondent (using a visual analogue scale, VAS); which SF themes would be suitable as e-learning (EL); the preferred form of assimilating SF information: classical lectures (CL), EL, or a combination of these methods (CL+EL); which information in the SF course is facilitated by the EL model versus CL; and, in their view, the first four advantages and first four disadvantages of EL compared to CL for SF. Results: For most respondents, the SF courses are important for the specialty they practice (average VAS score of 4). The SF themes suggested for EL were: stress mechanisms; stress factor models for different medical specialties; stress assessment methods; and primary stress management methods for different specialties. The preferred form of information assimilation was CL+EL. Aspects of the course facilitated by EL versus the CL model were: active reading of theoretical information, with fast access to keyword details; watching documentaries in each person's preferred order; and practicing through tests with rapid checking of results. The first four EL advantages mentioned for SF were: autonomy in managing the time allocated to study; saving the time needed to travel to the venue; the ability to read information in various contexts of time and space; and communication with colleagues at times convenient for everyone.
The first three EL disadvantages mentioned for SF were: it reduces opportunities for group discussion and mobilization for active participation; access to EL information may depend on an electrical source and/or the Internet; and learning may slow down through the temptation to postpone. Answers were partially influenced by the respondent's age and gender. Conclusions: 1) Postgraduate courses in SF are of interest to doctors of different specialties. 2) The majority of participating doctors preferred EL, but combined with CL (CL+EL). 3) Preference for EL was manifested mainly by young or middle-aged male doctors. 4) It is important to find the right formula for EL so that it is as efficient, interesting, useful, and agreeable as possible.

Keywords: stress field, doctors’ postgraduate courses, classical lectures, e-learning lecture

Procedia PDF Downloads 214
344 Measuring Biobased Content of Building Materials Using Carbon-14 Testing

Authors: Haley Gershon

Abstract:

The transition from fossil fuel-based building materials to eco-friendly, biobased formulations plays a key role in sustainable building. The growing global demand for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content in a material's ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Any fossil material older than 50,000 years will not contain any measurable carbon-14. The radiocarbon method is thus used to determine the amount of carbon-14 present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method based on radiocarbon dating and developed specifically for biobased content determination of materials in solid, liquid, or gaseous form. Samples are combusted, converted into solid graphite, pressed onto a metal disc, and mounted onto the wheel of an accelerator mass spectrometer (AMS) for analysis; the AMS instrument counts the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of the ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients coming from biomass-sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products 100% sourced from plants, animals, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased.
Any result in between 0% and 100% biobased indicates that there is a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, stating third-party verification and displaying a product’s percentage of biobased content. The USDA program includes a specific category for Building Materials. In order to qualify for the biobased certification under this product category, examples of product criteria that must be met include minimum 62% biobased content for wall coverings, minimum 25% biobased content for lumber, and a minimum 91% biobased content for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.
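The fossil-versus-biomass logic above amounts to a simple proportion: fossil carbon contributes no carbon-14, so the sample's percent modern carbon (pMC) relative to a present-day biomass reference gives the biobased fraction. A simplified sketch, assuming a reference value of 100 pMC (the standards apply an atmospheric correction factor that varies slightly over time):

```python
# Simplified percent-biobased calculation from a carbon-14 (pMC) measurement.
# The modern reference value of 100 is an assumption for illustration.

def percent_biobased(sample_pmc, modern_reference_pmc=100.0):
    """Fossil carbon contributes 0 pMC; fully modern biomass ~= the reference."""
    return min(100.0, 100.0 * sample_pmc / modern_reference_pmc)

print(percent_biobased(0.0))    # 0.0  -> fully fossil-derived
print(percent_biobased(62.0))   # 62.0 -> e.g. meets the wall-covering minimum
```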

Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials

Procedia PDF Downloads 144
343 Superordinated Control for Increasing Feed-in Capacity and Improving Power Quality in Low Voltage Distribution Grids

Authors: Markus Meyer, Bastian Maucher, Rolf Witzmann

Abstract:

The ever-increasing amount of distributed generation in low voltage distribution grids (mainly PV and micro-CHP) can lead to reverse load flows from low to medium/high voltage levels at times of high feed-in. Reverse load flow leads to rising voltages that may even exceed the limits specified in the grid codes. Furthermore, the share of electrical loads connected to low voltage distribution grids via switched-mode power supplies continuously increases. In combination with inverter-based feed-in, this results in high harmonic levels that reduce overall power quality. Especially high levels of third-order harmonic currents can lead to neutral conductor overload, which is even more critical if lines with reduced neutral conductor cross-sections are used. This paper illustrates a possible concept for smart grids to increase the feed-in capacity, improve power quality, and ensure safe operation of low voltage distribution grids at all times. The key feature of the concept is a hierarchically structured control strategy that runs on a superordinated controller, which is connected to several distributed grid analyzers and inverters via broadband powerline (BPL). The strategy is devised to ensure both quick response times and the technically and economically reasonable use of the available inverters in the grid (PV inverters, batteries, stepless line voltage regulators). These inverters are provided with standard features for voltage control, e.g. voltage-dependent reactive power control. In addition, they can receive reactive power set points transmitted by the superordinated controller. To further improve power quality, the inverters are capable of active harmonic filtering as well as voltage balancing, the latter being done primarily by the stepless line voltage regulators.
By additionally connecting the superordinated controller to the control center of the grid operator, supervisory control and data acquisition capabilities for the low voltage distribution grid are enabled, which allows easy monitoring and manual input. Such a low voltage distribution grid can also be used as a virtual power plant.
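The voltage-dependent reactive power control mentioned above is commonly implemented as a piecewise-linear Q(U) droop: inverters absorb reactive power when the local voltage rises above a deadband and inject it when the voltage sags. The curve points below are typical example values, not values from the paper:

```python
# Illustrative Q(U) droop characteristic in per-unit terms. Deadband and
# droop-band voltages are assumed example values.

def q_of_u(u_pu, q_max=0.3):
    """Reactive power set point (p.u., + = inject) as a function of local voltage."""
    if u_pu <= 0.94:
        return q_max                           # full injection at low voltage
    if u_pu < 0.98:
        return q_max * (0.98 - u_pu) / 0.04    # linear ramp down to deadband
    if u_pu <= 1.02:
        return 0.0                             # deadband around nominal voltage
    if u_pu < 1.06:
        return -q_max * (u_pu - 1.02) / 0.04   # linear ramp into absorption
    return -q_max                              # full absorption at high voltage

print(q_of_u(1.00), round(q_of_u(1.04), 3), q_of_u(1.08))
```

In the concept described, the superordinated controller could override this local curve by transmitting explicit set points over BPL.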

Keywords: distributed generation, distribution grid, power quality, smart grid, virtual power plant, voltage control

Procedia PDF Downloads 248
342 Ultra-High Molecular Weight Polyethylene (UHMWPE) for Radiation Dosimetry Applications

Authors: Malik Sajjad Mehmood, Aisha Ali, Hamna Khan, Tariq Yasin, Masroor Ikram

Abstract:

Ultra-high molecular weight polyethylene (UHMWPE) is a polymer of the polyethylene (PE) family with the monomer unit –CH2– and an average molecular weight of approximately 3-6 million g/mol. Due to its chemical, mechanical, physical, and biocompatible properties, it has been extensively used in electrical insulation, medicine, orthopedics, microelectronics, engineering, chemistry, the food industry, etc. To alter or modify the properties of UHMWPE for a particular application of interest, various procedures are in practice, e.g. treating the material with high-energy irradiation such as gamma rays, e-beams, and ion bombardment. Radiation treatment of UHMWPE induces free radicals within its matrix, and these free radicals are the precursors of chain scission, chain accumulation, formation of double bonds, molecular emission, crosslinking, etc. All the aforementioned physical and chemical processes are mainly responsible for the modification of polymer properties for particular applications, e.g. fabricating LEDs, optical sensors, antireflective coatings, and polymeric optical fibers, and, most importantly, for radiation dosimetry. Therefore, to check the feasibility of using UHMWPE for radiation dosimetry applications, compressed sheets of UHMWPE were irradiated at room temperature (~25°C) to total doses of 30 kGy and 100 kGy, respectively, while one sheet was kept unirradiated as a reference. Transmittance data (from 400 nm to 800 nm) of the e-beam irradiated UHMWPE and its hybrids were measured using a Mueller matrix spectro-polarimeter. Significant changes occur in the absorption behavior of the irradiated samples. To analyze these radiation-induced changes in the polymer matrix, the Urbach edge method and the modified Tauc equation were used. The results reveal that the optical activation energy decreases with irradiation.
The activation energies are 2.85 meV, 2.48 meV, and 2.40 meV for the control, 30 kGy, and 100 kGy samples, respectively. The direct and indirect energy band gaps were also found to decrease with irradiation due to variation of C=C unsaturation in clusters. We believe that the reported results open new horizons for radiation dosimetry applications.
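The two analyses named above can be illustrated on synthetic data: the Urbach edge assumes ln(α) is linear in photon energy below the gap (slope = 1/E_u), while the Tauc relation assumes (αhν)^(1/n) is linear in hν above it, with the intercept giving the band gap E_g (n = 1/2 for direct, 2 for indirect transitions). All numbers below are illustrative, not the study's data:

```python
import numpy as np

# Recover an Urbach energy and a Tauc band gap from a synthetic absorption
# spectrum built with known parameters (E_g = 3.0 eV, E_u = 0.05 eV).

hv = np.linspace(2.0, 4.0, 50)                       # photon energy (eV)
E_g_true, E_u_true = 3.0, 0.05
alpha = np.where(hv > E_g_true,
                 1e4 * (hv - E_g_true) ** 2 / hv,    # Tauc region (indirect, n = 2)
                 np.exp((hv - E_g_true) / E_u_true)) # Urbach exponential tail

# Urbach energy: slope of ln(alpha) vs hv in the sub-gap tail
tail = hv < E_g_true
slope, _ = np.polyfit(hv[tail], np.log(alpha[tail]), 1)
print(round(1.0 / slope, 3))                         # ~0.05 eV recovered

# Band gap: linear fit of (alpha*hv)^(1/2) vs hv above the gap, x-intercept = E_g
band = hv > E_g_true + 0.2
m, c = np.polyfit(hv[band], (alpha[band] * hv[band]) ** 0.5, 1)
print(round(-c / m, 2))                              # ~3.0 eV recovered
```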

Keywords: electron beam, radiation dosimetry, Tauc’s equation, UHMWPE, Urbach method

Procedia PDF Downloads 392
341 Estimation of Carbon Losses in Rice: Wheat Cropping System of Punjab, Pakistan

Authors: Saeed Qaisrani

Abstract:

The study was conducted to observe carbon and nutrient losses from burning rice residues in the rice-wheat cropping system. The rice crop was harvested, and the experiment was laid out in a randomized complete block design (RCBD) with a factorial arrangement and 4 replications, with a net plot size of 10 m x 20 m. Rice stubbles were managed by two methods: incorporation and burning of the rice residues. Soil samples were taken to a depth of 30 cm before sowing and after harvesting of wheat. Wheat was sown after rice harvest using three practices, conventional tillage, minimum tillage, and zero tillage, to identify the best tillage practice. Laboratory and field experiments on wheat assessed the best tillage practice and residue management method, together with an estimation of carbon losses. Data on the following parameters were recorded to check wheat quality and ensure food security in the region: establishment count, plant height, spike length, number of grains per spike, biological yield, fat content, carbohydrate content, protein content, and harvest index. Soil physico-chemical analyses (pH, electrical conductivity, organic matter, nitrogen, phosphorus, potassium, and carbon) were done in the soil fertility laboratory. Substantial results were found for the growth, yield, and related parameters of the wheat crop. The collected data were examined statistically, with an economic analysis to estimate the cost-benefit ratio of the different tillage techniques and residue management practices. The results showed that the zero tillage method has positive impacts on the growth, yield, and quality of wheat and is also cost-effective. Similarly, incorporation is a suitable and beneficial method for the soil, since it provides more nutrients and reduces the need for fertilizers. Burning of rice stubbles has negative impacts, including air pollution, nutrient loss, death of soil microbes, and carbon loss. Zero tillage technology is recommended to reduce carbon losses and support food security in Pakistan.

Keywords: agricultural agronomy, food security, carbon sequestration, rice-wheat cropping system

Procedia PDF Downloads 259
340 Effective Doping Engineering of Na₃V₂(PO₄)₂F₃ as a High-Performance Cathode Material for Sodium-Ion Batteries

Authors: Ramon Alberto Paredes Camacho, Li Lu

Abstract:

Sustainable batteries are possible through the development of cheaper and greener alternatives, the most feasible of which are epitomized by sodium-ion batteries (SIB). Na₃V₂(PO₄)₂F₃ (NVPF), an important member of the Na-superionic-conductor (NASICON) family of materials, has recently been in the spotlight due to its interesting electrochemical properties when used as a cathode, namely a high specific capacity of 128 mA h g⁻¹, a high energy density of 507 W h kg⁻¹, a high working potential at which the vanadium redox couples can be activated (with an average value around 3.9 V), and a small volume variation of less than 2%. These traits grant NVPF an excellent perspective as a cathode material for the next generation of sodium batteries. Unfortunately, because of its low inherent electrical conductivity and a high energy barrier that impedes the mobilization of all the available Na ions per formula unit, the overall electrochemical performance suffers substantial degradation, ultimately obstructing its industrial use. Many approaches have been developed to remedy these issues, among which nanostructural design, carbon coating, and ion doping are the most effective. This investigation focuses on enhancing the electrochemical response of NVPF by doping metal ions into the crystal lattice, substituting vanadium atoms. A facile sol-gel process is employed, with citric acid as the chelator and the carbon source. The optimized conditions circumvent fluorine sublimation, ensuring the material's purity. One of the reasons behind the large ionic improvement is the attraction of extra Na ions into the crystalline structure due to a charge imbalance produced by the valence of the doped ions (+2), which is lower than that of vanadium (+3). Superior stability (higher than 90% at a current density of 20C) and capacity retention at an extremely high current density of 50C are demonstrated by our doped NVPF. The material also retains high capacity values at low and high temperatures.
In addition, a full NVPF//hard carbon cell shows high capacity and stability at −20 and 60 °C. Our doping strategy proves to significantly increase the ionic and electronic conductivity of NVPF even under extreme conditions, delivering outstanding electrochemical performance and paving the way for advanced high-potential cathode materials.
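The quoted 128 mA h g⁻¹ is consistent with the theoretical two-sodium capacity of NVPF. A quick back-of-the-envelope check (the molar masses are standard atomic-weight values, not figures from the abstract):

```python
# Sanity check of the theoretical capacity of Na3V2(PO4)2F3,
# assuming extraction of 2 Na+ per formula unit (the usual assumption for NVPF).
F_CONST = 96485.0  # Faraday constant, C/mol

# Standard atomic masses, g/mol
masses = {"Na": 22.990, "V": 50.942, "P": 30.974, "O": 15.999, "F": 18.998}
# Na3 V2 (PO4)2 F3  ->  Na3 V2 P2 O8 F3
molar_mass = (3 * masses["Na"] + 2 * masses["V"] + 2 * masses["P"]
              + 8 * masses["O"] + 3 * masses["F"])

n_electrons = 2  # two Na+ extracted per formula unit
capacity_mAh_g = n_electrons * F_CONST / (3.6 * molar_mass)  # mA h g^-1
energy_Wh_kg = capacity_mAh_g * 3.9                          # at the ~3.9 V average potential

print(f"M ≈ {molar_mass:.1f} g/mol")
print(f"theoretical capacity ≈ {capacity_mAh_g:.0f} mA h g⁻¹")  # ≈ 128
print(f"energy density ≈ {energy_Wh_kg:.0f} W h kg⁻¹")
```

The result (~128 mA h g⁻¹ and ~500 W h kg⁻¹) matches the values reported in the abstract within rounding.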

Keywords: sodium-ion batteries, cathode materials, NASICON, Na₃V₂(PO₄)₂F₃, ion doping

Procedia PDF Downloads 33
339 Analysis of Digital Transformation in Banking: The Hungarian Case

Authors: Éva Pintér, Péter Bagó, Nikolett Deutsch, Miklós Hetényi

Abstract:

The process of digital transformation has a profound influence on all sectors of the worldwide economy and the business environment. The influence of blockchain technology can be observed in the digital economy and e-government, rendering it an essential element of a nation's growth strategy. The banking industry is experiencing significant expansion and development of financial technology firms. Utilizing developing technologies such as artificial intelligence (AI), machine learning (ML), and big data (BD), these entrants are offering more streamlined financial solutions, promptly addressing client demands, and presenting a challenge to incumbent institutions. The advantages of digital transformation are evident in the corporate realm, and firms that resist its adoption put their survival at risk. The advent of digital technologies has revolutionized the business environment, streamlining processes and creating opportunities for enhanced communication and collaboration. Thanks to the aid of digital technologies, businesses can now swiftly and effortlessly retrieve vast quantities of information, all the while accelerating the process of creating new and improved products and services. Big data analytics is generally recognized as a transformative force in business, considered the fourth paradigm of science, and seen as the next frontier for innovation, competition, and productivity. Big data, an emerging technology that is shaping the future of the banking sector, offers numerous advantages to banks. It enables them to effectively track consumer behavior and make informed decisions, thereby enhancing their operational efficiency. Banks may embrace big data technologies to promptly and efficiently identify fraud, as well as gain insights into client preferences, which can then be leveraged to create better-tailored products and services. 
Moreover, the utilization of big data technology empowers banks to develop more intelligent and streamlined models for accurately recognizing and targeting the suitable clientele with pertinent offers. There is a scarcity of research on big data analytics in the banking industry, with the majority of existing studies only examining the advantages and prospects associated with big data. Although big data technologies are crucial, there is a dearth of empirical evidence about the role of big data analytics (BDA) capabilities in bank performance. This research addresses a gap in the existing literature by introducing a model that combines the resource-based view (RBV), the technology-organization-environment (TOE) framework, and dynamic capability theory (DC). This study investigates the influence of big data analytics (BDA) utilization on the performance of market and risk management, supported by a comparative examination of Hungarian mobile banking services.

Keywords: big data, digital transformation, dynamic capabilities, mobile banking

Procedia PDF Downloads 30
338 An Investigation on the Sandwich Panels with Flexible and Toughened Adhesives under Flexural Loading

Authors: Emre Kara, Şura Karakuzu, Ahmet Fatih Geylan, Metehan Demir, Kadir Koç, Halil Aykul

Abstract:

Material selection in the design of sandwich structures is a crucial aspect because of the positive or negative influence of the base materials on the mechanical properties of the entire panel. In the literature, it has been shown that the selection of the skin and core materials plays a very important role in the behavior of the sandwich. Besides this, the use of the correct adhesive can make the whole structure show better mechanical results and behavior. Accordingly, the sandwich structures realized in this study were obtained by combining an aluminum foam core and three different glass fiber reinforced polymer (GFRP) skins, using two commercial adhesives based on flexible polyurethane and toughened epoxy. Static and dynamic tests have already been applied to sandwiches with different types of adhesives. In the present work, static three-point bending tests were performed on sandwiches having an aluminum foam core with a thickness of 15 mm, skins with three different types of fabrics ([0°/90°] cross-ply E-Glass biaxial stitched, [0°/90°] cross-ply E-Glass woven, and [0°/90°] cross-ply S-Glass woven, all with the same thickness of 1.75 mm), and the two commercial adhesives (flexible polyurethane and toughened epoxy based), at different support span distances (L = 55, 70, 80, 125 mm), with the aim of analyzing their flexural performance. The skins used in the study were produced via the Vacuum Assisted Resin Transfer Molding (VARTM) technique and were easily bonded onto the aluminum foam core with the flexible and toughened adhesives under very low pressure, using a press machine with alignment tabs matching the total thickness of the whole panel. The main results of the flexural loading are: force-displacement curves obtained from the bending tests, peak force values, absorbed energy, collapse mechanisms, adhesion quality, and the effect of the support span length and adhesive type.
The experimental results showed that the sandwiches with the epoxy-based toughened adhesive and the skins made of S-Glass woven fabric exhibited the best adhesion quality and mechanical properties. The sandwiches with the toughened adhesive exhibited higher peak force and energy absorption values than the sandwiches with the flexible adhesive. The core shear mode occurred in the sandwiches with the flexible polyurethane-based adhesive through the thickness of the core, while the same mode took place in the sandwiches with the toughened epoxy-based adhesive along the length of the core. The use of these sandwich structures can lead to a weight reduction of transport vehicles while providing adequate structural strength under operating conditions.
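For a panel of the geometry described above, the peak force in three-point bending is commonly converted into a face bending stress and a core shear stress via standard sandwich-beam formulas (as in, e.g., ASTM C393). The sketch below uses the study's geometry (15 mm core, 1.75 mm skins, L = 125 mm), but the width and peak load are hypothetical placeholders, not values from the study:

```python
# Face bending stress and core shear stress of a sandwich beam in
# three-point bending (thin-face approximation, ASTM C393 style).
def sandwich_stresses(P, L, b, t_f, c):
    """P: peak load [N], L: support span [mm], b: width [mm],
    t_f: face-sheet thickness [mm], c: core thickness [mm].
    Returns (face bending stress, core shear stress) in MPa."""
    d = c + t_f                               # distance between face-sheet centroids
    sigma_face = P * L / (4 * b * t_f * d)    # N/mm^2 = MPa
    tau_core = P / (2 * b * d)                # MPa
    return sigma_face, tau_core

# Geometry from the study; width b and peak load P are illustrative assumptions.
sigma, tau = sandwich_stresses(P=2000.0, L=125.0, b=50.0, t_f=1.75, c=15.0)
print(f"face stress ≈ {sigma:.1f} MPa, core shear ≈ {tau:.2f} MPa")
```

Because the core shear stress depends on the load but not on the span, testing at several spans (as done here) separates core-shear failure from skin-dominated bending failure.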

Keywords: adhesive and adhesion, aluminum foam, bending, collapse mechanisms

Procedia PDF Downloads 304
337 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used building material; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, security glazing has to be used according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure for classifying flat plates of different thicknesses from the point of view of human safety, using a pendulum with two tires and a 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, so the real stress distribution is not determined by it. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used 'in situ', with no account taken of load control or stiffness, and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with quite a large dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and often without total fracture of the glass plate. Newer standards, such as DIN 18008-4, allow an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Two working areas are now open: a) to define a standard for the 'in situ' test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution.
To work on both research lines, a laboratory setup that allows testing medium-size specimens with different boundary conditions has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as a reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been done using an aluminium plate of 1000 mm x 700 mm and 10 mm thickness, supported on four sides; three different substructure stiffness conditions are used. A detailed check of the dynamic stiffness and the behaviour of the plate is done with modal tests. The repeatability of the test and the reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
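The pendulum impact energy follows directly from the drop height (E = mgh). Assuming the EN 12600 classification drop heights of 190, 450, and 1200 mm for the 50 kg twin-tire impactor, the lowest class corresponds to roughly the 100 J level used in this laboratory procedure:

```python
# Impact energy of the EN 12600 twin-tire pendulum (50 kg mass) for its
# three nominal classification drop heights, via E = m * g * h.
m, g = 50.0, 9.81  # kg, m/s^2
for h_mm in (190, 450, 1200):
    e = m * g * h_mm / 1000.0  # drop height converted to metres -> energy in J
    print(f"drop height {h_mm:4d} mm -> impact energy ≈ {e:.0f} J")
```

The 190 mm height yields about 93 J, i.e. on the order of the 100 J impact level quoted above.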

Keywords: glass plates, human impact test, modal test, plate boundary conditions

Procedia PDF Downloads 291
336 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives and is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific conditions. Execution happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the finance, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Managing supply chains involves many issues, from the planning and coordination stages onward, which, due to their complexity, can be implemented in a smart contract on a blockchain. Manufacturing delays and limited visibility into third-party product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). In modern economic systems, products are handled by several providers before reaching customers. Information is exchanged between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process, and it travels more effectively when intermediaries are eliminated from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistics data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology in supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.
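The conditional-execution idea behind such contracts, i.e. terms expressed as machine-readable code whose outcome follows deterministically from the logged data, can be caricatured in plain Python. This is a toy illustration only, not an actual on-chain implementation; all names and thresholds are invented:

```python
# Toy sketch of a supply-chain smart contract's "terms in code": payment is
# released automatically only if the logged cold-chain temperatures stayed
# within the agreed range. Illustrative only; no blockchain machinery shown.
from dataclasses import dataclass, field

@dataclass
class ShipmentContract:
    t_min: float = 2.0   # agreed temperature window, °C (hypothetical)
    t_max: float = 8.0
    temperature_log: list = field(default_factory=list)

    def record(self, temp_c: float) -> None:
        # In a real deployment a sensor/oracle would write this on-chain.
        self.temperature_log.append(temp_c)

    def settle(self) -> str:
        # The "smart" part: the outcome follows deterministically from the data.
        ok = all(self.t_min <= t <= self.t_max for t in self.temperature_log)
        return "release payment" if ok else "refund buyer"

c = ShipmentContract()
for t in (4.1, 5.0, 6.3):
    c.record(t)
print(c.settle())  # all readings in range -> release payment
```

A real smart contract adds what this sketch lacks, namely tamper-resistant storage of the log and execution that no single party can override, which is precisely the coordination benefit discussed above.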

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 97
335 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods such as XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (convolutional neural networks)? Will DL become the universal tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 hyperparameters used in the Neurops. By varying these 2 hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine the 10 NICs with the functions AND, OR, and XOR, for a total number of combinations greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels.
The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of the other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison over several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
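The variable-to-image step described above can be sketched as follows: a criterion that returns a probability for one variable is evaluated over a sweep of two hyperparameters, and the resulting 2D matrix of probabilities is rendered as a grey-level tile. This is a hypothetical reconstruction from the abstract, with a stand-in scoring function, not the authors' code:

```python
# Hypothetical sketch of the DeepNIC variable-to-image idea: sweep two
# hyperparameters, evaluate a probability-valued criterion for one tabular
# variable at each setting, and map the matrix to 8-bit grey levels.
import numpy as np

def nic_image(score_fn, p1_values, p2_values):
    """score_fn(p1, p2) -> probability in [0, 1] for one tabular variable."""
    probs = np.array([[score_fn(p1, p2) for p2 in p2_values]
                      for p1 in p1_values])
    return (probs * 255).astype(np.uint8)  # pixel intensity ∝ probability

# Stand-in criterion; the real NICs come from the Random Forest of Perfect Trees.
toy_nic = lambda p1, p2: 1.0 / (1.0 + np.exp(-(p1 - p2)))

tile = nic_image(toy_nic, np.linspace(-3, 3, 64), np.linspace(-3, 3, 64))
print(tile.shape, tile.dtype)  # one grey-level tile per (variable, NIC) pair
```

In the method as described, ten such tiles (one per NIC) plus their AND/OR/XOR combinations would be assembled into the final per-variable image fed to the CNN.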

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 88