Search results for: image processing of electrical impedance tomography
782 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19
Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello
Abstract:
Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signaling systems. Among them, the system of mimic facial movements is worthy of attention. Basic emotion theorists have identified the existence of specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that would distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of having the lower half of the face covered by masks and, therefore, not investigable. Facial-emotional behavior is a good starting point for understanding: (1) the affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles; see, for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require the presence of an observer who encodes and categorizes behaviors) and the measurement of electrical "discharges" of contracting muscles (facial electromyography; EMG). However, the measuring system devised by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile. Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and the upper part of the face change depending on whether the subject wears a mask or not. We were able to identify specific alterations to the subjects' hand movement patterns and their upper face expressions while wearing masks compared to when not wearing them. We believe that finding correlations between how body language changes when our facial expressions are impaired can provide a better understanding of the link between facial and bodily non-verbal language.
Keywords: facial action coding system, COVID-19, masks, facial analysis
Procedia PDF Downloads 79
781 Exploring Polyphenolics Content and Antioxidant Activity of R. damascena Dry Extract by Spectroscopic and Chromatographic Techniques
Authors: Daniela Nedeltcheva-Antonova, Kamelia Getchovska, Vera Deneva, Stanislav Bozhanov, Liudmil Antonov
Abstract:
Rosa damascena Mill. (Damask rose) is one of the most important plants belonging to the Rosaceae family, with a long history of use in traditional medicine and as a valuable oil-bearing plant. Many pharmacological effects have been reported for this plant, including anti-inflammatory, hypnotic, analgesic, anticonvulsant, anti-depressant, anti-anxiety, antitussive, antidiabetic, laxative, prokinetic, and hepatoprotective activities, as well as relaxant effects on tracheal chains. Pharmacological studies have shown that the various health effects of R. damascena flowers can mainly be attributed to its large amount of polyphenolic components. Phenolics possess a wide range of pharmacological activities, such as antioxidant, free-radical scavenging, anticancer, anti-inflammatory, antimutagenic, and antidepressant effects, with flavonoids being the most numerous group of natural polyphenolic compounds. Given the technological process used in the production of rose concrete (solvent extraction of fresh rose flowers with non-polar solvents), it can be assumed that the resulting plant residue would be as rich in polyphenolics as the plant itself and could be used for the development of novel products with a promising health-promoting effect. Therefore, an optimisation of the extraction procedure for the by-product of rose concrete production was carried out. An assay of the extracts with respect to their total polyphenol and total flavonoid content was performed. HPLC analysis of quercetin and kaempferol, the two main flavonoids found in R. damascena, was also carried out. The preliminary results have shown that the flavonoid content in the rose extracts is comparable to that of green tea or Ginkgo biloba, and they could be used for the development of various products (food supplements, natural cosmetics, phyto-pharmaceutical formulations, etc.). The fact that they are derived from a by-product of industrial plant processing could add to the marketing value of the final products, in addition to the well-known reputation of products obtained from Bulgarian roses (R. damascena Mill.).
Keywords: gas chromatography-mass spectrometry, dry extract, flavonoids, Rosa damascena Mill
Procedia PDF Downloads 153
780 Integrating Wearable Textile Sensors and IoT for Continuous Electromyography Monitoring
Authors: Bulcha Belay Etana, Benny Malengier, Debelo Oljira, Janarthanan Krishnamoorthy, Lieva Vanlangenhove
Abstract:
Electromyography (EMG) is a technique used to measure the electrical activity of muscles. EMG can be used to assess muscle function in a variety of settings, including clinical, research, and sports medicine. The aim of this study was to develop a wearable textile sensor for EMG monitoring. The sensor was designed to be soft, stretchable, and washable, making it suitable for long-term use. The sensor was fabricated using a conductive thread material that was embroidered onto a fabric substrate. The sensor was then connected to a microcontroller unit (MCU) and a Wi-Fi-enabled module. The MCU was programmed to acquire the EMG signal and transmit it wirelessly to the Wi-Fi-enabled module. The Wi-Fi-enabled module then sent the signal to a server, where it could be accessed by a computer or smartphone. The sensor was able to successfully acquire and transmit EMG signals from a variety of muscles. The signal quality was comparable to that of commercial EMG sensors. The development of this sensor has the potential to improve the way EMG is used in a variety of settings. The sensor is soft, stretchable, and washable, making it suitable for long-term use. This makes it ideal for use in clinical settings, where patients may need to wear the sensor for extended periods of time. The sensor is also small and lightweight, making it ideal for use in sports medicine and research settings. The data for this study was collected from a group of healthy volunteers. The volunteers were asked to perform a series of muscle contractions while the EMG signal was recorded. The data was then analyzed to assess the performance of the sensor. The EMG signals were analyzed using a variety of methods, including time-domain analysis and frequency-domain analysis. The time-domain analysis was used to extract features such as the root mean square (RMS) and average rectified value (ARV). The frequency-domain analysis was used to extract features such as the power spectrum. The question addressed by this study was whether a wearable textile sensor could be developed that is soft, stretchable, and washable and that can successfully acquire and transmit EMG signals. The results of this study demonstrate that a wearable textile sensor can be developed that meets the requirements of being soft, stretchable, washable, and capable of acquiring and transmitting EMG signals. This sensor has the potential to improve the way EMG is used in a variety of settings.
Keywords: EMG, electrode position, smart wearable, textile sensor, IoT, IoT-integrated textile sensor
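As a rough illustration of the time- and frequency-domain features named above, the sketch below computes RMS, ARV, and a mean-frequency summary of the power spectrum from a raw EMG trace in Python. It is a minimal example under assumed parameters: the 1 kHz sampling rate and the synthetic input are placeholders, not the authors' acquisition settings.

```python
import numpy as np

def emg_features(signal, fs):
    """Common EMG features from a 1-D signal sampled at fs (Hz)."""
    rms = np.sqrt(np.mean(signal ** 2))        # root mean square
    arv = np.mean(np.abs(signal))              # average rectified value
    sig = signal - np.mean(signal)             # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig)) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    mnf = np.sum(freqs * spectrum) / np.sum(spectrum)  # mean frequency
    return {"RMS": rms, "ARV": arv, "MNF_Hz": mnf}

# Placeholder input; a real trace would come from the textile electrodes.
fs = 1000.0                                    # assumed sampling rate
emg = 0.1 * np.random.randn(int(fs))           # 1 s of surrogate signal
print(emg_features(emg, fs))
```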
Procedia PDF Downloads 75
779 Effect of Perceived Importance of a Task in the Prospective Memory Task
Authors: Kazushige Wada, Mayuko Ueda
Abstract:
In the present study, we reanalyzed lapse errors in the last phase of a job by re-counting near lapse errors and increasing the number of participants. We also examined the results of this study from the perspective of prospective memory (PM), which concerns future actions. This study was designed to investigate whether perceiving the importance of PM tasks caused lapse errors in the last phase of a job and to determine whether such errors could be explained from the perspective of PM processing. Participants (N = 34) conducted a computerized clicking task, in which they clicked on 10 figures that they had learned in advance, in 8 blocks of 10 trials. Participants were requested to click the check box in the start display of a block and to click the checking-off box in the finishing display. This task was a PM task. As a measure of PM performance, we counted the number of omission errors caused by forgetting to check off in the finishing display, which was defined as a lapse error. Perceived importance was manipulated by different instructions. Half the participants, in the highly important task condition, were instructed that checking off was very important because equipment would be overloaded if it were not done. The other half, in the not important task condition, were instructed only about the location and procedure for checking off. Furthermore, we controlled workload and the emotion of surprise to confirm the effect of demand capacity and attention. To manipulate emotions during the clicking task, we suddenly presented a photo of a traffic accident and the sound of a skidding car followed by an explosion. Workload was manipulated by requesting participants to press the 0 key in response to a beep. Results indicated too few forgetting-induced lapse errors to be analyzed. However, there was a weak main effect of the perceived importance of the check task on near lapse errors, in which the mouse moved to the "END" button before moving to the check box in the finishing display. In particular, the highly important task group showed more such near lapse errors than the not important task group. Neither surprise nor workload affected the occurrence of near lapse errors. These results imply that high perceived importance of PM tasks impairs task performance. On the basis of the multiprocess framework of PM theory, we suggest that PM task performance in this experiment relied not on monitoring PM tasks but on spontaneous retrieval.
Keywords: prospective memory, perceived importance, lapse errors, multiprocess framework of prospective memory
Procedia PDF Downloads 446
778 Ytterbium Advantages for Brachytherapy
Authors: S. V. Akulinichev, S. A. Chaushansky, V. I. Derzhiev
Abstract:
High dose rate (HDR) brachytherapy is a method of contact radiotherapy in which a single sealed source with an activity of about 10 Ci is temporarily inserted in the tumor area. The isotopes Ir-192 and, much less often, Co-60 are used as active material for such sources. The other type of brachytherapy, low dose rate (LDR) brachytherapy, implies the insertion of many permanent sources (up to 200) of lower activity. Pulse dose rate (PDR) brachytherapy can be considered a modification of HDR brachytherapy in which the single source is repeatedly introduced into the tumor region in a pulsed regime over several hours. The PDR source activity is of the order of one Ci, and the isotope Ir-192 is currently used for these sources. PDR brachytherapy is well recommended for the treatment of several tumors since, according to oncologists, it combines the medical benefits of both the HDR and LDR types of brachytherapy. One of the main problems hindering the progress of PDR brachytherapy is the shielding of the treatment area, since a longer stay in a shielded room is not comfortable enough for patients. The use of Yb-169 as an active source material is a way to resolve the shielding problem for PDR, as well as for HDR, brachytherapy. The isotope Yb-169 has an average photon emission energy of 93 keV and a half-life of 32 days. Compared to iridium and cobalt, this isotope has a significantly lower emission energy and therefore requires much lighter shielding. Moreover, the absorption cross section of different materials has a strong Z-dependence in that photon energy range. For example, the dose distributions of iridium and ytterbium behave quite similarly in water or in the body, but a heavier material such as lead absorbs ytterbium radiation much more strongly than iridium or cobalt radiation. A lead layer of only 2 mm is enough to reduce ytterbium radiation by a couple of orders of magnitude but is not enough to protect from iridium radiation. We have created an original facility to produce the starting stable isotope Yb-168 using the AVLIS laser technology. This facility makes it possible to raise the Yb-168 concentration up to 50% and consumes much less electrical power than alternative electromagnetic enrichment facilities. We also developed, in cooperation with the Institute of High Pressure Physics of RAS, a new technology for manufacturing high-density ceramic cores of ytterbium oxide. The ceramic density reaches the theoretical limits: 9.1 g/cm³ for the cubic phase of ytterbium oxide and 10 g/cm³ for the monoclinic phase. Source cores made from this ceramic have high mechanical strength and a glassy surface. The use of ceramics makes it possible to increase the source activity for fixed external source dimensions.
Keywords: brachytherapy, high and pulse dose rates, radionuclides for therapy, ytterbium sources
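The shielding figures quoted above can be checked with a simple narrow-beam (Beer-Lambert) attenuation estimate; the derivation below neglects buildup and uses only the numbers stated in the abstract (two orders of magnitude attenuation across 2 mm of lead):

```latex
I(x) = I_0\, e^{-\mu x}
\;\;\Rightarrow\;\;
\mu = \frac{-\ln(I/I_0)}{x} = \frac{\ln 100}{2\ \mathrm{mm}} \approx 2.3\ \mathrm{mm}^{-1},
\qquad
x_{1/2} = \frac{\ln 2}{\mu} \approx 0.3\ \mathrm{mm}.
```

A half-value layer of a few tenths of a millimeter of lead for the ~93 keV Yb-169 emission is consistent with the claim that a 2 mm layer suffices, whereas the higher-energy Ir-192 emission (several hundred keV on average) passes through such a layer largely unattenuated.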
Procedia PDF Downloads 491
777 Effects of Sensory Integration Techniques in Science Education of Autistic Students
Authors: Joanna Estkowska
Abstract:
Sensory integration methods are very useful and improve the daily functioning of autistic and mentally disabled children. Autism is a neurobiological disorder that impairs one's ability to communicate with and relate to others, as well as one's sensory system. Children with autism, even highly functioning kids, can find it difficult to process language with surrounding noise or smells. They are hypersensitive to things we can ignore, such as sights, sounds, and touch. Adolescents with highly functioning autism or Asperger Syndrome can study Science and Math, but the social aspect is difficult for them. Nature science is an area of study that attracts many of these kids. It is a systematic field in which the children can focus on one small aspect at a time and, by following fixed rules, arrive at an expected result. Sensory integration programs and systematic classroom observation are quantitative methods of measuring classroom functioning and behaviors from direct observations. These methods specify both the events and behaviors that are to be observed and how they are to be recorded. Our students with and without autism attended the lessons in the nature science classroom at the school and in the laboratory of the University of Science and Technology in Bydgoszcz. The aim of this study is to investigate the effects of sensory integration methods in teaching students with autism. The students were observed during experimental lessons in the classroom and in the laboratory. Their physical characteristics, sensory dysfunctions, and behavior in class were taken into consideration by comparing their similarities and differences. In the chemistry classroom, every autistic student is paired with a mentor from their school. In the laboratory, the children are expected to wear goggles, gloves, and a lab coat. The chemistry classes in the laboratory were held for four hours with a lunch break, and according to the assistants, the children were engaged the whole time. In the nature science classroom, the students are encouraged to use the interactive exhibition of chemical, physical, and mathematical models constructed by the author of this paper. Our students with and without autism attended the lessons in those laboratories. The teacher's goals are to assist the child in inhibiting and modulating sensory information and to support the child in processing a response to sensory stimulation.
Keywords: autism spectrum disorder, science education, sensory integration techniques, student with special educational needs
Procedia PDF Downloads 192
776 Development and Characterization of Expandable TPE Compounds for Footwear Applications
Authors: Ana Elisa Ribeiro Costa, Sónia Daniela Ferreira Miranda, João Pedro De Carvalho Pereira, João Carlos Simões Bernardo
Abstract:
Thermoplastic elastomers (TPEs) have been widely used in the footwear industry over the years. Recently, this industry has been requesting materials that can combine light weight and high abrasion resistance. Although there are blowing agents on the market to reduce weight, when these are incorporated into molten polymers during extrusion or injection molding, specific processing conditions (e.g., temperature and hydrodynamic stresses) are necessary to obtain good properties and an acceptable surface appearance in the final products. Therefore, it is a great advantage for the compounding industry to acquire compounds that already include the blowing agents, since these can be handled and processed under the same conditions as a conventional raw material. In this work, expandable TPE compounds, namely a TPU and a SEBS, incorporating blowing agents, were developed on a co-rotating modular twin-screw parallel extruder. Different blowing agents, such as thermo-expandable microspheres and an azodicarbonamide, were selected, and different screw configurations and temperature profiles were evaluated, since these parameters have a particular influence on inhibiting the expansion of the blowing agents. Furthermore, the incorporation percentages were varied in order to investigate their influence on the final product properties. After the extrusion of these compounds, expansion was tested in the injection process. The mechanical and physical properties were characterized by different analytical methods, such as tensile, flexural, and abrasion tests, hardness determination, and density measurement. Scanning electron microscopy (SEM) was also performed. It was observed that it is possible to incorporate the blowing agents into the TPEs without their expanding during the extrusion process; only upon reprocessing (injection molding) did the expansion of the agents occur. These results are corroborated by SEM micrographs, which show a good distribution of blowing agents in the polymeric matrices. The other experimental results showed good mechanical performance and a decrease in density (30% for SEBS and 35% for TPU). This study suggests that it is possible to develop optimized compounds for footwear applications (e.g., shoe soles) that will expand only during the injection process.
Keywords: blowing agents, expandable thermoplastic elastomeric compounds, low density, footwear applications
Procedia PDF Downloads 208
775 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding
Authors: Wenya Shu, Ilinca Stanciulescu
Abstract:
Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon; thus, cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions but introduces difficulties in mesh generation and leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with the result of the CZM with an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential, and polynomial, are considered. These studies indicate that the chosen shapes of the CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding
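For reference, a generic bilinear traction-separation law of the kind assumed above can be written as follows; the symbols (peak traction T_max, damage-onset separation δ₀, and final separation δ_f) are standard notation rather than values from the paper, and the three branches correspond to the three phases of the debonding process:

```latex
T(\delta) =
\begin{cases}
T_{\max}\, \dfrac{\delta}{\delta_0}, & 0 \le \delta \le \delta_0 \quad \text{(elastic loading)} \\[6pt]
T_{\max}\, \dfrac{\delta_f - \delta}{\delta_f - \delta_0}, & \delta_0 < \delta \le \delta_f \quad \text{(softening)} \\[6pt]
0, & \delta > \delta_f \quad \text{(complete debonding)}
\end{cases}
```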
Procedia PDF Downloads 129
774 12 Real Forensic Caseworks Solved by the DNA STR-Typing of Skeletal Remains Exposed to Extreme Environmental Conditions without the Conventional Bone Pulverization Step
Authors: Chiara Della Rocca, Gavino Piras, Andrea Berti, Alessandro Mameli
Abstract:
DNA identification of human skeletal remains plays a valuable role in the forensic field, especially in missing persons and mass disaster investigations. Hard tissues, such as bones and teeth, represent a very common kind of sample analyzed in forensic laboratories because they are often the only biological materials remaining. However, the major limitation of using these compact samples lies in the extremely time-consuming and labor-intensive treatment of grinding them into powder before proceeding with the conventional DNA purification and extraction steps. In this context, a DNA extraction assay called the TBone Ex kit (DNA Chip Research Inc.) was developed to digest bone chips without powdering. Here, we simultaneously analyzed bone and tooth samples that arrived at our police laboratory and belonged to 15 different forensic caseworks that occurred in Sardinia (Italy). A total of 27 samples were recovered from different scenarios and had been exposed to extreme environmental factors, including sunlight, seawater, soil, fauna, vegetation, and high temperature and humidity. The TBone Ex kit was used prior to the EZ2 DNA extraction kit on the EZ2 Connect Fx instrument (Qiagen), and high-quality autosomal and Y-chromosome STR profiles were obtained for 80% of the caseworks in an extremely short time frame. This study provides additional support for the use of the TBone Ex kit for digesting bone fragments/whole teeth as an effective alternative to pulverization protocols. We empirically demonstrated the effectiveness of the kit in processing multiple bone samples simultaneously, largely simplifying the DNA extraction procedure, and the good yield of recovered DNA for downstream genetic typing in highly compromised real forensic specimens. In conclusion, this study turns out to be extremely useful for forensic laboratories, from which the various actors of the criminal justice system – such as potential jury members, judges, defense attorneys, and prosecutors – require immediate feedback.
Keywords: DNA, skeletal remains, bones, TBone Ex kit, extreme conditions
Procedia PDF Downloads 46
773 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence
Authors: Mohammed Al Sulaimani, Hamad Al Manhi
Abstract:
With the development of Remote Sensing technology, the resolution of optical Remote Sensing images has greatly improved, and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, Remote Sensing has benefited greatly from deep learning, particularly Deep Convolutional Neural Networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of Remote Sensing and for solving various problems within different fields and applications. The use of Unmanned Aerial Systems for acquiring aerial photos has become widespread and preferred by most organizations to support their activities because of the high resolution and accuracy of the photos, which make the identification and detection of very small features much easier than with satellite images. This has opened a new era for deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of Machine Learning and Deep Learning to detect and extract oil leaks from flowlines (onshore) using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, to support early detection of these leaks and prevent the company from incurring losses from the leaks and, most importantly, environmental damage. Two different approaches using different DL methods are demonstrated. The first approach focuses on detecting the oil leaks from the raw aerial photos (not processed) using a deep learning model called Single Shot Detector (SSD). The model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting the oil leaks from ortho-mosaicked images (georeferenced images) by developing three deep learning models (MaskRCNN, U-Net, and PSP-Net classifiers). Post-processing is then performed to combine the results of these three deep learning models to achieve better detection and improved accuracy. Although a relatively small amount of data was available for training purposes, the trained DL models have shown good results in extracting the extent of the oil leaks and have achieved excellent and accurate detection.
Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems
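The abstract does not specify how the three segmentation outputs are combined; one plausible post-processing scheme is a per-pixel majority vote over the binary leak masks, sketched below in Python (function and variable names are illustrative only, not the authors' implementation):

```python
import numpy as np

def fuse_masks(masks, min_votes=2):
    """Per-pixel majority vote over binary segmentation masks.

    masks: list of HxW arrays (1 = predicted leak pixel).
    A pixel is kept when at least `min_votes` of the models agree.
    """
    votes = np.sum([m.astype(np.uint8) for m in masks], axis=0)
    return (votes >= min_votes).astype(np.uint8)

# e.g. fused = fuse_masks([maskrcnn_mask, unet_mask, pspnet_mask])
```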
Procedia PDF Downloads 34
772 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants. Stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem of the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible, i.e., when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off errors or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of using an implicit time integration scheme. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to some specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0, using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations. Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, which is proportional to a high-order Laplacian of u0 and which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
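Schematically, the regularized cost functional and the gradient-descent update described above take the following form (the regularization exponent k, weight λ, and step size α are left symbolic; the exact choices are those discussed in the text):

```latex
J(u_0) = \tfrac{1}{2}\,\big\lVert u_1(u_0) - v_1 \big\rVert^2
       + \tfrac{\lambda}{2}\,\big\lVert \Delta^{k} u_0 \big\rVert^2 ,
\qquad
u_0^{(n+1)} = u_0^{(n)} - \alpha\, \nabla_{u_0} J\big(u_0^{(n)}\big),
```

where the gradient of J with respect to u0 is evaluated by integrating the adjoint NSE backward from t = 1 to t = 0; since viscosity enters the adjoint equation with the stable sign, each such evaluation is well-posed.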
Procedia PDF Downloads 224
771 The Role of Artificial Intelligence in Patent Claim Interpretation: Legal Challenges and Opportunities
Authors: Mandeep Saini
Abstract:
The rapid advancement of Artificial Intelligence (AI) is transforming various fields, including intellectual property law. This paper explores the emerging role of AI in interpreting patent claims, a critical and highly specialized area within intellectual property rights. Patent claims define the scope of legal protection granted to an invention, and their precise interpretation is crucial in determining the boundaries of the patent holder's rights. Traditionally, this interpretation has relied heavily on the expertise of patent examiners, legal professionals, and judges. However, the increasing complexity of modern inventions, especially in fields like biotechnology, software, and electronics, poses significant challenges to human interpretation. Introducing AI into patent claim interpretation raises several legal and ethical concerns. This paper addresses critical issues such as the reliability of AI-driven interpretations, the potential for algorithmic bias, and the lack of transparency in AI decision-making processes. It considers the legal implications of relying on AI, particularly regarding accountability for errors and the potential challenges to AI interpretations in court. The paper includes a comparative study of AI-driven patent claim interpretations versus human interpretations across different jurisdictions to provide a comprehensive analysis. This comparison highlights the variations in legal standards and practices, offering insights into how AI could impact the harmonization of international patent laws. The paper proposes policy recommendations for the responsible use of AI in patent law. It suggests legal frameworks that ensure AI tools complement, rather than replace, human expertise in patent claim interpretation. These recommendations aim to balance the benefits of AI with the need for maintaining trust, transparency, and fairness in the legal process. By addressing these critical issues, this research contributes to the ongoing discourse on integrating AI into the legal field, specifically within intellectual property rights. It provides a forward-looking perspective on how AI could reshape patent law, offering both opportunities for innovation and challenges that must be carefully managed to protect the integrity of the legal system.
Keywords: artificial intelligence (ai), patent claim interpretation, intellectual property rights, algorithmic bias, natural language processing, patent law harmonization, legal ethics
Procedia PDF Downloads 21
770 The Impact of Electrospinning Parameters on Surface Morphology and Chemistry of PHBV Fibers
Authors: Lukasz Kaniuk, Mateusz M. Marzec, Andrzej Bernasik, Urszula Stachewicz
Abstract:
Electrospinning is one of the commonly used methods to produce micro- or nano-fibers. The properties of electrospun fibers allow them to be used to produce tissue scaffolds, biodegradable bandages, or purification membranes. The morphology of the obtained fibers depends on the composition of the polymer solution as well as on the processing parameters. Interesting properties, such as high fiber porosity, can be achieved by changing the humidity during electrospinning. Moreover, by changing the voltage polarity in electrospinning, we are able to alter the functional groups at the surface of the fibers. In this study, electrospun fibers were made of a natural thermoplastic polyester, PHBV (poly(3-hydroxybutyric acid-co-3-hydroxyvaleric acid)). The fibrous mats were obtained using both positive and negative voltage polarities, and their surface was characterized using X-ray photoelectron spectroscopy (XPS, Ulvac-Phi, Chigasaki, Japan). Furthermore, the effect of humidity on surface morphology was investigated using scanning electron microscopy (SEM, Merlin Gemini II, Zeiss, Germany). Electrospun PHBV fibers produced with positive and negative voltage polarity had similar morphology and average fiber diameters of 2.47 ± 0.21 µm and 2.44 ± 0.15 µm, respectively. The change of voltage polarity had a significant impact on the reorientation of the carbonyl groups, which consequently changed the surface potential of the electrospun PHBV fibers. The increase of humidity during electrospinning causes porosity in the surface structure of the fibers. In conclusion, we showed within our studies that process parameters such as humidity and voltage polarity have a great influence on fiber morphology and chemistry, changing their functionality. The surface properties of polymer fibers have a significant impact on cell integration and attachment, which is very important in tissue engineering. The possibility of changing surface porosity allows the use of the fibers in various tissue engineering and drug delivery systems. Acknowledgment: This study was conducted within the 'Nanofiber-based sponges for atopic skin treatment' project, carried out within the First TEAM programme of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund, project no. POIR.04.04.00-00-4571/18-00.
Keywords: cells integration, electrospun fiber, PHBV, surface characterization
Procedia PDF Downloads 118
769 A Computational Fluid Dynamics Simulation of Single Rod Bundles with 54 Fuel Rods without Spacers
Authors: S. K. Verma, S. L. Sinha, D. K. Chandraker
Abstract:
The Advanced Heavy Water Reactor (AHWR) is a vertical pressure-tube type, heavy-water-moderated and boiling-light-water-cooled natural-circulation-based reactor. The fuel bundle of the AHWR contains 54 fuel rods arranged in three concentric rings of 12, 18, and 24 fuel rods. This fuel bundle is divided into a number of imaginary interacting flow passages called subchannels. A single-phase flow condition exists in the reactor rod bundle during startup and up to a certain length of the rod bundle when it is operating at full power. Prediction of the thermal margin of the reactor during startup has necessitated the determination of the turbulent mixing rate of coolant amongst these subchannels. Thus, it is vital to evaluate turbulent mixing between subchannels of the AHWR rod bundle. With the remarkable progress in computer processing power, the computational fluid dynamics (CFD) methodology can be useful for investigating the thermal-hydraulic phenomena in a nuclear fuel assembly. The present report covers the results of simulations of the pressure drop, velocity variation, and turbulence intensity in a single rod bundle with 54 rods in circular arrays. In this investigation, 54-rod assemblies are simulated with ANSYS Fluent 15 using steady simulations with ANSYS Workbench meshing. The simulations have been carried out with water at a Reynolds number of 9861.83. The rod bundle has a mean flow area of 4853.0584 mm² in the bare region, with a hydraulic diameter of 8.105 mm. In the present investigation, the benchmark k-ε model has been used as the turbulence model, and the symmetry condition is set as the boundary condition. Simulations are carried out to determine the turbulent mixing rate in the simulated subchannels of the reactor. The rod size and pitch in the test are the same as those of the actual rod bundle in the prototype. Water has been used as the working fluid, and the turbulent mixing tests have been carried out at atmospheric conditions without heat addition. The mean velocity in the subchannel has been varied from 0 to 1.2 m/s. The flow conditions are found to be close to the actual reactor conditions.
Keywords: AHWR, CFD, single-phase turbulent mixing rate, thermal-hydraulics
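As a quick consistency check of the quoted Reynolds number against the stated velocity range, one can invert Re = ρVD_h/µ; the water properties below are assumed values for roughly 20 °C, not data from the paper:

```python
# Re = rho * V * D_h / mu  =>  V = Re * mu / (rho * D_h)
rho = 998.0       # kg/m^3, assumed water density (~20 C)
mu = 1.002e-3     # Pa.s, assumed dynamic viscosity (~20 C)
d_h = 8.105e-3    # m, hydraulic diameter from the abstract
re = 9861.83      # Reynolds number from the abstract

v = re * mu / (rho * d_h)
print(f"implied mean velocity ~ {v:.2f} m/s")  # ~1.22 m/s
```

The implied mean velocity of about 1.2 m/s sits at the top of the 0-1.2 m/s subchannel range given above, so the quoted numbers are mutually consistent.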
Procedia PDF Downloads 320
768 Comfort Sensor Using Fuzzy Logic and Arduino
Authors: Samuel John, S. Sharanya
Abstract:
Automation has become an important part of our life. It has been used to control home entertainment systems, change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort, which mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C, and the desired relative humidity is around 50%; however, it varies widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller using Arduino was developed using MATLAB. Arduino is an open-source hardware platform built around a 28-pin ATmega328 microcontroller, with 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V power supported by an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment, which includes support for the C, C++, and Java programming languages. In the present work, a soft sensor that can indirectly measure temperature and humidity was introduced into this system and used to process these measurements to ensure comfort. The Sugeno method (in which the output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used for the soft sensor in MATLAB and then interfaced to the Arduino, which is in turn interfaced to the temperature-humidity sensor DHT11. The DHT11 acts as the sensing element in this system: it combines a capacitive humidity sensor and a thermistor to measure the temperature and relative humidity of the surroundings and provides a digital signal on its data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated, and the temperature in the room was controlled accordingly. This system was placed in different rooms of the house to ensure that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and coolers in the room were controlled. The main highlight of the project is its cost efficiency.
Keywords: Arduino, DHT11, soft sensor, Sugeno
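A zero-order Sugeno inference of the kind described (constant rule outputs, defuzzification by weighted average) can be sketched in a few lines of Python; the membership ranges and comfort singletons below are illustrative assumptions built around the 20-25 °C / ~50% RH comfort band mentioned above, not the rule base actually deployed on the Arduino:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def comfort_percent(temp_c, rh):
    """Zero-order Sugeno: rule strengths weight constant outputs."""
    t_ok = tri(temp_c, 18.0, 22.5, 27.0)   # assumed 'comfortable T' range
    h_ok = tri(rh, 30.0, 50.0, 70.0)       # assumed 'comfortable RH' range
    t_bad, h_bad = 1.0 - t_ok, 1.0 - h_ok
    rules = [                              # (firing strength, singleton %)
        (min(t_ok, h_ok), 100.0),
        (min(t_ok, h_bad), 60.0),
        (min(t_bad, h_ok), 50.0),
        (min(t_bad, h_bad), 10.0),
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(comfort_percent(23.0, 55.0))  # e.g. a DHT11 reading -> ~79 %
```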
Procedia PDF Downloads 312
767 Monitoring Memories by Using Brain Imaging
Authors: Deniz Erçelen, Özlem Selcuk Bozkurt
Abstract:
The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the proper technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories as well as allows the brain to retrieve them by ensuring that neurons fire together, a process called "neural synchronization." Sadly, the hippocampus is known to often deteriorate with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual's age increases. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off as mild but may evolve into serious medical conditions such as dementia and Alzheimer's disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology that are used to examine the brain and neural pathways. For instance, Magnetic Resonance Imaging, or MRI, is used to collect detailed images of an individual's brain anatomy. In order to monitor and analyze brain functions, a different version of this machine, called Functional Magnetic Resonance Imaging, or fMRI, is used. The fMRI is a neuroimaging procedure that is conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography, or EEG, is also a significant way to monitor the human brain. The EEG is more versatile and cost-efficient than an fMRI. An EEG measures the electrical activity generated by the numerous cortical layers of the brain. EEG allows scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and almost precisely track the contents of short-term memory. Science has come a long way in monitoring memories using these kinds of devices, and the inspection of neurons and neural pathways has become more intense and detailed.
Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons
Procedia PDF Downloads 86
766 Anodic Stability of Li₆PS₅Cl/PEO Composite Polymer Electrolytes for All-Solid-State Lithium Batteries: A First-Principles Molecular Dynamics Study
Authors: Hao-Wen Chang, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
All-solid-state lithium batteries (ASSLBs) are increasingly recognized as a safer and more reliable alternative to conventional lithium-ion batteries due to their non-flammable nature and enhanced safety performance. ASSLBs utilize a range of solid-state electrolytes, including solid polymer electrolytes (SPEs), inorganic solid electrolytes (ISEs), and composite polymer electrolytes (CPEs). SPEs are particularly valued for their flexibility, ease of processing, and excellent interfacial compatibility with electrodes, though their ionic conductivity remains a significant limitation. ISEs, on the other hand, provide high ionic conductivity, broad electrochemical windows, and strong mechanical properties but often face poor interfacial contact with electrodes, impeding performance. CPEs, which merge the strengths of SPEs and ISEs, represent a compelling solution for next-generation ASSLBs by addressing both electrochemical and mechanical challenges. Despite their potential, the mechanisms governing lithium-ion transport within these systems remain insufficiently understood. In this study, we designed CPEs based on argyrodite-type Li₆PS₅Cl (LPSC) combined with two distinct polymer matrices: poly(ethylene oxide) (PEO) with 24.5 wt% lithium bis(trifluoromethane)sulfonimide (LiTFSI) and polycaprolactone (PCL) with 25.7 wt% LiTFSI. Through density functional theory (DFT) calculations, we investigated the interfacial chemistry of these materials, revealing critical insights into their stability and interactions. Additionally, ab initio molecular dynamics (AIMD) simulations of lithium electrodes interfaced with LPSC layers containing polymers and LiTFSI demonstrated that the polymer matrix significantly mitigates LPSC decomposition, compared to systems with only a lithium electrode and LPSC layers. These findings underscore the pivotal role of CPEs in improving the performance and longevity of ASSLBs, offering a promising path forward for next-generation energy storage technologies.
Keywords: all-solid-state lithium-ion batteries, composite solid electrolytes, DFT calculations, Li-ion transport
Procedia PDF Downloads 20
765 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver
Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin
Abstract:
The National Space Organization (NSPO) completed the development of a space-borne GPS receiver in 2014, including its design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold start time of at most 90 seconds, and operation in high-dynamics scenarios of up to 15 g. The receiver will be integrated in the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will pass comprehensive functional tests and environmental acceptance tests, etc., which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), with currently only 50% of its throughput being used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms that involve reused code and a large amount of computation, such as signal acquisition and correlation, to the FPGA, whose processor is responsible for operational control, the navigation solution, orbit propagation, and so on. Since FPGA development and evolution are quite fast, the new system architecture upgraded via an FPGA should be able to achieve the goal of being a multi-mode, multi-band, high-precision navigation receiver or a scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper will explain the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band
Procedia PDF Downloads 369
764 An EEG-Based Scale for Comatose Patients' Vigilance State
Authors: Bechir Hbibi, Lamine Mili
Abstract:
Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with the addition of further tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to gain insight into the state of comatose patients. EEG signal analysis has contributed extensively to the understanding of the human brain and human consciousness and has been used by researchers in the classification of different levels of disease. The use of EEG in the ICU has become an urgent matter in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries, and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels, and to evaluate sleep quality. In our recent work, our aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that this score could be instantaneous and could overcome many of the limitations from which the physiological scales suffer. Multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals, and it demonstrates strong performance in extracting the properties of fractal and multifractal data, including signals and images. As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale. This scale aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. The results of applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Therefore, our approach could show promising results for patients with disabilities, patients under painkillers, and other categories to which physiological scores cannot be applied.
Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction
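For readers unfamiliar with the method, a compact multifractal DFA (MFDFA) sketch is given below in Python; it estimates the generalized Hurst exponents h(q), whose spread over q is one common multifractality feature. This is a simplified textbook version (forward segments only, q ≠ 0), not the authors' actual feature-extraction pipeline:

```python
import numpy as np

def mfdfa_hq(x, scales, qs, order=1):
    """Generalized Hurst exponents h(q) via multifractal DFA."""
    y = np.cumsum(x - np.mean(x))                 # signal profile
    fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n_seg = y.size // s
        f2 = np.empty(n_seg)
        for v in range(n_seg):                    # local detrended variance
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2[v] = np.mean((seg - trend) ** 2)
        for i, q in enumerate(qs):                # q-order fluctuation function
            fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    # h(q) is the log-log slope of F_q(s) against s
    return np.array([np.polyfit(np.log(scales), np.log(f), 1)[0] for f in fq])

eeg = np.random.randn(4096)                       # placeholder for one EEG channel
h = mfdfa_hq(eeg, scales=[16, 32, 64, 128, 256], qs=[-4, -2, 2, 4])
print("h(q) spread:", h.max() - h.min())          # wider spread = more multifractal
```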
Procedia PDF Downloads 68
763 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh
Abstract:
It is extremely important to study the postmortem interval in different causes of death, since it often greatly assists in forming an opinion on the exact cause of death following such incidents. With diligent knowledge of the interval, an expert can state that the cause of death is not feigned; hence, in evaluating such a death, there is a great need to have been at the crime scene before performing an autopsy on the body. The approach described here is based on analyzing the degradation, or proteolysis, of a cardiac protein in cases of death due to burns as a marker of time since death. Cardiac tissue samples were collected from medico-legal autopsies (n = 6) at the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India, after informed consent from the relatives, and post-mortem degradation was studied by incubating the cardiac tissue at room temperature (20 ± 2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were subjects of burns without any prior history of disease who died in the hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As the time postmortem progresses, the intact cTnT band degrades to fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the PM hours increased. A significant difference was observed between <15 h and other PM hours (p<0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and 56-65 h and >75 h (p<0.01). Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low-molecular-weight fragments. Overall, both the PMI and the cardiac tissue of the burned corpses had a statistically significant effect, with the greatest amount of protein breakdown observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, then the percent intact cTnT shows a pseudo-first-order relationship when plotted against the time postmortem. A strong, significant positive correlation was found between cTnT and PM hours (r = 0.87, p = 0.0001). The regression analysis showed good explained variability (R² = 0.768). The post-mortem Troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
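The pseudo-first-order relationship mentioned above corresponds to a simple exponential-decay model, which also shows how a PMI estimate could be read off once the rate constant is fitted by regression:

```latex
P(t) = P_0\, e^{-kt}
\;\;\Rightarrow\;\;
\ln P(t) = \ln P_0 - k\,t,
\qquad
\hat{t}_{\mathrm{PMI}} = \frac{\ln P_0 - \ln P_{\mathrm{obs}}}{k},
```

where P is the percent intact cTnT (P₀ = 100% at death), k is the fitted decay constant, and P_obs is the value measured at autopsy.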
Procedia PDF Downloads 451
762 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed using ArcGIS software platforms, and data from Sentinel-2 satellite imagery (with a 10-meter resolution) were utilized for land-use/cover classification. Additionally, slope, elevation, and drainage density data were generated from the 12.5-meter resolution ALOS PALSAR DEM, while other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard zones (FFHs) and generated a suitable land map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
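As an illustration of the AHP step, the sketch below derives factor weights from a pairwise-comparison matrix (via the principal eigenvector) and applies them in a weighted raster overlay; the 3×3 matrix and placeholder rasters are purely illustrative, since the study's seven-factor judgments are not reported in the abstract:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector of an AHP pairwise-comparison matrix
    (principal eigenvector, normalized to sum to 1)."""
    vals, vecs = np.linalg.eig(pairwise)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

# Illustrative 3-factor matrix (e.g. slope vs. drainage density vs. land use)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w = ahp_weights(A)

# Weighted overlay of factor rasters rescaled to a common 1-5 hazard score
rasters = [np.random.randint(1, 6, (4, 4)) for _ in w]   # placeholder layers
hazard = sum(wi * r for wi, r in zip(w, rasters))
print(np.round(w, 3))
print(np.round(hazard, 2))
```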
761 The KAPSARC Energy Policy Database: Introducing a Quantified Library of China's Energy Policies
Authors: Philipp Galkin
Abstract:
Government policy is a critical factor in the understanding of energy markets. Nevertheless, it is rarely approached systematically from a research perspective. Gaining a precise understanding of what policies exist, their intended outcomes, geographical extent, duration, evolution, etc., would enable the research community to answer a variety of questions that, for now, are either oversimplified or ignored. Policy, on its surface, also seems a rather unstructured and qualitative undertaking. There may be quantitative components, but incorporating the concept of policy analysis into quantitative analysis remains a challenge. The KAPSARC Energy Policy Database (KEPD) is intended to address these two limitations of energy policy research. Our approach is to represent policies within a quantitative library of the specific policy measures contained within a set of legal documents. Each of these measures is recorded in the database as a single entry characterized by a set of qualitative and quantitative attributes. Initially, we have focused on the major laws at the national level that regulate coal in China. However, KAPSARC is engaged in various efforts to apply this methodology to other energy policy domains. To ensure the scalability and sustainability of our project, we are exploring semantic processing using automated computer algorithms. Automated coding can provide more convenient input data for human coders and serve as a quality control option. Our initial findings suggest that the methodology utilized in KEPD could be applied to any set of energy policies. It also provides a convenient tool to facilitate understanding in the energy policy realm, enabling the researcher to quickly identify, summarize, and digest policy documents and specific policy measures. The KEPD captures a wide range of information about each individual policy contained within a single policy document. This enables a variety of analyses, such as structural comparison of policy documents, tracing policy evolution, stakeholder analysis, and exploring interdependencies of policies and their attributes with exogenous datasets using statistical tools. The usability and broad range of research implications suggest a need for the continued expansion of the KEPD to encompass a larger scope of policy documents across geographies and energy sectors. Keywords: China, energy policy, policy analysis, policy database
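The abstract describes each measure as a single entry with qualitative and quantitative attributes. As a minimal sketch of what such an entry could look like in code, the Python below defines an illustrative record type; the field names and sample values are assumptions, not the actual KEPD schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PolicyMeasure:
    # One KEPD-style entry: a single measure extracted from a legal document.
    document: str                          # source legal document
    issuing_body: str
    year: int
    sector: str                            # e.g. "coal"
    instrument: str                        # qualitative attribute
    target_value: Optional[float] = None   # quantitative attribute, if any
    target_unit: str = ""
    regions: List[str] = field(default_factory=list)

measure = PolicyMeasure(
    document="Energy Development Strategy Action Plan (2014-2020)",
    issuing_body="State Council",
    year=2014,
    sector="coal",
    instrument="consumption cap",
    target_value=4.2e9,
    target_unit="tonnes per year",
    regions=["national"],
)
print(measure)
```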
760 Convective Boiling of CO₂/R744 in Macro and Micro-Channels
Authors: Adonis Menezes, J. C. Passos
Abstract:
The current panorama of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated the development of this work. Among non-halogenated refrigerants, CO₂/R744 has distinct thermodynamic properties compared to other fluids. R744 operates at significantly higher pressures and temperatures than other refrigerants, and this represents a challenge for the design of new evaporators, as existing systems must normally be resized to meet the specific characteristics of R744, creating the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus was used that stores (m = 10 kg) of saturated CO₂ at (T = -30 °C) in an accumulator tank; this fluid was then pumped using a positive displacement pump with three pistons, with a controlled outlet pressure of up to (P = 110 bar). This high-pressure saturated fluid passed through a Coriolis-type flow meter, with mass velocities ranging from (G = 20 kg/m².s) up to (G = 1000 kg/m².s). After that, the fluid was sent to the first test section, of circular cross-section with diameter (D = 4.57 mm), where the inlet and outlet temperatures and pressures were controlled and heating was provided by the Joule effect using a direct-current source with a maximum heat flux of (q = 100 kW/m²). The second test section used a multi-channel geometry (seven parallel channels) with a square cross-section of (D = 2 mm) each; this second test section also has temperature and pressure control at the inlet and outlet, and a direct-current source was likewise used for heating, with a maximum heat flux of (q = 20 kW/m²). The fluid, now two-phase, was directed to a parallel-plate heat exchanger to return it to the liquid state so that it could flow back to the accumulator tank, closing the cycle. The multi-channel test section has a viewing section; a high-speed CMOS camera was used for image acquisition, making it possible to observe the flow patterns. The experiments presented in this work were conducted rigorously, enabling the development of a database on the convective boiling of R744 in macro- and micro-channels. The analysis prioritized the processes from the onset of convective boiling until wall dryout in a subcritical regime. R744 resurfaces as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and low GWP (Global Warming Potential), among other advantages. The results of the experimental tests were very promising for the use of CO₂ in micro-channels under convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂. Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels
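As a minimal sketch of how the measured quantities above reduce to the reported parameters, the Python below computes a local boiling heat transfer coefficient from Newton's law of cooling and a mass velocity from the Coriolis meter reading and channel geometry; all sample values are assumptions for illustration, not the study's data.

```python
import math

# Assumed sample readings from the single-channel test section.
q_flux = 50_000.0      # applied Joule heat flux, W/m^2
t_wall = 2.4           # measured wall temperature, deg C
t_sat = -1.5           # CO2 saturation temperature at local pressure, deg C

# Newton's law of cooling at the heated wall: h = q / (T_wall - T_sat)
h = q_flux / (t_wall - t_sat)
print(f"h = {h:.0f} W/(m^2 K)")

# Mass velocity G from the mass flow rate and channel cross-section.
m_dot = 0.01           # mass flow rate, kg/s (assumed)
d = 4.57e-3            # circular channel internal diameter, m
G = m_dot / (math.pi * d**2 / 4)
print(f"G = {G:.0f} kg/(m^2 s)")   # falls in the stated 20-1000 range
```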
759 Nonlinear Evolution of the Pulses of Elastic Waves in Geological Materials
Authors: Elena B. Cherepetskaya, Alexander A. Karabutov, Natalia B. Podymova, Ivan Sas
Abstract:
The nonlinear evolution of broadband ultrasonic pulses passed through rock specimens is studied using the 'GEOSCAN-02M' apparatus. Ultrasonic pulses are excited by pulses of a Q-switched Nd:YAG laser with a duration of 10 ns and an energy of 260 mJ; the energy can be reduced to 20 mJ with optical filters. The laser beam radius did not exceed 5 mm. As a result of the absorption of the laser pulse in a special material, the optoacoustic generator, pulses of longitudinal ultrasonic waves are excited with a duration of 100 ns and a maximum pressure amplitude of 10 MPa. The immersion technique is used to measure the parameters of these ultrasonic pulses after they pass through a specimen; the immersion liquid is distilled water. The reference pulse passed through the cell with water has a compression phase and a rarefaction phase, the amplitude of the rarefaction phase being five times lower than that of the compression phase. The spectral range of the reference pulse reaches 10 MHz. Cubic specimens of Karelian gabbro with an edge length of 3 cm are studied. The ultimate strength of the specimens under uniaxial compression is (300±10) MPa. As the reference pulse passes through a crack-free area of the specimen, the compression phase decreases and the rarefaction phase increases due to diffraction and scattering of ultrasound, so the ratio of these phases becomes 2.3:1. After preloading, horizontal cracks appear in the specimens. Their location is found by one-sided scanning of the specimen using backward-mode detection of the ultrasonic pulses reflected from structural defects. Computer processing of these signals yields images of the cross-sections of the specimens with cracks. As the reference pulse amplitude is increased from 0.1 MPa to 5 MPa, the nonlinear transformation of the ultrasonic pulse passed through a specimen with horizontal cracks results in a 2.5-fold decrease in the amplitude of the rarefaction phase and a 2.1-fold increase in its duration. As the reference pulse amplitude is increased from 5 MPa to 10 MPa, time splitting of the phases is observed for the bipolar pulse passed through the specimen: the compression and rarefaction phases propagate with different velocities. These features of powerful broadband ultrasonic pulses passed through rock specimens can be described by the Preisach-Mayergoyz hysteresis model and can be used for the location of cracks in optically opaque materials. Keywords: cracks, geological materials, nonlinear evolution of ultrasonic pulses, rock
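The phase-amplitude ratios quoted above come from comparing the positive and negative lobes of a digitized bipolar pulse. The Python sketch below shows that reduction step on a synthetic waveform; the waveform shape and sampling rate are assumptions, chosen so the ratio matches the reference pulse's stated 5:1.

```python
import numpy as np

# Synthetic bipolar ultrasonic pulse: a compression lobe followed by a
# weaker, broader rarefaction lobe (illustrative, not measured data).
fs = 100e6                          # sampling rate, Hz (assumed)
t = np.arange(0, 2e-6, 1 / fs)      # 2 us record
p = (np.exp(-((t - 0.5e-6) / 60e-9) ** 2)
     - 0.2 * np.exp(-((t - 0.8e-6) / 120e-9) ** 2))

p_comp = p.max()                    # compression-phase peak pressure
p_rare = -p.min()                   # rarefaction-phase peak pressure
print(f"compression/rarefaction ratio = {p_comp / p_rare:.1f}")  # ~5.0
```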
758 Socially Sustainable Urban Rehabilitation Projects: Case Study of Ortahisar, Trabzon
Authors: Elif Berna Var
Abstract:
Cultural, physical, socio-economic, or political changes in urban areas can result in periods of decay, which may in turn cause social problems. As a solution, urban renewal projects have been used in European countries since World War II, whereas in Turkey they gained importance after the 1980s. The first attempts were mostly concerned with physical or economic aspects, which later had negative effects on the social fabric. Thus, social concerns also began to be included in renewal processes in developed countries. This integrative approach, combining social, physical, and economic aspects, promotes the creation of more sustainable neighbourhoods for both current and future generations. However, it is still a new subject for developing countries like Turkey. Concentrating on Trabzon, Turkey, this study highlights the importance of socially sustainable urban renewal processes, especially in historical neighbourhoods where protecting the urban identity of the area, as well as its social structure, is vital to creating sustainable environments. Being in the historic city centre and having remarkable traditional houses, Ortahisar is an important image for Trabzon. Because the architectural and historical pattern of the area is still visible but in need of rehabilitation, 'urban rehabilitation' is the urban renewal method adopted in this study. A project has been developed by the local government to create a secondary city centre and a new landmark for the city, but it remains ambiguous whether this project can ensure the social sustainability of the area, which is one of the concerns of this research. The study suggests that the social sustainability of an area can be achieved through several factors. To determine the factors affecting the social sustainability of an urban rehabilitation project, previous studies were analysed and some common features identified. To this end, firstly, several analyses were conducted to establish the social structure of Ortahisar. Secondly, structured interviews were administered to 150 local residents to measure their satisfaction, awareness, and expectations, and to record their demographic background in detail. These data were used to define the critical factors for a more socially sustainable neighbourhood in Ortahisar. Later, 50 experts and 150 local residents were asked to prioritize those factors in order to compare their attitudes and find common criteria. According to the results, the social sustainability of the Ortahisar neighbourhood can be improved by considering factors such as the quality of urban areas, demographic factors, public participation, social cohesion and harmony, proprietorial factors, and education and employment facilities. Finally, several suggestions are made for the Ortahisar case to promote a more socially sustainable urban neighbourhood. As a pilot study highlighting the importance of social sustainability, it is hoped that this attempt will contribute to achieving more socially sustainable urban rehabilitation projects in Turkey. Keywords: urban rehabilitation, social sustainability, Trabzon, Turkey
757 The Thinking of Dynamic Formulation of Rock Aging Agent Driven by Data
Authors: Longlong Zhang, Xiaohua Zhu, Ping Zhao, Yu Wang
Abstract:
The construction of mines, railways, highways, water conservancy projects, etc., has created a large number of high, steep slope wounds in China. Under the premise of slope stability and safety, repairing these wound spaces at minimum cost, in a green manner, and close to their natural state has become a new problem. Nowadays, in situ element testing and analysis, monitoring, and field quantitative factor classification and assignment evaluation produce vast amounts of data. Data processing and analysis inevitably differentiate the morphology, mineral composition, and physicochemical properties of rock wounds, by which the appropriate restoration techniques and materials can be dynamically matched. In the present research, based on a grid partition of the slope surface, the contents of the combined oxides of the rock minerals (SiO₂, CaO, MgO, Al₂O₃, Fe₃O₄, etc.) were tested, and the hardness and breakage of the rock texture were classified and assigned values. The data for the essential factors were interpolated and normalized in GIS, forming a differential zoning map of the slope surface. According to the physical and chemical properties and spatial morphology of the rocks in different zones, organic acids (plant waste fruit, fruit residue, etc.), natural mineral powders (zeolite, apatite, kaolin, etc.), water-retaining agent, and plant gum (melon powder) were mixed in different proportions to form rock aging agents. Spraying aging agents with different formulas on different sections of the slopes can effectively age the fresh rock wound, providing convenience for seed implantation and reducing the transformation of heavy metals in the rocks. Through extensive engineering practice, a dynamic data platform for the rock aging agent formula system has been formed, which provides materials for the restoration of different slopes. It will also provide a guideline for the mixed use of various natural materials to solve the complex, non-uniform problem of ecological restoration. Keywords: data-driven, dynamic state, high steep slope, rock aging agent, wounds
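The interpolation-and-normalization step described above can be sketched briefly. The Python below interpolates sampled SiO₂ content from grid test points onto the slope surface and min-max normalizes it so it can be combined with the other factor layers; coordinates and values are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

# Assumed grid test points on a 10 m x 10 m slope patch and their
# measured SiO2 content (wt%); illustrative values only.
pts = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
sio2 = np.array([61.0, 58.5, 66.2, 54.9, 60.3])

# Interpolate onto a dense raster, as GIS layers would be built.
xi, yi = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = griddata(pts, sio2, (xi, yi), method="linear")

# Min-max normalization puts oxide content on a common 0-1 scale with
# the hardness and breakage layers before zoning.
norm = (grid - np.nanmin(grid)) / (np.nanmax(grid) - np.nanmin(grid))
print(f"normalized range: {np.nanmin(norm):.2f}-{np.nanmax(norm):.2f}")
```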
756 Deep Convolutional Neural Network for Detection of Microaneurysms in Retinal Fundus Images at Early Stage
Authors: Goutam Kumar Ghorai, Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, G. Sarkar, Ashis K. Dhara
Abstract:
Diabetes mellitus is one of the most common chronic diseases in all countries, and its prevalence continues to increase significantly. Diabetic retinopathy (DR) is damage to the retina that occurs with long-term diabetes. DR is a major cause of blindness in the Indian population; therefore, its early diagnosis is of utmost importance in preventing progression towards imminent, irreversible loss of vision, particularly in the huge population across rural India. The barriers to eye examination of all diabetic patients are socioeconomic factors, lack of referrals, poor access to the healthcare system, lack of knowledge, an insufficient number of ophthalmologists, and lack of networking between physicians, diabetologists, and ophthalmologists. A few diabetic patients visit a healthcare facility for their general checkup, but their eye condition remains largely undetected until the patient becomes symptomatic. This work focuses on the design and development of a fully automated, intelligent decision system for screening retinal fundus images to detect the pathophysiology caused by microaneurysms in the early stage of the disease. Automated detection of microaneurysms is a challenging problem due to variation in color and the variation introduced by the field of view, inhomogeneous illumination, and pathological abnormalities. We have developed a convolutional neural network for efficient detection of microaneurysms. A loss function is also developed to handle the severe class imbalance due to the very small size of microaneurysms compared to the background. The network is able to locate the salient region containing microaneurysms in noisy images captured by non-mydriatic cameras. The ground truth for microaneurysms was created by expert ophthalmologists for the MESSIDOR database as well as for a private database collected from Indian patients. The network was trained from scratch using the fundus images of the MESSIDOR database. The proposed method was evaluated on DIARETDB1 and the private database, and it successfully detects microaneurysms in both dilated and non-dilated fundus images acquired from different medical centres. The proposed algorithm could be used to develop an AI-based, affordable, and accessible system providing service at grassroots-level primary healthcare units spread across the country, to cater to the needs of rural people unaware of the severe impact of DR. Keywords: retinal fundus image, deep convolutional neural network, early detection of microaneurysms, screening of diabetic retinopathy
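The abstract does not specify its imbalance-handling loss, so the sketch below shows one standard choice for this situation: a binary focal loss in PyTorch that down-weights the abundant background pixels and focuses training on the rare lesion pixels. The alpha and gamma values are assumptions, not the authors' settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.95, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss for pixel-wise lesion masks.

    alpha up-weights the rare positive (microaneurysm) pixels; gamma
    focuses training on hard examples. Both values are assumptions.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets,
                                             reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)   # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Smoke test on a dummy 128x128 mask with ~0.1% positive pixels.
logits = torch.randn(1, 1, 128, 128)
targets = (torch.rand(1, 1, 128, 128) < 0.001).float()
print(focal_loss(logits, targets))
```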
755 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and globally consumed legume, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the nutrient that most limits productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase susceptibility to diseases and pests. However, quantifying N with conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. It is therefore necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. The objective of this work was to evaluate the performance of a deep learning model, ResNet-50, in classifying foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were watered with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. Code developed in MATLAB R2022b was used to cut the original images into smaller blocks, producing an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. MATLAB R2022b was also used for the implementation and performance analysis of the model. Efficiency was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an accuracy of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity with less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for in situ classification of the nutritional status of plants. Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
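The study implemented the model in MATLAB; as a language-neutral illustration, the PyTorch sketch below reproduces the basic setup it describes: ResNet-50 with its classifier head replaced for the four dose classes and 224x224 RGB inputs. The weights enum (recent torchvision), optimizer, learning rate, and the dummy batch are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 with the final fully connected layer swapped for the four
# nitrogen-dose classes (T1-T4); pretrained weights are an assumption.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB blocks.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.3f}")
```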
754 Wave State of Self: Findings of Synchronistic Patterns in the Collective Unconscious
Authors: R. Dimitri Halley
Abstract:
The research within Jungian psychology presented here concerns the wave state of Self. What has been discovered via shared dreaming, by independently correlating dreams across dreamers, lies beyond the Self stage, in the deepest layer, the wave state of Self: the very quantum ocean the Self archetype is embedded in. A quantum wave, or rhyming of meaning, constituting synergy across several dreamers was discovered in dreams and in extensively shared dream work with small groups at a post-therapy stage. Within the format of shared dreaming, we find synergy patterns beyond what Jung called the Self archetype. Jung led us up to the phase of individuation and passed the baton to von Franz to work out the next, synchronistic stage, proposed here as the finding of the quantum patterns making up the wave state of Self. These enfolded synchronistic patterns have been found in a group format of shared dreaming among individuals approximating individuation, and their unfolding is carried by belief and faith. The reason for this format and operating system is that beyond therapy, in living reality, we find no science, no thinking, or even awareness in the therapeutic sense, but rather a state of mental processing resembling a spiritual attitude. Thinking as such is linear and cannot contain the deepest layer of Self, the quantum core of the human being; it is self-reflection that is the container for the process at the wave state of Self. Observation locks us into an outside-in, reactive flow from a first-person perspective toward the surface we see to believe, whereas here the direction of focus shifts to inside-out, or intrinsic. The operating system, or language, at the wave level of Self is thus belief and synchronicity. Belief has up to now been almost the sole province of organized religions but was viewed by Jung as an inherent property of the process of individuation. The shared dreaming stage of the synchronistic patterns forms a larger story constituting a deep connectivity unfolding around individual Selves. Dreams of independent dreamers form larger patterns that come together like puzzle pieces in a larger story, and in this sense this group-work level builds on Jung as a post-individuation collective stage. Shared dream correlations will be presented, illustrating a larger story in terms of trails of shared synchronicity. Keywords: belief, shared dreaming, synchronistic patterns, wave state of self
753 Feasibility and Energy Efficiency Analysis of Chilled Water Radiant Cooling System of Office Apartment in Nigeria's Tropical Climate City
Authors: Rasaq Adekunle Olabomi
Abstract:
More than 30% of global building energy consumption is attributed to heating, ventilation, and air-conditioning (HVAC), due to increasing urbanization and the demand for greater personal comfort. While heating is predominant in temperate regions (especially during winter), comfort cooling is constantly needed in tropical regions such as Nigeria, which makes cooling a major contributor to the peak electrical load in the tropics. Meanwhile, the high solar energy availability in tropical climates presents strong application potential for solar thermal cooling systems, all the more so because the need for cooling largely coincides with solar energy availability. In addition to huge energy consumption, conventional (compressor-type) air-conditioning systems mostly use refrigerants regarded as environmentally unfriendly because of their ozone depletion potential; this has made alternative cooling systems increasingly popular. The higher thermal capacity of chilled water and its lower pumping power requirement compared with chilled air have also made chilled water the preferred option over chilled-air cooling. Radiant-floor chilled-water cooling is considered particularly suitable for spaces such as meeting rooms, seminar halls, auditoria, and airport arrival and departure halls, among others. This study analyzed the feasibility and energy efficiency of solar thermal chilled water for radiant floor cooling of an office apartment in a tropical-climate city in Nigeria, with a view to recommending its up-scaling. The analysis considered weather parameters, including available solar irradiance (kWh/m²·day), as well as the technical details of the solar thermal cooling system, to determine feasibility. Project cost, energy savings, emission reduction potential, and the cost-to-benefit ratio were used to analyze the energy efficiency and viability of the cooling system. The techno-economic analysis of the proposed system, carried out using RETScreen software, shows its viability, but a SWOT analysis of the policy and institutional framework for promoting solar energy utilization in cooling systems reveals weaknesses such as poor infrastructure and inadequate local capacity for technological development as major challenges. Keywords: cooling load, absorption cooling system, coefficient of performance, radiant floor, cost saving, emission reduction
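A minimal sketch of the RETScreen-style viability figures mentioned above: simple payback and annual CO₂ reduction when a solar thermal chilled-water system displaces a compressor chiller. All numbers in the Python below are assumptions for illustration, not the study's inputs or results.

```python
# Assumed techno-economic inputs (illustrative only).
capital_cost = 25_000.0      # USD, installed solar absorption system
cooling_load = 35_000.0      # kWh of cooling delivered per year
cop_electric = 3.0           # COP of the displaced vapour-compression chiller
tariff = 0.10                # USD per kWh of electricity
grid_ef = 0.45               # kg CO2 per kWh (assumed grid emission factor)

# Electricity the compressor chiller would have drawn for the same load.
electricity_saved = cooling_load / cop_electric          # kWh/year
annual_savings = electricity_saved * tariff              # USD/year
co2_avoided = electricity_saved * grid_ef / 1000         # t CO2/year
payback = capital_cost / annual_savings                  # years

print(f"electricity saved: {electricity_saved:,.0f} kWh/yr")
print(f"CO2 avoided:       {co2_avoided:.1f} t/yr")
print(f"simple payback:    {payback:.1f} years")
```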