Search results for: spectral moments
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1071

201 Analyzing Land use change and its impacts on the Urban Environment in a Fast Growing Metropolitan City of Pakistan

Authors: Muhammad Nasar-u-Minallah, Dagmar Haase, Salman Qureshi

Abstract:

In rapidly growing developing countries, cities are becoming more urbanized, leading to modifications in the urban climate. Rapid urbanization, especially unplanned urban land expansion, together with climate change, has a profound impact on urban settlements and the urban thermal environment. Cities, particularly in Pakistan, are facing remarkable environmental issues and uneven development, and it is therefore important to strengthen the investigation of the urban environmental pressure brought by land-use changes and urbanization. The present study investigated the long-term modification of the urban environment by urbanization using the spatio-temporal dynamics of land-use change, urban population data, urban heat islands, monthly maximum and minimum temperatures over thirty years, multiple remote sensing images, and spectral indices such as the Normalized Difference Built-up Index and the Normalized Difference Vegetation Index. The results indicate rapid growth in the urban built-up area and a reduction in vegetation cover over the last three decades (1990-2020). A positive correlation between urban heat islands and the Normalized Difference Built-up Index, together with a negative correlation between urban heat islands and the Normalized Difference Vegetation Index, clearly shows how urbanization is affecting the local environment. The increase in air and land surface temperatures is detrimental to human comfort. Practical approaches, such as increasing urban green spaces and proper planning of the cities, have been suggested to help prevent further modification of the urban thermal environment by urbanization. The findings of this work are thus important for multi-sectorial use in the cities of Pakistan. By taking these results into consideration, urban planners, decision-makers, and local government can formulate policies to mitigate the impacts of urban land use on the urban thermal environment in Pakistan.
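
As a rough illustration of the index-based analysis described above (not code from the study), the sketch below computes NDVI and NDBI from placeholder reflectance bands and correlates them with a land surface temperature array; all band values and array names are hypothetical.

```python
# Minimal sketch: NDVI/NDBI from Landsat-style bands, correlated with LST.
# All inputs are random placeholders standing in for real rasters.
import numpy as np
from scipy.stats import pearsonr

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-12)

def ndbi(swir, nir):
    """Normalized Difference Built-up Index."""
    return (swir - nir) / (swir + nir + 1e-12)

rng = np.random.default_rng(0)
red, nir, swir = rng.random(1000), rng.random(1000), rng.random(1000)
lst = 290 + 20 * rng.random(1000)            # land surface temperature (K), placeholder

r_ndbi, p_ndbi = pearsonr(ndbi(swir, nir), lst)
r_ndvi, p_ndvi = pearsonr(ndvi(nir, red), lst)
print(f"LST-NDBI r = {r_ndbi:.2f} (p = {p_ndbi:.3f})")
print(f"LST-NDVI r = {r_ndvi:.2f} (p = {p_ndvi:.3f})")
```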

Keywords: land use, urban environment, local climate, Lahore

Procedia PDF Downloads 82
200 Structural Properties of Surface Modified PVA: Zn97Pr3O Polymer Nanocomposite Free Standing Films

Authors: Pandiyarajan Thangaraj, Mangalaraja Ramalinga Viswanathan, Karthikeyan Balasubramanian, Héctor D. Mansilla, José Ruiz

Abstract:

Rare earth ion doped semiconductor nanostructures have gained much attention due to their novel physical and chemical properties, which lead to potential applications in laser technology as inexpensive luminescent materials. Doping rare earth ions into the ZnO semiconductor alters its electronic structure and emission properties. Surface modification (polymer covering) is one of the simplest techniques to modify the emission characteristics of host materials. The present work reports the synthesis and structural properties of PVA:Zn97Pr3O polymer nanocomposite free standing films. To prepare Pr3+ doped ZnO nanostructures and PVA:Zn97Pr3O polymer nanocomposite free standing films, colloidal chemical and solution casting techniques were adopted, respectively. The formation of the PVA:Zn97Pr3O films was confirmed through X-ray diffraction (XRD), absorption, and Fourier transform infrared (FTIR) spectroscopy analyses. XRD measurements confirm that the prepared materials are crystalline with a hexagonal wurtzite structure. The polymer composite film exhibits the diffraction peaks of both PVA and ZnO structures. TEM images reveal that the pure and Pr3+ doped ZnO nanostructures exhibit sheet-like morphology. Optical absorption spectra show the free excitonic absorption band of ZnO at 370 nm, while the PVA:Zn97Pr3O polymer film shows absorption bands at ~282 and 368 nm; these arise, respectively, from carbonyl-containing structures connected to the PVA polymeric chains, mainly at the chain ends, and from the free excitonic absorption of the ZnO nanostructures. The transmission spectrum of the as-prepared film shows 57 to 69% transparency in the visible and near-IR region. FTIR spectral studies confirm the presence of the A1 (TO) and E1 (TO) modes of Zn-O bond vibration and the formation of the polymer composite materials.

Keywords: rare earth doped ZnO, polymer composites, structural characterization, surface modification

Procedia PDF Downloads 345
199 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on the one-dimensional assumption for a desktop personal computer. Hence, they are usually incapable of imaging the three-dimensional (3D) processes/variables in the subsurface of reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitation of personal computers. Presently, high-performance or integrating software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapsed large-scale data surveys from geophysical methods. Using the supercomputing capability and parallel computation algorithm, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for sub-surface imaging. E4D-MP provides capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering related efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 137
198 Liquefaction Potential Assessment Using Screw Driving Testing and Microtremor Data: A Case Study in the Philippines

Authors: Arturo Daag

Abstract:

The Philippine Institute of Volcanology and Seismology (PHIVOLCS) is enhancing its liquefaction hazard map towards a detailed probabilistic approach using SDS and geophysical data. Target sites for liquefaction assessment are public schools in Metro Manila. Since the target sites are in a highly urbanized setting, the objective of the project is to conduct non-destructive geotechnical studies using Screw Driving Testing (SDS) combined with geophysical data such as refraction microtremor array (ReMi), three-component microtremor Horizontal-to-Vertical Spectral Ratio (HVSR), and ground penetrating RADAR (GPR). Initial tests were conducted in areas impacted by liquefaction during the Mw 6.1 Central Luzon earthquake of April 22, 2019, in the Province of Pampanga. Numerous liquefaction events were documented in areas underlain by Quaternary alluvium and mostly covered by recent lahar deposits. SDS-estimated values showed a good correlation with actual SPT values obtained from available borehole data, confirming that SDS can be an alternative tool for liquefaction assessment that is more efficient in terms of cost and time than SPT and CPT. Drilling boreholes can be difficult in highly urbanized areas, so in order to extend or extrapolate the SPT borehole data, non-destructive geophysical equipment was used. The three-component microtremor survey yields a one-dimensional shear-wave velocity model of the upper 30 meters of the profile (Vs30). For the ReMi surveys, a 12-geophone array with 6- to 8-meter spacing was used. Microtremor data were used to compute the Factor of Safety, which is the quotient of the Cyclic Resistance Ratio (CRR) and the Cyclic Stress Ratio (CSR). Complementary GPR was used to infer subsurface structures and groundwater conditions.
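
As a rough illustration of the quantities named above (not the project's code), the sketch below computes a travel-time-averaged Vs30 from a layered shear-wave velocity model and a factor of safety FS = CRR/CSR using the simplified Seed-Idriss form of the CSR; the layer model, stresses, and CRR value are hypothetical placeholders.

```python
# Minimal sketch: Vs30 from a layered Vs profile, and FS = CRR / CSR.
# The CSR follows the standard simplified Seed-Idriss expression; the CRR
# value is a placeholder for one derived from SDS or field correlations.
import numpy as np

def vs30(thickness_m, vs_m_s):
    """Time-averaged shear-wave velocity of the upper 30 m."""
    h = np.asarray(thickness_m, float)
    v = np.asarray(vs_m_s, float)
    assert h.sum() >= 30.0, "profile must reach 30 m depth"
    return 30.0 / np.sum(h / v)

def csr(a_max_g, sigma_v, sigma_v_eff, rd=0.95):
    """Simplified cyclic stress ratio (total and effective stresses in kPa)."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

print("Vs30 =", round(vs30([5, 10, 15], [150, 220, 350]), 1), "m/s")   # hypothetical layers
crr = 0.25                                    # placeholder cyclic resistance ratio
fs = crr / csr(a_max_g=0.3, sigma_v=100.0, sigma_v_eff=60.0)
print("FS =", round(fs, 2), "(FS < 1 suggests liquefiable)" if fs < 1 else "(FS >= 1)")
```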

Keywords: screw drive testing, microtremor, ground penetrating RADAR, liquefaction

Procedia PDF Downloads 176
197 System Identification of Building Structures with Continuous Modeling

Authors: Ruichong Zhang, Fadi Sawaged, Lotfi Gargab

Abstract:

This paper introduces a wave-based approach for system identification of high-rise building structures with a pair of seismic recordings, which can be used to evaluate structural integrity and detect damage in post-earthquake structural condition assessment. The approach is founded on wave features of generalized impulse and frequency response functions (GIRF and GFRF), i.e., wave responses at one structural location to an impulsive motion at another reference location in the time and frequency domains, respectively. With a pair of seismic recordings at the two locations, GFRF is obtainable as the Fourier spectral ratio of the two recordings, and GIRF is then found with the inverse Fourier transformation of GFRF. With an appropriate continuous model for the structure, a closed-form solution of GFRF, and subsequently GIRF, can also be found in terms of wave transmission and reflection coefficients, which are related to the structural physical properties above the impulse location. Matching the two sets of GFRF and/or GIRF from the recordings and the model helps identify structural parameters such as wave velocity or shear modulus. For illustration, this study examines the ten-story Millikan Library in Pasadena, California, with recordings of the Yorba Linda earthquake of September 3, 2002. The building is modelled as piecewise continuous layers, with which GFRF is derived as a function of building parameters such as impedance, cross-sectional area, and damping. GIRF can then be found in closed form for some special cases and numerically in general. Not only does this study reveal the influence of building parameters on the wave features of GIRF and GFRF, it also shows some system-identification results, which are consistent with other vibration- and wave-based results. Finally, this paper discusses the effectiveness of the proposed model in system identification.
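
For readers who want a concrete picture of the GFRF/GIRF construction, the sketch below forms the GFRF as the Fourier spectral ratio of two recordings and the GIRF as its inverse transform; the two signals are synthetic placeholders, and the simple water-level regularization is an assumption, not the authors' procedure.

```python
# Minimal sketch: GFRF = spectral ratio of two recordings, GIRF = inverse FFT.
import numpy as np

def gfrf_girf(x_ref, x_out, dt, eps=1e-3):
    """Spectral ratio X_out/X_ref with a simple water-level regularization."""
    Xr, Xo = np.fft.rfft(x_ref), np.fft.rfft(x_out)
    water = eps * np.max(np.abs(Xr))
    gfrf = Xo * np.conj(Xr) / (np.abs(Xr) ** 2 + water ** 2)
    girf = np.fft.irfft(gfrf, n=len(x_ref))
    return np.fft.rfftfreq(len(x_ref), dt), gfrf, girf

# Synthetic pair: the "roof" record is a delayed, attenuated copy of the "basement" record.
dt, n = 0.01, 4096
basement = np.random.default_rng(1).standard_normal(n)
roof = 0.6 * np.roll(basement, 25)                    # 0.25 s apparent travel time
freqs, gfrf, girf = gfrf_girf(basement, roof, dt)
t = np.arange(n) * dt
print("GIRF peaks at t =", t[np.argmax(girf)], "s")   # ~0.25 s, the wave travel time
```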

Keywords: wave-based approach, seismic responses of buildings, wave propagation in structures, construction

Procedia PDF Downloads 213
196 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on Tunable Diode Laser Absorption Spectroscopy (TDLAS) with a Wavelength Modulation Spectroscopy (WMS) technique in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage of products over the whole shelf life of the product, in the presence of different microorganisms. The instrument’s optical front end has been designed to be integrated in a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed in order to calibrate the instrument in the field, including on containers that are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained through an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation test measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The other shows the dissolution transient with a non-saturated liquid medium in the presence of a carbon dioxide-rich headspace atmosphere.

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 297
195 Creative Mathematics – Action Research of a Professional Development Program in an Icelandic Compulsory School

Authors: Osk Dagsdottir

Abstract:

Background—Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal subjects and patients requires knowledge of the gait parameters of normal children in the specific age group. However, there is still a lack of gait databases for normal children of different ages. Objectives—This study aims to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods—Fifty-three normal children (34 boys, 19 girls) were recruited in this study. All the children were aged between 5 and 16 years old. Three age groups were defined: young child (5-7), child (8-11), and adolescent (12-16). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at their comfortable speed along a 10-meter walkway. Each participant walked up to 20 trials. Three good trials were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, and stride length, and joint parameters, e.g., joint angles, forces, moments, etc. Moreover, each gait cycle was divided into 8 phases. The range of motion (ROM) angles of the pelvis, hip, knee, and ankle joints in three planes of both limbs were calculated using an in-house program. Results—The temporal-spatial variables of the three age groups of normal children were compared, and a significant difference (p < 0.05) was found between the groups. The step length and walking speed gradually increased from the young child to the adolescent group, while cadence gradually decreased from the young child to the adolescent group. The mean and standard deviation (SD) of the step length of the young child, child, and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m, and 0.672 ± 0.053 m, respectively. The mean and SD of the cadence of the young child, child, and adolescent groups were 140.11±15.79 steps/min, 129±11.84 steps/min, and 115.96±6.47 steps/min, respectively. Moreover, it was observed that there were significant differences in kinematic parameters, both over the whole gait cycle and within each phase. For example, the ROM of the knee angle in the sagittal plane over the whole cycle in the young child group (65.03±0.52 deg) is larger than in the child group (63.47±0.47 deg). Conclusion—Our results showed that there are significant differences between the age groups across the gait phases, and thus children's walking performance changes with age. Therefore, it is important for the clinician to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
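
As an illustration of the kind of group comparison reported above (not the study's pipeline), the sketch below computes a range of motion per gait cycle and runs a one-way ANOVA across three age groups; the joint-angle curves are synthetic placeholders.

```python
# Minimal sketch: ROM per gait cycle and a one-way ANOVA across age groups.
import numpy as np
from scipy.stats import f_oneway

def range_of_motion(joint_angle_deg):
    """ROM of one gait cycle = maximum angle minus minimum angle."""
    a = np.asarray(joint_angle_deg, float)
    return a.max() - a.min()

rng = np.random.default_rng(2)

def fake_group(n_subjects, nominal_rom):
    """Hypothetical sagittal knee-angle curves (101 samples per gait cycle)."""
    return [range_of_motion(nominal_rom * np.sin(np.linspace(0, np.pi, 101))
                            + rng.normal(0, 1, 101)) for _ in range(n_subjects)]

young, child, adolescent = fake_group(15, 65), fake_group(20, 63), fake_group(18, 61)
f_stat, p_val = f_oneway(young, child, adolescent)
print(f"knee ROM ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")
```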

Keywords: action research, creative learning, mathematics education, professional development

Procedia PDF Downloads 91
194 A Comparative Analysis of Various Companding Techniques Used to Reduce PAPR in VLC Systems

Authors: Arushi Singh, Anjana Jain, Prakash Vyavahare

Abstract:

Recently, Li-Fi (light fidelity), based on the VLC (visible light communication) technique and up to 100 times faster than Wi-Fi, has been launched. The 5G mobile communication system is now proposed to use VLC-OFDM as its transmission technique. The VLC system, which operates with visible light, is considered for efficient spectrum use and easy intensity modulation through LEDs. The reason for the high speed of VLC is the LED, which can flicker incredibly fast (on the order of MHz). Another advantage of employing LEDs is that they act as low-pass filters, resulting in no out-of-band emission. The VLC system falls under the category of ‘green technology’ for utilizing LEDs. At present, OFDM is used for high data rates, interference immunity, and high spectral efficiency. In spite of these advantages, OFDM suffers from large PAPR, ICI among carriers, and frequency offset errors. Since the data transmission technique used in the VLC system is OFDM, the system suffers the drawbacks of both OFDM and VLC: the non-linearity due to the non-linear characteristics of the LED, and the PAPR of OFDM, which drives the high-power amplifier into its non-linear region. The proposed paper focuses on the reduction of PAPR in VLC-OFDM systems. Many techniques have been applied to reduce PAPR: clipping, which introduces distortion in the carrier; selective mapping, which wastes bandwidth; and partial transmit sequences, which are very complex due to the exponentially increasing number of sub-blocks. The paper discusses three companding techniques, namely the µ-law, A-law, and advanced A-law companding techniques. The analysis shows that the advanced A-law companding technique reduces the PAPR of the signal by adjusting the companding parameter within a suitable range. VLC-OFDM systems are the future of wireless communication, but non-linearity in VLC-OFDM is a severe issue. The proposed paper discusses techniques to reduce PAPR, one of the non-linearities of the system. The companding techniques mentioned in this paper provide better results without increasing the complexity of the system.
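
To make the companding step concrete, the sketch below applies classic µ-law companding to one OFDM symbol and compares the PAPR before and after; the subcarrier count, modulation, and µ value are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: mu-law companding of an OFDM time-domain signal and its PAPR.
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def mu_law_compand(x, mu=4.0):
    """Compress the signal envelope with the mu-law curve, preserving phase."""
    peak = np.max(np.abs(x))
    mag = np.log1p(mu * np.abs(x) / peak) / np.log1p(mu)
    return peak * mag * np.exp(1j * np.angle(x))

rng = np.random.default_rng(3)
qpsk = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
ofdm = np.fft.ifft(qpsk)                       # one hypothetical OFDM symbol

print(f"PAPR original : {papr_db(ofdm):.2f} dB")
print(f"PAPR companded: {papr_db(mu_law_compand(ofdm)):.2f} dB")
```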

Keywords: non-linear companding techniques, peak to average power ratio (PAPR), visible light communication (VLC), VLC-OFDM

Procedia PDF Downloads 266
193 Urinary Incontinence and Performance in Elite Athletes

Authors: María Barbaño Acevedo Gómez, Elena Sonsoles Rodríguez López, Sofía Olivia Calvo Moreno, Ángel Basas García, Christophe Ramírez Parenteau

Abstract:

Introduction: Urinary incontinence (UI) is defined as the involuntary leakage of urine. In persons who practice sport, its prevalence is 36.1% (95% CI 26.5%–46.8%) and varies, as it seems to depend on the intensity of exercise, movements, and impact on the ground. Such high-impact sports are likely to generate higher intra-abdominal pressures and lead to pelvic floor muscle weakness. Although physical exercise reduces the risk of suffering from many diseases, the mentality of an elite athlete is not to optimize their health; achieving their goals can put their health at risk. Furthermore, feeling or suffering any discomfort during training seems to be regarded as normal within the demands of elite sport. Objective: The main objective of the present study was to determine the effects of UI on sports performance in athletes. Methods: This was an observational study conducted in 754 elite athletes. After answering questions about the pelvic floor, UI, and sport-related data, participants completed the International Consultation on Incontinence Questionnaire-Urinary Incontinence Short Form (ICIQ-SF) and the ISI (Incontinence Severity Index). Results: 48.8% of the athletes reported leakage at rest, in the preseason, and/or in competition (χ2 [3] = 3.64; p = 0.302), with the competition period (29.1%) being the most frequent setting for urine leakage. Of the elite athletes surveyed, 33% had UI according to the ICIQ-SF (mean age 23.75 ± 7.74 years). Elite athletes with UI (5.31 ± 1.07 days) dedicate significantly more days per week to training [M = 0.28; 95% CI = 0.08-0.48; t (752) = 2.78; p = 0.005] than those without UI. Regarding frequency, 59.7% lose urine once a week, 25.6% lose urine more than 3 times a week, and 14.7% daily. Regarding the amount, approximately 15% claim to lose a moderate to abundant amount. Athletes with the highest number of urine leaks during their training report that UI affects their daily life more (r = 0.259; p = 0.001), present a greater number of losses in their day-to-day life (r = 0.341; p < 0.001), and show greater severity of UI (r = 0.341; p < 0.001). Conclusions: Athletes consider that UI affects them negatively in their daily routine; 30.9% report a severity between moderate and severe in their daily routine, and 29.1% lose urine in the competition period. An interesting fact is that more than half of the sample were elite athletes who compete at the highest level (Olympic Games, World and European Championships), for whom dedication to sport occupies a large part of their lives. The most frequent period in which athletes suffer urine leakage is competition, when athletes must manage many emotions to achieve their best performance; if urine losses are added at those moments, it is possible that their performance could be affected.
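
For orientation only, the sketch below reproduces the kinds of statistics quoted above (a prevalence with a 95% confidence interval, a Pearson correlation, and an independent-samples t-test) on placeholder data; the counts and distributions are hypothetical, not the study's dataset.

```python
# Minimal sketch: prevalence CI, Pearson correlation, and t-test on fake data.
import numpy as np
from scipy import stats

def prevalence_ci(k, n, z=1.96):
    """Simple Wald 95% confidence interval for a proportion k/n."""
    p = k / n
    se = np.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(k=249, n=754)             # hypothetical counts (~33% with UI)
print(f"UI prevalence = {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")

rng = np.random.default_rng(4)
leaks = rng.poisson(2, 200)                          # leakage episodes per week, placeholder
severity = 0.8 * leaks + rng.normal(0, 1, 200)       # severity score, placeholder
r, p_r = stats.pearsonr(leaks, severity)
print(f"leaks vs. severity: r = {r:.3f}, p = {p_r:.3g}")

days_ui = rng.normal(5.31, 1.07, 249)                # training days/week, with UI
days_no_ui = rng.normal(5.03, 1.10, 505)             # training days/week, without UI
t, p_t = stats.ttest_ind(days_ui, days_no_ui)
print(f"training days: t = {t:.2f}, p = {p_t:.3g}")
```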

Keywords: athletes, performance, prevalence, sport, training, urinary incontinence

Procedia PDF Downloads 108
192 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
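
The sketch below shows the general idea of a reconstruction-error detector: a small convolutional autoencoder trained on benign images, flagging inputs whose MSE exceeds a threshold. The architecture, threshold rule, and data are illustrative assumptions, not the authors' multi-modal model.

```python
# Minimal sketch (PyTorch): autoencoder reconstruction error as an adversarial flag.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),    # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_mse(model, x):
    """Per-image mean squared error between input and reconstruction."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))

model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
benign = torch.rand(8, 3, 256, 256)                  # placeholder benign batch
for _ in range(5):                                   # tiny illustrative training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(benign), benign)
    loss.backward()
    opt.step()

threshold = 1.5 * reconstruction_mse(model, benign).max().item()   # hypothetical rule
suspect = torch.rand(2, 3, 256, 256)                 # stand-in for possibly perturbed images
print("flagged:", (reconstruction_mse(model, suspect) > threshold).tolist())
```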

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 58
191 Wharton's Jelly-Derived Mesenchymal Stem Cells Modulate Heart Rate Variability and Improve Baroreflex Sensitivity in Septic Rats

Authors: Cóndor C. José, Rodrigues E. Camila, Noronha L. Irene, Dos Santos Fernando, Irigoyen M. Claudia, Andrade Lúcia

Abstract:

Sepsis induces alterations in hemodynamics and the autonomic nervous system (ANS). Autonomic activity can be quantified by measuring heart rate variability (HRV), which represents the complex interplay between the ANS and cardiac pacemaker cells. Wharton’s jelly mesenchymal stem cells (WJ-MSCs) are known to express genes and secrete factors involved in neuroprotective and immunological effects, and to improve survival in experimental septic animals. We hypothesized that WJ-MSCs play an important role in autonomic activity and in the hemodynamic effects in a cecal ligation and puncture (CLP) model of sepsis. Methods: We used flow cytometry to evaluate WJ-MSC phenotypes. We divided Wistar rats into groups: sham (sham-operated); CLP; and CLP+MSC (10⁶ WJ-MSCs, i.p., 6 h after CLP). At 24 h post-CLP, we recorded the systolic arterial pressure (SAP) and heart rate (HR) over 20 min. Spectral analysis of HR and SAP, as well as spontaneous baroreflex sensitivity (measured by bradycardic and tachycardic responses), was evaluated after recording. One-way ANOVA and post hoc Student–Newman–Keuls tests (P < 0.05) were used for data comparison. Results: WJ-MSCs were negative for CD3, CD34, CD45, and HLA-DR, whereas they were positive for CD73, CD90, and CD105. The CLP group showed a reduction in the variance of overall variability and in the high-frequency power of HR (cardiac parasympathetic activity); furthermore, there was a reduction in the low-frequency power of SAP (vascular sympathetic activity). Treatment with WJ-MSCs improved autonomic activity by increasing the high- and low-frequency power and restored baroreflex sensitivity. Conclusions: WJ-MSCs attenuate the impairment of autonomic control of the heart and vessels and might therefore play a protective role in sepsis. (Supported by FAPESP).
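
To show what the HRV spectral indices refer to, the sketch below resamples an R-R interval series, estimates its power spectrum with Welch's method, and integrates low- and high-frequency bands; the R-R data and the band limits assumed for the rat are placeholders, not the study's settings.

```python
# Minimal sketch: LF and HF power of heart rate variability from R-R intervals.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(5)
rr = 0.18 + 0.02 * rng.standard_normal(2000)          # R-R intervals (s), placeholder
t_beats = np.cumsum(rr)

fs = 20.0                                             # resampling frequency (Hz)
t_uniform = np.arange(t_beats[0], t_beats[-1], 1 / fs)
rr_uniform = interp1d(t_beats, rr, kind="cubic")(t_uniform)

f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=1024)

def band_power(f, psd, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(psd[mask], f[mask])

lf = band_power(f, psd, 0.2, 0.75)                    # assumed LF band for the rat
hf = band_power(f, psd, 0.75, 3.0)                    # assumed HF band for the rat
print(f"LF = {lf:.3e} s^2, HF = {hf:.3e} s^2, LF/HF = {lf / hf:.2f}")
```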

Keywords: baroreflex response, heart rate variability, sepsis, Wharton’s jelly-derived mesenchymal stem cells

Procedia PDF Downloads 278
190 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has been studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the structure of the universe at a large scale. However, extrapolating back further from this early state reaches a singularity which cannot be explained by modern physics, and there the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; however, highly accurate measurements reveal an equal temperature mapping across the universe, which is contradictory to the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture, the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, which is discussed in detail there. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 67
189 The Meaning Structures of Political Participation of Young Women: Preliminary Findings in a Practical Phenomenology Study

Authors: Amanda Aliende da Matta, Maria del Pilar Fogueiras Bertomeu, Valeria de Ormaechea Otalora, Maria Paz Sandin Esteban, Miriam Comet Donoso

Abstract:

This communication presents the preliminary emerging themes in a research on political participation of young women. The study follows a qualitative methodology; in particular, the applied hermeneutic phenomenological method, and the general objective of the research is to give an account of the experience of political participation as young women. The study participants are women aged 18 to 35 who have experience in political participation. The techniques of data collection are the descriptive story and the phenomenological interview. With respect to the first methodological steps, these have been: 1) collect and select stories of lived experience in political participation, 2) select descriptions of lived experience (DLEs) in political participation of the chosen stories, 3) to prepare phenomenological interviews from the selected DLEs, 4) to conduct phenomenological thematic analysis (PTA) of the DLEs. We have so far initiated the PTA on 5 vignettes. Hermeneutic phenomenology as a research approach is based on phenomenological philosophy and applied hermeneutics. Phenomenology is a descriptive philosophy of pure experience and essences, through which we seek to capture an experience at its origins without categorizing, interpreting or theorizing it. Hermeneutics, on the other hand, may be defined as a philosophical current that can be applied to data analysis. Max Van Manen wrote that hermeneutic phenomenology is a method of abstemious reflection on the basic structures of the lived experience of human existence. In hermeneutic phenomenology we focus, then, on the way we experience “things” in the first person, seeking to capture the world exactly as we experience it, not as we categorize or conceptualize it. In this study, the empirical methods used were: Lived experience description (written) and conversational interview. For these short stories, participants were asked: “What was your lived experience of participation in politics as a young woman? Can you tell me any stories or anecdotes that you think exemplify or typify your experience?”. The questions were accompanied by a list of guidelines for writing descriptive vignettes. And the analytical method was PTA. Among the provisional results, we found preliminary emerging themes, which could in the advance of the investigation result in meaning structures of political participation of young women. They are the following: - Complicity may be inherent/essential in political participation as a young woman; - Feelings may be essential/inherent in political participation as a young woman; - Hope may be essential in authentic political participation as a young woman; - Frustration may be essential in authentic political participation as a young woman; - Satisfaction may be essential in authentic political participation as a young woman; - There may be tension between individual/collective inherent/essential in political participation as a young woman; - Political participation as a young woman may include moments of public demonstration.

Keywords: applied hermeneutic phenomenology, hermeneutics, phenomenology, political participation

Procedia PDF Downloads 66
188 Radar Cross Section Modelling of Lossy Dielectrics

Authors: Ciara Pienaar, J. W. Odendaal, J. Joubert, J. C. Smit

Abstract:

The radar cross section (RCS) of dielectric objects plays an important role in many applications, such as low-observability technology development, drone detection and monitoring, as well as coastal surveillance. Various materials are used to construct the targets of interest, such as metal, wood, composite materials, radar absorbent materials, and other dielectrics. Since simulated datasets are increasingly being used to supplement in-field measurements, as simulation is more cost-effective and a larger variety of targets can be simulated, it is important to have a high level of confidence in the predicted results. Confidence can be attained through validation. Various computational electromagnetic (CEM) methods are capable of predicting the RCS of dielectric targets. This study extends previous studies by validating full-wave and asymptotic RCS simulations of dielectric targets with measured data. The paper provides measured RCS data for a number of canonical dielectric targets exhibiting different material properties. As stated previously, these measurements are used to validate numerous CEM methods. The dielectric properties are accurately characterized to reduce the uncertainties in the simulations. Finally, an analysis of the sensitivity of oblique and normal incidence scattering predictions to material characteristics is also presented. In this paper, the ability of several CEM methods, including the method of moments (MoM) and physical optics (PO), to calculate the RCS of dielectrics was validated with measured data. A few dielectrics, exhibiting different material properties, were selected, and several canonical targets, such as flat plates and cylinders, were manufactured. The RCS of these dielectric targets was measured in a compact range at the University of Pretoria, South Africa, over a frequency range of 2 to 18 GHz and a 360° azimuth angle sweep. This study also investigated the effect of slight variations in the material properties on the calculated RCS results, by varying the material properties within a realistic tolerance range and comparing the calculated RCS results. Interesting measured and simulated results have been obtained. Large discrepancies were observed between the different methods as well as the measured data. It was also observed that the accuracy of the RCS data of the dielectrics can be frequency and angle dependent. The simulated RCS for some of these materials also exhibits high sensitivity to variations in the material properties. Comparison graphs between the measured and simulated RCS datasets are presented, and their validation is discussed. Finally, the effect that small tolerances in the material properties have on the calculated RCS results is shown. Thus, the importance of accurate dielectric material properties for validation purposes is discussed.
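
As a point of reference for the asymptotic predictions discussed above, the sketch below evaluates the textbook physical-optics broadside RCS of a flat rectangular plate, σ = 4πA²/λ², over the same 2-18 GHz band. The plain formula assumes a perfectly conducting plate, so a lossy dielectric plate would be further scaled by its reflection properties, and the plate size used here is hypothetical.

```python
# Minimal sketch: PO broadside RCS of a flat plate, sigma = 4*pi*A^2 / lambda^2.
import numpy as np

C0 = 299_792_458.0                           # speed of light (m/s)

def plate_rcs_broadside(a_m, b_m, f_hz):
    """Broadside monostatic RCS of an a x b plate, in square meters."""
    lam = C0 / np.asarray(f_hz, float)
    area = a_m * b_m
    return 4 * np.pi * area ** 2 / lam ** 2

freqs = np.linspace(2e9, 18e9, 5)            # matches the 2-18 GHz measurement band
rcs_dbsm = 10 * np.log10(plate_rcs_broadside(0.2, 0.2, freqs))   # hypothetical 20 cm plate
for f, s in zip(freqs, rcs_dbsm):
    print(f"{f / 1e9:5.1f} GHz : {s:6.2f} dBsm")
```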

Keywords: asymptotic, CEM, dielectric scattering, full-wave, measurements, radar cross section, validation

Procedia PDF Downloads 220
187 Prototyping Exercise for the Construction of an Ancestral Violentometer in Buenaventura, Valle Del Cauca

Authors: Mariana Calderón, Paola Montenegro, Diana Moreno

Abstract:

Through this study, it was possible to identify the different levels and types of violence, both individual and collective, experienced by women, girls, and the sexually diverse population of Buenaventura translated from the different tensions and threats against ancestrality and accounting for a social and political context of violence related to race and geopolitical location. These threats are related to: the stigma and oblivion imposed on practices and knowledge; the imposition of the hegemonic culture; the imposition of external customs as a way of erasing ancestrality; the singling out and persecution of those who practice it; the violence that the health system has exercised against ancestral knowledge and practices, especially in the case of midwives; the persecution of the Catholic religion against this knowledge and practices; the difficulties in maintaining the practices in the displacement from rural to urban areas; the use and control of ancestral knowledge and practices by the armed actors; the rejection and stigma exercised by the public forces; and finally, the murder of the wise women at the hands of the armed actors. This research made it possible to understand the importance of using tools such as the violence meter to support processes of resistance to violence against women, girls, and sexually diverse people; however, it is essential that these tools be adapted to the specific contexts of the people. In the analysis of violence, it was possible to identify that these not only affect women, girls, and sexually diverse people individually but also have collective effects that threaten the territory and the ancestral culture to which they belong. Ancestrality has been the object of violence, but at the same time, it has been the place from which resistance has been organized. The identification of the violence suffered by women, girls, and sexually diverse people is also an opportunity to make visible the forms of resistance of women and communities in the face of this violence. This study examines how women, girls, and sexually diverse people in Buenaventura have been exposed to sexism and racism, which historically have been translated into specific forms of violence, in addition to the other forms of violence already identified by the traditional models of the violentometer. A qualitative approach was used in the study. The study included the participation of more than 40 people and two women's organizations from Buenaventura. The participants came from both urban and rural areas of the municipality of Buenaventura and were over 15 years of age. The participation of such a diverse group allowed for the exchange of knowledge and experiences, particularly between younger and older people. The instrument used for the exercise was previously defined with the leaders of the organizations and consisted of four moments that referred to i) ancestry, ii) threats to ancestry, iii) identification of resistance and iv) construction of the ancestral violentometer.

Keywords: violence against women, intersectionality, sexual and reproductive rights, black communities

Procedia PDF Downloads 63
186 Europium Chelates as a Platform for Biosensing

Authors: Eiman A. Al-Enezi, Gin Jose, Sikha Saha, Paul Millner

Abstract:

Rare earth nanotechnology has gained a considerable amount of interest in the field of biosensing due to the unique luminescence properties of lanthanides. Chelation of rare earth ions plays a significant role in biological labelling applications, including medical diagnostics, due to their distinct excitation and emission wavelengths, the variety of their spectral properties, sharp emission peaks, and long fluorescence lifetimes. We aimed to develop a platform for biosensors based on Europium (Eu³⁺) chelates against biomarkers of cardiac injury (heart-type fatty acid binding protein; H-FABP3) and stroke (glial fibrillary acidic protein; GFAP). An additional novelty in this project is the use of synthetic binding proteins (Affimers), which could offer an excellent alternative targeting strategy to existing antibodies. Anti-GFAP and anti-H-FABP3 Affimer binders were modified to increase the number of carboxy functionalities. Europium nitrate was then incubated with the modified Affimer. The luminescence characteristics of the Eu³⁺ complexes with the modified Affimers and with antibodies against GFAP and H-FABP3 were measured against different concentrations of the respective analytes at an excitation wavelength of 395 nm. Bovine serum albumin (BSA) was used as a control against the IgG/Affimer Eu³⁺ complexes. The emission spectrum of the Eu³⁺ complex showed 5 emission peaks ranging between 550 and 750 nm, with the highest intensity peaks at 592 and 698 nm. The fluorescence intensity of the Eu³⁺ chelates with the modified Affimers or antibodies increased significantly, by 4-7 fold, compared to the emission spectrum of the Eu³⁺ complex alone. The fluorescence intensity of the Affimer complex was quenched proportionally with increasing analyte concentration, but this did not occur with the antibody complex. In contrast, the fluorescence intensity of the Eu³⁺ complex increased slightly with increasing concentration of BSA. These data demonstrate that modified Affimer Eu³⁺ complexes can function as nanobiosensors with potential diagnostic and analytical applications.

Keywords: lanthanides, europium, chelates, biosensors

Procedia PDF Downloads 503
185 The Observable Method for the Regularization of Shock-Interface Interactions

Authors: Teng Li, Kamran Mohseni

Abstract:

This paper presents an inviscid regularization technique that is capable of regularizing shocks and sharp interfaces simultaneously in shock-interface interaction simulations. The direct numerical simulation of flows involving shocks has been investigated for many years, and many numerical methods have been developed to capture shocks. However, most of these methods rely on numerical dissipation to regularize the shocks. Moreover, in high Reynolds number flows, the nonlinear terms in hyperbolic Partial Differential Equations (PDEs) dominate, constantly generating small-scale features. This makes direct numerical simulation of shocks even harder. The same difficulty occurs in two-phase flow with sharp interfaces, where the nonlinear terms in the governing equations keep sharpening the interfaces into discontinuities. The main idea of the proposed technique is to average out the small scales that are below the resolution (observable scale) of the computational grid by filtering the convective velocity in the nonlinear terms of the governing PDEs. This technique is named the “observable method”, and it results in a set of hyperbolic equations called observable equations, namely, the observable Navier-Stokes or Euler equations. The observable method has been applied to flow simulations involving shocks, turbulence, and two-phase flows, and the results are promising. In the current paper, the observable method is examined on its performance in regularizing shocks and interfaces at the same time in shock-interface interaction problems. Bubble-shock interactions and the Richtmyer-Meshkov instability are chosen in particular to be studied. The observable Euler equations are solved numerically with pseudo-spectral discretization in space and a third-order Total Variation Diminishing (TVD) Runge-Kutta method in time. Results are presented and compared with existing publications. The interface acceleration and deformation and shock reflection are particularly examined.
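
To give a feel for what "filtering the convective velocity" means in practice, the sketch below integrates a toy 1-D convectively filtered Burgers equation, u_t + ū u_x = 0, where ū is a Helmholtz-filtered velocity, using pseudo-spectral space discretization and a third-order TVD Runge-Kutta time stepper; this is a simplified stand-in under assumed parameters, not the paper's observable Euler solver.

```python
# Minimal sketch: 1-D filtered Burgers equation, pseudo-spectral + TVD RK3.
import numpy as np

N, L, alpha, dt, steps = 256, 2 * np.pi, 0.05, 1e-3, 500
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)            # angular wavenumbers

def helmholtz_filter(u):
    """Average out sub-observable scales: u_hat / (1 + alpha^2 k^2)."""
    return np.real(np.fft.ifft(np.fft.fft(u) / (1 + alpha ** 2 * k ** 2)))

def rhs(u):
    ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))  # spectral derivative
    return -helmholtz_filter(u) * ux                    # filtered convective velocity

def tvd_rk3_step(u):
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3 + 2 / 3 * (u2 + dt * rhs(u2))

u = np.sin(x)                                          # steepening initial condition
for _ in range(steps):
    u = tvd_rk3_step(u)
print("max |u| after", steps * dt, "time units:", round(float(np.abs(u).max()), 3))
```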

Keywords: compressible flow simulation, inviscid regularization, Richtmyer-Meshkov instability, shock-bubble interactions

Procedia PDF Downloads 330
184 Sorghum Grains Grading for Food, Feed, and Fuel Using NIR Spectroscopy

Authors: Irsa Ejaz, Siyang He, Wei Li, Naiyue Hu, Chaochen Tang, Songbo Li, Meng Li, Boubacar Diallo, Guanghui Xie, Kang Yu

Abstract:

Background: Near-infrared spectroscopy (NIR) is a non-destructive, fast, and low-cost method to measure the grain quality of different cereals. Previously reported NIR model calibrations using whole-grain spectra had moderate accuracy; improved predictions may be achievable by using the spectra of flour samples rather than spectra collected from whole grains. However, the feasibility of determining the critical biochemicals related to the classification of food, feed, and fuel products has not been adequately investigated. Objectives: To evaluate the feasibility of using NIRS and the influence of four sample types (whole grains, flours, hulled grain flours, and hull-less grain flours) on the prediction of chemical components to improve grain sorting efficiency for human food, animal feed, and biofuel. Methods: NIR was applied in this study to determine eight biochemicals in four types of sorghum samples: hulled grain flours, hull-less grain flours, whole grains, and grain flours. A total of 20 hybrids of sorghum grains were selected from two locations in China. Using the NIR spectra and the wet-chemically measured biochemical data, partial least squares regression (PLSR) was used to construct the prediction models. Results: The results showed that sorghum grain morphology and sample format affected the prediction of biochemicals. Using NIR data of grain flours generally improved the prediction compared with the use of NIR data of whole grains. In addition, using the spectra of whole grains enabled comparable predictions, which are recommended when a non-destructive and rapid analysis is required. Compared with the hulled grain flours, hull-less grain flours allowed improved predictions for tannin, cellulose, and hemicellulose using NIR data. Conclusion: The established PLSR models could enable food, feed, and fuel producers to efficiently evaluate a large number of samples by predicting the required biochemical components in sorghum grains without destruction.
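
For concreteness, the sketch below fits a PLSR model mapping NIR spectra to one biochemical component with scikit-learn; the spectra, reference values, and number of latent variables are placeholders, not the study's calibration.

```python
# Minimal sketch: PLSR calibration of NIR spectra against a reference trait.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
X = rng.random((80, 700))                          # 80 samples x 700 NIR wavelengths, placeholder
y = X @ rng.random(700) + rng.normal(0, 0.5, 80)   # synthetic reference values

pls = PLSRegression(n_components=10)               # latent variables; tune by cross-validation
r2_cv = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", np.round(r2_cv, 2), "mean:", round(r2_cv.mean(), 2))

pls.fit(X, y)
print("predictions for first 3 samples:", pls.predict(X[:3]).ravel().round(2))
```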

Keywords: FT-NIR, sorghum grains, biochemical composition, food, feed, fuel, PLSR

Procedia PDF Downloads 44
183 Quality Assessment of the Essential Oil from Eucalyptus globulus Labill of Blida (Algeria) Origin

Authors: M. A. Ferhat, M. N. Boukhatem, F. Chemat

Abstract:

Eucalyptus essential oil is extracted from Eucalyptus globulus of the Myrtaceae family, also known as Tasmanian blue gum or blue gum. Despite the reputation earned by the aromatic and medicinal plants of Algeria, the chemical constituents of Eucalyptus globulus essential oil (EGEO) of Blida origin have not previously been investigated. The objectives of this study were: (i) the extraction of the essential oil from the leaves of Eucalyptus globulus Labill., Myrtaceae, grown in Algeria, and the quantification of its yield, (ii) the identification and quantification of the compounds in the essential oil obtained, and (iii) the determination of the physical and chemical properties of the EGEO. Thus, the present study has been conducted for the determination of the chemical constituents and the different physico-chemical properties of the EGEO. The chemical composition of the EGEO, grown in Algeria, was analysed by Gas Chromatography-Mass Spectrometry. The chemical components were identified on the basis of retention time and by comparison with a mass spectral database of standard compounds. Relative amounts of the detected compounds were calculated on the basis of GC peak areas. Fresh leaves of E. globulus on steam distillation yielded 0.96% (v/w) of essential oil, and the analysis resulted in the identification of a total of 11 constituents, with 1,8-cineole (85.8%), α-pinene (7.2%), and β-myrcene (1.5%) being the main components. Other notable compounds identified in the oil were β-pinene, limonene, α-phellandrene, γ-terpinene, linalool, pinocarveol, terpinen-4-ol, and α-terpineol. The physical properties, such as specific gravity, refractive index, and optical rotation, and the chemical properties, such as saponification value, acid number, and iodine number, of the EGEO were examined. The extracted oil was found to have a refractive index of 1.4602-1.4623, a specific gravity of 0.918-0.919, and an optical rotation of +9 to +10, which satisfy the standards stipulated by the European Pharmacopoeia. All the physical and chemical parameters were in the range indicated by the ISO standards. Our findings will help to assess the quality of the Eucalyptus oil, which is important in the production of high-value essential oils that will help to improve the economic condition of the community as well as the nation.

Keywords: chemical composition, essential oil, eucalyptol, gas chromatography

Procedia PDF Downloads 295
182 In vitro Characterization of Mice Bone Microstructural Changes by Low-Field and High-Field Nuclear Magnetic Resonance

Authors: Q. Ni, J. A. Serna, D. Holland, X. Wang

Abstract:

The objective of this study is to develop Nuclear Magnetic Resonance (NMR) techniques to enhance bone-related research on normal and disuse (biglycan knockout) mouse bone in vitro by using both low-field and high-field NMR. It is known that the total amplitude of the T₂ relaxation envelopes, measured by the Carr-Purcell-Meiboom-Gill NMR spin echo train (CPMG), is a representation of the liquid phase inside the pores. Therefore, the NMR CPMG magnetization amplitude can be converted to a volume of water after calibration with the NMR signal amplitude of a known volume of water. In this study, the distribution of mobile water and the porosity are determined by using the low-field (20 MHz) CPMG relaxation technique, and the pore size distributions are determined by a computational inversion of the relaxation data. It is also known that the total proton intensity of magnetization from the NMR free induction decay (FID) signal is due to the water present inside the pores (mobile water), the water that has undergone hydration with the bone (bound water), and the protons in the collagen and mineral matter (solid-like protons). Therefore, the components of total mobile and bound water within bone can be determined by the low-field NMR free induction decay technique. Furthermore, the bound water in the solid phase (mineral and organic constituents), especially the dominant component of calcium hydroxyapatite (Ca₁₀(OH)₂(PO₄)₆), can be determined by using high-field (400 MHz) magic angle spinning (MAS) NMR. With the MAS technique reducing the inhomogeneous and susceptibility broadening of the NMR spectral linewidth in the liquid-solid mix, we can, in particular, conduct further research into the ¹H and ³¹P elements and environments of bone materials to identify the locations of bound water, such as the OH⁻ group, within the minerals and bone architecture. We hypothesize that low-field and high-field magic angle spinning NMR can provide a more complete interpretation of the water distribution, particularly of bound water; these data are important for assessing bone quality and predicting the mechanical behavior of bone.
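
To illustrate what the computational inversion of a CPMG echo train involves, the sketch below inverts a synthetic multi-exponential decay into a T₂ amplitude distribution with regularized non-negative least squares; the echo times, T₂ grid, and regularization weight are assumptions, not the study's processing parameters.

```python
# Minimal sketch: T2 distribution from a CPMG decay via regularized NNLS.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(2e-4, 0.5, 250)                        # echo times (s), placeholder
T2_grid = np.logspace(-4, 0, 60)                       # candidate T2 values (s)
K = np.exp(-t[:, None] / T2_grid[None, :])             # multi-exponential kernel

rng = np.random.default_rng(7)
signal = (0.3 * np.exp(-t / 3e-3)                      # fast pool (e.g., bound water)
          + 0.7 * np.exp(-t / 8e-2)                    # slow pool (e.g., mobile water)
          + 0.005 * rng.standard_normal(t.size))       # noise

lam = 0.1                                              # Tikhonov-style regularization weight
A = np.vstack([K, lam * np.eye(T2_grid.size)])
b = np.concatenate([signal, np.zeros(T2_grid.size)])
amplitudes, _ = nnls(A, b)

for T2, a in zip(T2_grid, amplitudes):
    if a > 0.02:                                       # report only the significant peaks
        print(f"T2 = {T2 * 1e3:7.2f} ms, amplitude = {a:.2f}")
```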

Keywords: bone, mice bone, NMR, water in bone

Procedia PDF Downloads 156
181 Festivals and Weddings in India during Corona Pandemic

Authors: Arul Aram, Vishnu Priya, Monicka Karunanithi

Abstract:

In India in particular, festivals are occasions of celebration. They create beautiful moments to cherish. Mostly, people visit their native places to celebrate with their loved ones. So it is with wedding celebrations. The Covid-19 pandemic came upon us unexpectedly, and to fight it, festivals and weddings were celebrated unusually. Crowded places were deserted, mass gatherings were avoided, and changes and alterations were made to rituals and celebrations. The warmth people usually feel in their hearts during any festival or wedding disappeared. Some aspects of the celebrations became virtual/digital rather than real -- for instance, digital greetings/invitations, digital conduct of ceremonies by priests, YouTube worship, online/digital cash gifts, and digital audiences for weddings. Each festival has different rituals which are followed devoutly in every family, but the pandemic warranted some compromises on these traditions. Likewise, a marriage is a beautiful bond between two families in which a lot of traditional customs are followed. Wedding ceremonies are colorful, and celebrations may extend for several days. People in India spend considerable financial resources to prepare for and celebrate weddings. The bride's and the groom's homes are fully decorated with colors, balloons, and other decorations. The wedding rituals and celebrations vary by religion, region, preference, and the resources of the groom, the bride, and their families, and can range from one-day to multi-day events. But the Covid-19 pandemic changed people's mindset about ceremonies. The lockdown affected weddings and the industries that support them and made people postpone, or at times advance without fanfare, their 'big day.' People now adopt protocols, guidelines, and safety measures to reduce the risk and minimize fear during celebrations. The study shall look into: how the pandemic shattered the expectations of people celebrating; the problems faced economically by people and service providers who benefit from the celebrations; and the alterations made in the rituals or practices of our culture for the safety of families. The study shall employ questionnaires, interviews, and visual ethnography to collect data. The study found that during the complete lockdown, people did not buy new clothes, sweets, or snacks, as they generally did before the pandemic. Almost all of them kept their celebrations low-key, and some did not celebrate at all. Digital media played a role in keeping the celebration alive, as people used it to wish their friends and families virtually. During the partial unlock, the situation was under control, and people began to go out and see a few family members and friends. They went shopping and bought new clothes and necessities, but they did so while following safety precautions. An equal percentage of people shopped online. Although people continued to feel disappointed, they were less stressed as life was returning to normal.

Keywords: covid-19, digital, festivals, India, wedding

Procedia PDF Downloads 165
180 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds

Authors: Zeina Merabi, Arij Dao

Abstract:

The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological, and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity property exhibited by HVCX, as well as to replicate the in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents for each class of neurons represented in our networks are based on pharmacological studies, rendering our networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could potentially help to explain some aspects of combination sensitivity, including 1) the interplay between inhibitory interneuron activity and the post-inhibitory firing of the HVCX neurons enabled by T-type Ca2+ and H currents, 2) the temporal summation at the TCS site of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds. The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model.

Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration

Procedia PDF Downloads 43
179 Studying Second Language Learners' Language Behavior from Conversation Analysis Perspective

Authors: Yanyan Wang

Abstract:

This paper on second language teaching and learning uses a conversation analysis (CA) approach and focuses on how second language learners of Chinese do repair when making clarification requests. In order to demonstrate their behavior in interaction, a comparison was made between native speakers of Chinese and non-native speakers of Chinese. The significance of the research is to make second language teachers and learners aware of repair and of how to seek clarification. Using the methodology of CA, the research involved two sets of naturally occurring recordings, one of native speaker students and the other of non-native speaker students. Both sets of recordings were telephone talks between students and teachers. There were 50 native speaker students and 50 non-native speaker students. From repeated listening to the recordings, the parts with repairs for clarification were selected for analysis, i.e. the moments in the talk when students had problems in understanding or hearing the speaker and had to seek clarification. For example, 'Sorry, I do not understand' and 'Can you repeat the question?' were used as repairs to make clarification requests. In the data, there were 43 such cases from native speaker students and 88 cases from non-native speaker students; the non-native speaker students were more likely to use repair to seek clarification. The analysis of how the students make clarification requests during conversation was carried out by investigating how the students initiated problems and how the teachers repaired them. In CA terms, this is called other-initiated self-repair (OISR), which refers to student-initiated, teacher-completed repair in this research. The findings show that, in initiating repair, native speaker students pay more attention to mutual understanding (inter-subjectivity), while non-native speaker students, due to their lack of language proficiency, pay more attention to switches in their status of knowledge (epistemic status). There are three major differences: 1) native Chinese students more often initiate closed-class OISR (seeking specific information in the request), such as repeating a word or phrase from the previous turn, while non-native students more frequently initiate open-class OISR (not specifying what needs clarification), such as 'sorry, I don't understand'; 2) native speakers' clarification requests are treated by the teacher as concerning understanding of the content, while non-native learners' clarification requests are treated by the teacher as a language proficiency problem; 3) native speakers do not treat repair as a knowledge issue, and there is no third position in their repair sequences to close the repair, while non-native learners take the repair sequence as a time to adjust their knowledge, using a clear closing third-position token such as 'oh' to close the repair sequence so that the talk can return to the topic. In conclusion, this paper uses a conversation analysis approach to compare the differences between native Chinese speakers and non-native Chinese learners in their ways of conducting repair when making clarification requests. The findings are useful for future Chinese language teaching and learning, especially in teaching pragmatics such as requests.

Keywords: conversation analysis (CA), clarification request, second language (L2), teaching implication

Procedia PDF Downloads 238
178 Photoprotective and Antigenotoxic Effects of a Mixture of Posoqueria latifolia Flower Extract and Kaempferol Against Ultraviolet B Radiation

Authors: Silvia Ximena Barrios, Diego Armando Villamizar Mantilla, Raquel Elvira Ocazionez, Elena E. Stashenko, María Pilar Vinardell, Jorge Luis Fuentes

Abstract:

Introduction: Skin overexposure to solar radiation is a serious public health concern because of its potential carcinogenicity. Therefore, preventive protection strategies using photoprotective agents are critical to counteract the harmful effects of solar radiation. Plants may be a source of photoprotective compounds that inhibit the cellular mutations involved in skin cancer initiation. This work evaluated the photoprotective and antigenotoxic effects against ultraviolet B (UVB) radiation of a mixture of Posoqueria latifolia flower extract and kaempferol (MixPoKa). Methods: The photoprotective efficacy of MixPoKa (Posoqueria latifolia flower extract 250 μg/ml and kaempferol 349.5 μM) was evaluated using in vitro indices such as the in vitro sun protection factor (SPF in vitro) and the critical wavelength (λc). The photostability of MixPoKa (Eff) at human minimal erythema doses (MED), according to the Fitzpatrick skin scale, was also estimated. Cytotoxicity and genotoxicity/antigenotoxicity were studied in MRC5 human fibroblasts using the trypan blue exclusion and comet assays, respectively. The kinetics of repair of genetic damage post-irradiation, in the presence and absence of MixPoKa, was also evaluated. Results: The UV absorbance of MixPoKa was high across the spectral band between 200 and 400 nm. The UVB photoprotection efficacy of MixPoKa was high (SPF in vitro = 25.70 ± 0.06), the photoprotection spectrum was broad (λc = 380 ± 0 nm), and the mixture was photostable (Eff = 92.3–100.0%). MixPoKa was neither cytotoxic nor genotoxic in MRC5 human fibroblasts, but showed a significant antigenotoxic effect against UVB radiation. Additionally, MixPoKa stimulated DNA repair post-irradiation. The potential of this phytochemical mixture as a sunscreen ingredient is discussed. Conclusion: MixPoKa showed a significant antigenotoxic effect against UVB radiation and stimulated DNA repair after irradiation. MixPoKa could be used as an ingredient in a sunscreen cream.
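For reference, the sketch below shows how the two in vitro indices named above are commonly computed from a UV absorbance spectrum; the absorbance, erythema action spectrum, and irradiance arrays are placeholders, not the study's measured data.

```python
import numpy as np

# Hypothetical sketch (not the authors' code): in vitro SPF and critical
# wavelength computed from an absorbance spectrum A(lambda), following the
# commonly used Diffey/ISO-style definitions.  E (erythema action spectrum)
# and I (solar spectral irradiance) are uniform placeholders here.

wl = np.arange(290, 401)                          # wavelength grid, 290-400 nm, 1 nm step
A  = 1.2 * np.exp(-((wl - 310) / 45.0) ** 2)      # dummy absorbance spectrum
E  = np.ones_like(wl, dtype=float)                # placeholder erythema action spectrum
I  = np.ones_like(wl, dtype=float)                # placeholder solar irradiance

# SPF_in_vitro = sum(E*I) / sum(E*I*10^-A)  (constant 1 nm step cancels out)
spf = np.sum(E * I) / np.sum(E * I * 10.0 ** (-A))

# Critical wavelength: smallest lambda_c below which 90% of the 290-400 nm
# integrated absorbance is accumulated.
cum = np.cumsum(A)
lambda_c = wl[np.searchsorted(cum, 0.9 * cum[-1])]

print(f"SPF in vitro ~ {spf:.1f}, critical wavelength ~ {lambda_c} nm")
```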

Keywords: flower extract, photoprotection, antigenotoxicity, cytotoxicity, genotoxicity

Procedia PDF Downloads 55
177 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times can be studied; this is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. Extrapolating back further from this early state, however, reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy so that an equal maximum temperature can be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would have begun with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level that is referred to as the "base energy." The governing principles of the base energy are discussed in detail in our second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. Therefore, the proposed concept in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy as one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 71
176 AI/ML Atmospheric Parameters Retrieval Using the “Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN)”

Authors: Thomas Monahan, Nicolas Gorius, Thanh Nguyen

Abstract:

Exoplanet atmospheric parameter retrieval is a complex, computationally intensive, inverse modeling problem in which an exoplanet's atmospheric composition is extracted from an observed spectrum. Traditional Bayesian sampling methods require extensive time and computation, involving algorithms that compare large numbers of known atmospheric models to the input spectral data. Runtimes are directly proportional to the number of parameters under consideration. These increased power and runtime requirements are difficult to accommodate in space missions, where model size, speed, and power consumption are of particular importance. The use of traditional Bayesian sampling methods therefore compromises either model complexity or sampling accuracy. The Atmospheric Retrievals conditional Generative Adversarial Network (ARcGAN) is a deep convolutional generative adversarial network that improves on the previous model's speed and accuracy. We demonstrate the efficacy of artificial intelligence to quickly and reliably predict atmospheric parameters and present it as a viable alternative to slow and computationally heavy Bayesian methods. In addition to its broad applicability across instruments and planetary types, ARcGAN has been designed to function on low-power application-specific integrated circuits. The application of edge computing to atmospheric retrievals allows for real- or near-real-time quantification of atmospheric constituents at the instrument level. Additionally, edge computing provides both high-performance and power-efficient computing for AI applications, both of which are critical for space missions. With the edge computing chip implementation, ARcGAN serves as a strong basis for the development of a similar machine-learning algorithm to reduce the downlinked data volume from the Compact Ultraviolet to Visible Imaging Spectrometer (CUVIS) onboard the DAVINCI mission to Venus.
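To make the retrieval-by-conditional-GAN idea concrete, here is a heavily simplified, fully connected sketch of one training step of such a model. The spectrum length, parameter count, layer sizes, and data are invented; this is not the ARcGAN architecture (which is convolutional), only an illustration of the conditioning scheme.

```python
import torch
import torch.nn as nn

# Toy conditional GAN: the generator maps (noise, observed spectrum) to
# atmospheric parameters; the discriminator judges (parameters, spectrum)
# pairs.  All sizes below are assumptions for illustration only.

SPEC_LEN, N_PARAMS, Z_DIM = 256, 6, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + SPEC_LEN, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, N_PARAMS),          # e.g. abundances, temperature
        )
    def forward(self, z, spectrum):
        return self.net(torch.cat([z, spectrum], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS + SPEC_LEN, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, params, spectrum):
        return self.net(torch.cat([params, spectrum], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

spectra = torch.randn(16, SPEC_LEN)     # placeholder observed spectra
true_p  = torch.randn(16, N_PARAMS)     # placeholder known parameters

# Discriminator step: real pairs labeled 1, generated pairs labeled 0.
fake_p = G(torch.randn(16, Z_DIM), spectra).detach()
loss_d = bce(D(true_p, spectra), torch.ones(16, 1)) + \
         bce(D(fake_p, spectra), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator.
fake_p = G(torch.randn(16, Z_DIM), spectra)
loss_g = bce(D(fake_p, spectra), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```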

Keywords: deep learning, generative adversarial network, edge computing, atmospheric parameters retrieval

Procedia PDF Downloads 152
175 Pattern Recognition Approach Based on Metabolite Profiling Using In vitro Cancer Cell Line

Authors: Amanina Iymia Jeffree, Reena Thriumani, Mohammad Iqbal Omar, Ammar Zakaria, Yumi Zuhanis Has-Yun Hashim, Ali Yeon Md Shakaff

Abstract:

Metabolite profiling was approached as a pattern recognition strategy focused on three of the cancer types that cause the most deaths, specifically lung, breast, and colon cancer. The purpose of this study was to discriminate the VOC patterns between cancerous and control groups based on metabolite profiling. Sampling was carried out using cell culture techniques. All culture flasks were incubated for up to 72 hours, and data collection started after 24 hours. Each sample run took 24 minutes to complete. The comparative metabolite patterns were identified by headspace solid-phase micro-extraction (HS-SPME) sampling coupled with gas chromatography-mass spectrometry (GC-MS). The main experimental variables, such as oven temperature and time, were optimized by response surface methodology (RSM) to obtain the optimal conditions. Volatiles were identified through the National Institute of Standards and Technology (NIST) mass spectral database and retention time libraries. To improve the reliability of the results and eliminate background noise, data from the 3rd to the 17th minute were selected for statistical analysis. Targeted metabolites, annotated as known compounds with a peak area greater than 0.5 percent, were highlighted and subsequently treated statistically. The volatiles produced contain hundreds to thousands of compounds; therefore, the data were reduced by chemometric analysis, such as principal component analysis (PCA), as a preliminary step before being subjected to a pattern classifier for identification of VOC samples. Volatile organic compound profiles were shown to be significantly distinguishable between the cancerous and control groups based on metabolite profiling.
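As an illustration of the preliminary chemometric step mentioned above, the sketch below autoscales a dummy peak-area matrix and projects it onto its first two principal components; the data, sample counts, and peak counts are invented, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical sketch of the preliminary PCA step: rows are samples
# (cancerous and control cell-line headspace runs), columns are GC-MS
# peak areas.  Dummy data only.

rng = np.random.default_rng(0)
peak_areas = rng.lognormal(mean=1.0, sigma=0.5, size=(20, 150))  # 20 samples x 150 peaks
labels = np.array(["cancer"] * 10 + ["control"] * 10)

# Autoscale each peak, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(peak_areas))

# The 2-D scores would then be inspected for group separation or passed to
# a pattern classifier (e.g. LDA or SVM) for sample identification.
for lab in ("cancer", "control"):
    centroid = scores[labels == lab].mean(axis=0)
    print(lab, "centroid on PC1/PC2:", np.round(centroid, 2))
```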

Keywords: in vitro cancer cell line, metabolite profiling, pattern recognition, volatile organic compounds

Procedia PDF Downloads 345
174 Kinematical Analysis of Tai Chi Chuan Players during Gait and Balance Test and Implication in Rehabilitation Exercise

Authors: Bijad Alqahtani, Graham Arnold, Weijie Wang

Abstract:

Background—Tai Chi Chuan (TCC) is a type of traditional Chinese martial art and is considered beneficial to physical fitness. Advanced motion analysis techniques are routinely used in clinical assessment; however, so far, little research has been done on the biomechanical assessment of TCC players in terms of gait and balance using motion analysis. Objectives—The aim of this study was to investigate whether TCC improves lower limb condition and balance ability using state-of-the-art motion analysis technologies, i.e. motion capture, electromyography, and force platforms. Methods—Twenty TCC participants (9 male, 11 female), aged 42-77 years and weighing 56.2-119 kg, and eighteen age-matched non-TCC participants (7 male, 11 female), aged 43-78 years and weighing 50-110 kg, were recruited as the control group. Gait and balance data were collected using Vicon Nexus® to obtain gait parameters and kinematic parameters of the hip, knee, and ankle joints of both limbs in three planes. Participants stood on force platforms to perform a single-leg balance test and were then asked to walk along a 10 m walkway at their comfortable speed. Participants performed 5 trials of single-leg balance on the dominant side, 3 trials of the four square step balance test, and 10 trials of walking. From the recorded trials, three good ones were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g. walking speed, cadence, and stride length, and joint parameters, e.g. joint angles, forces, and moments. Results—When the temporal-spatial variables of the TCC subjects were compared with those of the non-TCC subjects, a significant difference (p < 0.05) was found between the groups. Moreover, TCC participants showed significant differences in ankle, hip, and knee joint kinematics in the sagittal, coronal, and transverse planes; for example, in the transverse plane the ankle angle was 19.90±19.54 deg for TCC versus 15.34±6.50 deg for non-TCC, and the knee angle was 14.96±6.40 deg for TCC versus 17.63±5.79 deg for non-TCC. There was also a significant difference between groups in the single-leg balance test: TCC participants maintained single-leg stance longer (20.85±10.53 s) than the non-TCC group (13.39±8.78 s). No significant difference was found between groups in the four square step balance test. Conclusion—Our results showed significant differences between TCC and non-TCC participants in various aspects of gait analysis and balance testing. Based on biomechanical parameters such as joint kinematics, gait parameters, and single-leg stance time, Tai Chi Chuan may improve lower limb condition and reduce the risk of falls in the elderly.
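By way of illustration, the sketch below runs a two-sample comparison of the kind reported for the single-leg stance times. The values are synthetic draws loosely matching the reported means and standard deviations, and Welch's t-test is used only as an example, since the study's exact statistical test is not stated in the abstract.

```python
import numpy as np
from scipy import stats

# Illustrative between-group comparison on synthetic single-leg stance
# times (seconds); not the study data.

rng = np.random.default_rng(1)
tcc     = rng.normal(20.85, 10.53, size=20)   # TCC group (n = 20)
non_tcc = rng.normal(13.39,  8.78, size=18)   # control group (n = 18)

# Welch's t-test (unequal variances) for two independent groups.
t_stat, p_value = stats.ttest_ind(tcc, non_tcc, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}  (significant if p < 0.05)")
```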

Keywords: gait analysis, kinematics, single leg stance, Tai Chi Chuan

Procedia PDF Downloads 110
173 Remediation of Dye Contaminated Wastewater Using N, Pd Co-Doped TiO₂ Photocatalyst Derived from Polyamidoamine Dendrimer G1 as Template

Authors: Sarre Nzaba, Bulelwa Ntsendwana, Bekkie Mamba, Alex Kuvarega

Abstract:

The discharge of azo dyes such as Brilliant Black (BB) into water bodies has carcinogenic and mutagenic effects on humankind and the ecosystem. Conventional water treatment techniques fail to degrade these dyes completely, thereby posing further problems. Advanced oxidation processes (AOPs) are promising technologies for solving this problem. Anatase-type nitrogen-palladium (N, Pd) co-doped TiO₂ photocatalysts were prepared by a modified sol-gel method using amine-terminated polyamidoamine dendrimer generation 1 (PAMAM G1) as a template and source of nitrogen. The resultant photocatalysts were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), UV-Vis diffuse reflectance spectroscopy, photoluminescence spectroscopy (PL), Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy (RS), and thermal gravimetric analysis (TGA). The results showed that the calcination atmosphere played an important role in the morphology, crystal structure, spectral absorption, oxygen vacancy concentration, and visible-light photocatalytic performance of the catalysts. Anatase-phase particles ranging between 9 and 20 nm were confirmed by TEM and SEM analyses. The origin of the visible-light photocatalytic activity was attributed to both the elemental N and Pd dopants and the existence of oxygen vacancies. Co-doping shifted the absorption into the visible region of the solar spectrum. The visible-light photocatalytic activity of the samples was investigated by monitoring the photocatalytic degradation of Brilliant Black dye. Co-doped TiO₂ showed greater photocatalytic Brilliant Black degradation efficiency than singly doped N-TiO₂ or Pd-TiO₂ under visible-light irradiation. The highest reaction rate constant, 3.132 × 10⁻² min⁻¹, was observed for N, Pd co-doped TiO₂ (2% Pd). The results demonstrated that the N, Pd co-doped TiO₂ (2% Pd) sample could completely degrade the dye in 3 h, while commercial TiO₂ showed the lowest dye degradation efficiency (52.66%).
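For orientation, the reported rate constant corresponds to the usual pseudo-first-order treatment of photodegradation data, ln(C₀/C) = k·t. The sketch below fits k from an invented concentration-time series of roughly the same order as the reported value; it is not the study's data analysis code.

```python
import numpy as np

# Hypothetical pseudo-first-order kinetics fit for dye photodegradation:
# ln(C0/C) = k * t, with invented C/C0 data only loosely matching the
# reported apparent rate constant (~3.1e-2 min^-1).

t = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)   # irradiation time (min)
noise = 1.0 + 0.02 * np.random.default_rng(2).standard_normal(t.size)
C_over_C0 = np.exp(-3.1e-2 * t) * noise                      # dummy concentration ratio

# Linear fit of ln(C0/C) against t gives the apparent rate constant k.
k_app = np.polyfit(t, np.log(1.0 / C_over_C0), 1)[0]
half_life = np.log(2) / k_app
print(f"k_app ~ {k_app:.3e} min^-1, half-life ~ {half_life:.0f} min")
```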

Keywords: brilliant black, Co-doped TiO₂, polyamidoamine generation 1 (PAMAM G1), photodegradation

Procedia PDF Downloads 160
172 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea

Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin

Abstract:

Fishery lights on the sea surface can be detected by the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The DNB covers the spectral range of 500 to 900 nm and achieves high sensitivity. However, the DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishery lights and other surface lights are separated from cloud-reflected lunar light by a method using the DNB and the infrared band, in which detection limits are defined as a function of the brightness-temperature difference from the maximum temperature at each level of DNB radiance and of the contrast of DNB radiance against the background radiance. Fishing boats and structures on islands can be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites using the microwaves reflected by surface targets. SAR faces a tradeoff between spatial resolution and coverage when detecting small targets such as fishing boats. The distribution of fishing boats and island activities was detected with the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. Fishing boats were detected as single pixels of strongly scattering targets in the ScanSAR narrow mode, whose spatial resolution is 30 m. As the scattering signals depend significantly on look angle, the standard deviation of the scattered signals at each look angle was used as a threshold to separate signals from fishing boats and island structures from background noise. It was difficult to validate the targets detected by DNB against the SAR data because of a time lag of about 6 hours between the DNB observations near midnight and the SAR observations in the morning or evening. Temporal changes in island activities were detected as changes in the mean DNB intensity over a circular area sized to the scale of the activity. An increase in mean DNB intensity corresponded to the beginning of dredging, and subsequent changes in intensity indicated the end of reclamation and the following construction of facilities.
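To illustrate the kind of contrast- and brightness-temperature-based screening described above, here is a toy sketch on synthetic arrays; the threshold values, noise levels, and clear-sky test are invented and do not reproduce the authors' detection limits.

```python
import numpy as np

# Toy screening sketch (invented data and thresholds, not the authors'
# algorithm): a DNB pixel is flagged as a candidate fishing light when its
# radiance stands out from the scene background and the co-located IR
# brightness temperature is close to the scene maximum (likely cloud-free).

rng = np.random.default_rng(3)
dnb = rng.normal(5.0, 1.0, size=(200, 200))          # DNB radiance (arbitrary units)
bt  = 285.0 + rng.normal(0.0, 1.0, size=(200, 200))  # IR brightness temperature (K)
dnb[100, 100] += 40.0                                 # inject one bright "boat" pixel

background = np.median(dnb)
sigma = dnb.std()
bt_max = bt.max()

contrast_ok = dnb > background + 5.0 * sigma   # strong contrast against background
clear_sky   = (bt_max - bt) < 10.0             # small BT deficit suggests few clouds
candidates = np.argwhere(contrast_ok & clear_sky)
print("candidate fishing-light pixels:", candidates.tolist())
```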

Keywords: day night band, SAR, fishery, South China Sea

Procedia PDF Downloads 215