Search results for: non-linear dynamics features
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7766

806 Modulation of Receptor-Activation Due to Hydrogen Bond Formation

Authors: Sourav Ray, Christoph Stein, Marcus Weber

Abstract:

A new class of drug candidates, initially derived from mathematical modeling of ligand-receptor interactions, activates the μ-opioid receptor (MOR) preferentially at acidic extracellular pH levels, as present in injured tissues. This is of commercial interest because it may preclude the adverse effects of conventional MOR agonists like fentanyl, which include but are not limited to addiction, constipation, sedation, and apnea. Animal studies indicate the importance of taking the pH value of the chemical environment of the MOR into account when designing new drugs. Hydrogen bonds (HBs) play a crucial role in stabilizing protein secondary structure and molecular interactions, such as ligand-protein interaction. These bonds may depend on the pH value of the chemical environment. For the MOR, the antagonist naloxone and the agonist [D-Ala2,N-Me-Phe4,Gly5-ol]-enkephalin (DAMGO) form HBs with the ionizable residue HIS 297 at physiological pH to modulate signaling. However, such interactions were markedly reduced at acidic pH. Although fentanyl-induced signaling is also diminished at acidic pH, HBs with the HIS 297 residue are not observed at either acidic or physiological pH for this strong agonist of the MOR. Molecular dynamics (MD) simulations can provide greater insight into the interaction between the ligand of interest and the HIS 297 residue. Amino acid protonation states were adjusted to model the difference in system acidity. Unbiased and unrestrained MD simulations were performed, with the ligand in the proximity of the HIS 297 residue. Ligand-receptor complexes were embedded in a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylcholine (POPC) bilayer to mimic the membrane environment. The occurrence of HBs between the different ligands and the HIS 297 residue of the MOR at acidic and physiological pH values was tracked across the various simulation trajectories. No HB formation was observed between fentanyl and the HIS 297 residue at either acidic or physiological pH. Naloxone formed some HBs with HIS 297 at pH 5, but no such HBs were noted at pH 7. Interestingly, DAMGO displayed an opposite yet more pronounced HB formation trend compared to naloxone. Whereas only a marginal number of HBs could be observed even at pH 5, HBs with HIS 297 were more stable and widely present at pH 7. HB formation thus plays no role in the interaction of fentanyl, and only a marginal role in that of naloxone, with the HIS 297 residue of the MOR. However, HBs play a significant role in the DAMGO-HIS 297 interaction. Post DAMGO administration, these HBs might be crucial for the remediation of opioid tolerance and the restoration of opioid sensitivity. Although experimental studies concur with our observations regarding the influence of HB formation on the fentanyl and DAMGO interactions with HIS 297, the same could not be conclusively stated for naloxone. Therefore, some other supplementary interactions might be responsible for the modulation of MOR activity by naloxone binding at pH 7 but not at pH 5. Further elucidation of the mechanism of naloxone action on the MOR could assist in the formulation of cost-effective naloxone-based treatments of opioid overdose or opioid-induced side effects.
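As an illustration of the hydrogen-bond tracking step described above, the following is a minimal sketch using the MDAnalysis package; the file names, ligand residue name, and selection strings are assumptions for illustration, not the authors' actual setup.

```python
# Hedged sketch: counting ligand-HIS297 hydrogen bonds across an MD trajectory.
# Topology/trajectory files and the ligand residue name are hypothetical.
import MDAnalysis as mda
from MDAnalysis.analysis.hydrogenbonds import HydrogenBondAnalysis

u = mda.Universe("mor_damgo_popc.psf", "mor_damgo_popc_ph7.dcd")  # hypothetical files

hbonds = HydrogenBondAnalysis(
    u,
    donors_sel="resname DAMG",                     # hypothetical ligand residue name
    hydrogens_sel="resname DAMG and name H*",
    acceptors_sel="resid 297 and name ND1 NE2",    # imidazole nitrogens of HIS 297
    d_a_cutoff=3.5,                                # donor-acceptor distance cutoff (angstrom)
    d_h_a_angle_cutoff=150.0,                      # donor-H-acceptor angle cutoff (degrees)
)
hbonds.run()

# One row per hydrogen bond per frame: frame, donor, hydrogen, acceptor, distance, angle
print(hbonds.results.hbonds.shape[0], "H-bond observations across the trajectory")
print(hbonds.count_by_time()[:10])                 # H-bond count in the first ten frames
```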

Keywords: effect of system acidity, hydrogen bond formation, opioid action, receptor activation

Procedia PDF Downloads 176
805 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data

Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau

Abstract:

Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. Our pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of three Python scripts that can all be easily accessed through a Python Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared our automated pipeline outputs with manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that our automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics accurately reflecting changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using binary thresholding and grayscale image conversion to allow computer vision to better distinguish between cells and non-cells. Its results were comparable to manually analyzed results but with significantly reduced acquisition times of 2-5 minutes per recording versus 10-20 minutes per recording. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer vision-based analysis of calcium imaging recordings from neuronal cell bodies in neuronal cell cultures. Our new goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs.
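A minimal sketch of the contour-detection step the abstract describes, using OpenCV; the threshold value, area filter, and file name are illustrative assumptions, not the pipeline's actual parameters.

```python
# Hedged sketch: grayscale conversion, binary thresholding, contour detection,
# and per-contour mean fluorescence, as described in the abstract.
import cv2
import numpy as np

frame = cv2.imread("calcium_frame.png")            # hypothetical single frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)     # grayscale conversion
_, binary = cv2.threshold(gray, 50, 255, cv2.THRESH_BINARY)  # binary thresholding

# Detect candidate cell-body contours
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cells = [c for c in contours if cv2.contourArea(c) > 30]  # drop non-cell specks

# Mean fluorescence inside each contour
for i, c in enumerate(cells):
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.drawContours(mask, [c], -1, 255, thickness=cv2.FILLED)
    mean_f = cv2.mean(gray, mask=mask)[0]
    print(f"cell {i}: area={cv2.contourArea(c):.0f} px, mean fluorescence={mean_f:.1f}")
```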

Keywords: calcium imaging, computer vision, neural activity, neural networks

Procedia PDF Downloads 83
804 Serological Evidence of Brucella spp, Coxiella burnetii, Chlamydophila abortus, and Toxoplasma gondii Infections in Sheep and Goat Herds in the United Arab Emirates

Authors: Nabeeha Hassan Abdel Jalil, Robert Barigye, Hamda Al Alawi, Afra Al Dhaheri, Fatma Graiban Al Muhairi, Maryam Al Khateri, Nouf Al Alalawi, Susan Olet, Khaja Mohteshamuddin, Ahmad Al Aiyan, Mohamed Elfatih Hamad

Abstract:

A serological survey was carried out to determine the seroprevalence of Brucella spp, Coxiella burnetii, Chlamydophila abortus, and Toxoplasma gondii in sheep and goat herds in the UAE. A total of 915 blood samples were collected: 437 (n = 222 sheep; n = 215 goats) from livestock farms in the Emirates of Abu Dhabi, Dubai, Sharjah, and Ras Al-Khaimah (RAK), and an additional 478 (n = 244 sheep; n = 234 goats) from the Al Ain livestock central market. Samples were tested by indirect ELISA for pathogen-specific antibodies, with the Brucella antibodies being further corroborated by the Rose Bengal agglutination test. Seropositivity for the four pathogens was variably documented in sheep and goats from the study area. The overall livestock farm prevalence rates for Brucella spp, C. burnetii, C. abortus, and T. gondii were 2.7%, 27.9%, 8.1%, and 16.7% for sheep, and 0.0%, 31.6%, 9.3%, and 5.1% for goats, respectively. Additionally, the seroprevalence rates of Brucella spp, C. burnetii, C. abortus, and T. gondii in samples from the livestock market were 7.4%, 21.7%, 16.4%, and 7.0% for sheep, and 0.9%, 32.5%, 19.2%, and 11.1% for goats, respectively. Overall, sheep had 12.59 times higher odds than goats of testing seropositive for Brucella spp (OR, 12.59 [95% CI 2.96-53.6]) but were less likely to be positive for C. burnetii antibodies (OR, 0.73 [95% CI 0.54-0.97]). Notably, the differences in the seroprevalence rates of C. abortus and T. gondii between sheep and goats were not statistically significant (p > 0.05). The present data indicate that all four study pathogens are present in sheep and goat populations in the UAE, where coxiellosis is apparently the most seroprevalent, followed by chlamydophilosis, toxoplasmosis, and brucellosis. While sheep from the livestock market were more likely than those from farms to be Brucella-seropositive, the overall exposure risk of C. burnetii appears to be greater for goats than sheep. As animals from the livestock market were more likely to be seropositive to Chlamydophila spp, it is possible that, under UAE animal production conditions at least, coxiellosis and chlamydophilosis are more likely to increase the culling rate of domesticated small ruminants than toxoplasmosis and brucellosis. While anecdotal reports have previously suggested that brucellosis may be a significant animal health risk in the UAE, the present data suggest C. burnetii, C. abortus, and T. gondii to be more significant pathogens of sheep and goats in the country. Despite this possibility, the extent to which these pathogens may be contributing nationally to reproductive failure in sheep and goat herds is not known and needs to be investigated. These agents may also carry a zoonotic risk that needs to be investigated in risk groups such as farm workers and slaughterhouse personnel. An ongoing study is evaluating the seroprevalence of bovine coxiellosis in the Emirate of Abu Dhabi, and the data thereof will further elucidate the broader epidemiological dynamics of the disease in the national herd.
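For readers unfamiliar with the odds-ratio statistics reported above, a minimal sketch of the calculation follows; the 2x2 counts are placeholders for illustration, not the study's data.

```python
# Hedged sketch: odds ratio with a Woolf 95% confidence interval for a
# species-by-serostatus 2x2 table. Counts below are hypothetical.
import numpy as np
from scipy import stats

#                  seropositive  seronegative
table = np.array([[34, 432],     # sheep (hypothetical counts)
                  [ 2, 447]])    # goats (hypothetical counts)

odds_ratio, p_value = stats.fisher_exact(table)

# Woolf 95% confidence interval on log(OR)
log_or = np.log(odds_ratio)
se = np.sqrt((1.0 / table).sum())
ci_low, ci_high = np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se)
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), p = {p_value:.4f}")
```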

Keywords: Brucella spp, Chlamydophila abortus, goat, sheep, Toxoplasma gondii, UAE

Procedia PDF Downloads 206
803 Eco-Friendly Silicone/Graphene-Based Nanocomposites as Superhydrophobic Antifouling Coatings

Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Hekmat R. Madian, Sherif A. El-Safty, Mohamed A. Shenashen

Abstract:

After the 2003 prohibition on employing TBT-based antifouling coatings, polysiloxane antifouling nano-coatings have gained in popularity as environmentally friendly and cost-effective replacements. A series of non-toxic polydimethylsiloxane (PDMS) nanocomposites filled with nanosheets of graphene oxide (GO) decorated with magnetite nanospheres (GO-Fe₃O₄ nanospheres) were developed and cured via a catalytic hydrosilylation method. Various GO-Fe₃O₄ hybrid concentrations were mixed with the silicone resin via a solution casting technique to evaluate the structure-property connection. To generate GO nanosheets, a modified Hummers method was applied. A simple co-precipitation method was used to make spherical magnetite particles under inert nitrogen. Hybrid GO-Fe₃O₄ composite fillers were developed by a simple ultrasonication method. A superhydrophobic PDMS/GO-Fe₃O₄ nanocomposite surface with micro/nano-roughness, reduced surface free energy (SFE), and high fouling release (FR) efficiency was achieved. The physical, mechanical, and anticorrosive features of the virgin and GO-Fe₃O₄-filled nanocomposites were investigated. The synergistic effects of the well-dispersed GO-Fe₃O₄ hybrid on the water repellency and surface topological roughness of the PDMS/GO-Fe₃O₄ nanopaints were extensively studied. The addition of the GO-Fe₃O₄ hybrid fillers up to 1 wt.% could increase the coating's water contact angle (158° ± 2°), minimize its SFE to 12.06 mN/m, develop outstanding micro/nano-roughness, and improve its bulk mechanical and anticorrosion properties. Several microorganisms were employed to examine the fouling resistance of the coated specimens for one month. Silicone coatings filled with 1 wt.% GO-Fe₃O₄ nanofiller showed the lowest biodegradability percentage among all the tested microorganisms, whereas coatings with 5 wt.% GO-Fe₃O₄ nanofiller possessed the highest biodegradability percentage with all the microorganisms. We successfully developed a non-toxic and low-cost nanostructured FR composite coating with high antifouling resistance, reproducible superhydrophobic character, and enhanced service time for maritime navigation.

Keywords: silicone antifouling, environmentally friendly, nanocomposites, nanofillers, fouling repellency, hydrophobicity

Procedia PDF Downloads 115
802 Metalorganic Chemical Vapor Deposition Overgrowth on the Bragg Grating for Gallium Nitride Based Distributed Feedback Laser

Authors: Junze Li, M. Li

Abstract:

Laser diodes fabricated from the III-nitride material system are emerging solutions for next-generation telecommunication systems and optical clocks based on Ca at 397 nm, Rb at 420.2 nm, and Yb at 398.9 nm combined with 556 nm. Most applications, such as communication systems and laser cooling, require single-longitudinal-mode lasers with very narrow linewidth and compact size. In this case, the GaN-based distributed feedback (DFB) laser diode is one of the most effective candidates, as lasers with gratings are known to operate with narrow spectra as well as high power and efficiency. Given the wavelength range, the period of a first-order diffraction grating is under 100 nm, and the realization of such gratings is technically difficult due to the narrow linewidth and the demands of high-quality nitride overgrowth on the Bragg grating. Some groups have reported GaN DFB lasers with high-order distributed feedback surface gratings, which avoids the overgrowth. However, the coupling strength is generally lower than that obtained with a Bragg grating embedded into the waveguide within the GaN laser structure by two-step epitaxy. Therefore, the overgrowth-on-grating technology needs to be studied and optimized. Here we propose to fabricate the fine step-shape structure of a first-order grating by nanoimprint lithography combined with inductively coupled plasma (ICP) dry etching, and then to overgrow a high-quality AlGaN film by metalorganic chemical vapor deposition (MOCVD). A series of gratings with different periods, depths, and duty ratios were designed and fabricated to study the influence of the grating structure on the nano-heteroepitaxy. Moreover, we observe the nucleation and growth process by step-by-step growth to study the growth mode for nitride overgrowth on the grating, under the condition that the grating period is larger than the metal migration length on the surface. The AFM images demonstrate that a smooth surface of the AlGaN film is achieved, with an average roughness of 0.20 nm over 3 × 3 μm². The full width at half maximum (FWHM) of the (002) reflection in the XRD rocking curve is 278 arcsec for the AlGaN film, and the Al content within the film is 8% according to the XRD mapping measurement, in accordance with the design values. By observing samples with growth times of 200 s, 400 s, and 600 s, the growth mode is summarized in the following steps: initially, nucleation is evenly distributed on the grating structure, as the migration length of Al atoms is low; then, AlGaN grows along the grating top surface; finally, the AlGaN film is formed by lateral growth. This work contributes to realizing a GaN DFB laser by fabricating the grating and overgrowing on the nano-grating patterned substrate at wafer scale; moreover, the growth dynamics have been analyzed as well.

Keywords: DFB laser, MOCVD, nanoepitaxy, III-nitride

Procedia PDF Downloads 191
801 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g. a 'failure' or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, and by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal axis. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow. Ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) from the 2D digital photograph with respect to a given viewing point, we can constrain the 3D camera position to the surface of a sphere in the scene. Depending on what we know about the ladder, we can enforce another independent constraint on the possible camera positions, which enables us to narrow the possibilities even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder on a horizontally flat plane resting against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro and 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical axis. For each case, we calculated camera positions and ladder angles using this method and cross-compared them against their respective 'true' values.
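A minimal numerical sketch of the geometric idea above: for three collinear scene points, the ratio of their projected pixel separations depends on where the camera is, which is the quantity the workflow measures. All coordinates, intrinsics, and the look-at convention here are illustrative assumptions, not the paper's method.

```python
# Hedged sketch: pinhole projection of three collinear ladder points; the
# image-distance ratio |a-b|/|b-c| varies with camera position, so a measured
# ratio constrains the camera (per the abstract, to the surface of a sphere).
import numpy as np

def look_at(cam, target=np.zeros(3)):
    """Rotation matrix whose rows are the camera's right/up/forward axes."""
    fwd = target - cam; fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, np.array([0.0, 0.0, 1.0])); right = right / np.linalg.norm(right)
    up = np.cross(right, fwd)
    return np.stack([right, up, fwd])

def pixel_ratio(cam, pts, f=1000.0):
    """Ratio |a-b|/|b-c| of the projected images of three collinear points."""
    R = look_at(cam)
    uv = []
    for p in pts:
        x, y, z = R @ (p - cam)          # camera coordinates; z is depth
        uv.append(np.array([f * x / z, f * y / z]))
    a, b, c = uv
    return np.linalg.norm(a - b) / np.linalg.norm(b - c)

# Three equally spaced collinear points on a ladder erected at 75 degrees
ang = np.radians(75.0)
rail = np.array([np.cos(ang), 0.0, np.sin(ang)])
pts = [0.0 * rail, 2.0 * rail, 4.0 * rail]      # foot, midpoint, top (metres)

for cam in ([3.0, 4.0, 1.6], [6.0, 1.0, 1.6], [2.0, 8.0, 1.6]):
    print(cam, "->", round(pixel_ratio(np.array(cam), pts), 4))
```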

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinearity, cameras, photographs

Procedia PDF Downloads 53
800 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. In the first stage, the operation of the different energy systems is calculated in simulation models in terms of the resulting final energy demands; these results then serve as input for a second-stage MILP optimization, in which the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures thanks to the efficiency of MILP solvers, but it necessitates simplifying the operation of the building energy system. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
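To make the MILP idea concrete, here is a deliberately tiny sketch of a budget-constrained measure-selection problem using the PuLP library; the buildings, costs, emission savings, budget, and coupling constraint are invented placeholders, and the actual model described above is far richer (multi-year pathways, system operation, dimensioning).

```python
# Hedged sketch: pick modernization measures per building to maximise CO2
# savings under a budget cap. All numbers are toy values.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

buildings = ["B1", "B2", "B3"]
cost = {("B1", "envelope"): 80, ("B1", "heat_pump"): 45, ("B1", "pv"): 30,
        ("B2", "envelope"): 60, ("B2", "heat_pump"): 50, ("B2", "pv"): 25,
        ("B3", "envelope"): 90, ("B3", "heat_pump"): 40, ("B3", "pv"): 35}  # kEUR
co2_saving = {k: v * 0.4 for k, v in cost.items()}   # t CO2/a, toy proportionality
budget = 150                                          # kEUR per planning period

prob = LpProblem("modernization_pathway", LpMaximize)
x = {k: LpVariable(f"x_{k[0]}_{k[1]}", cat=LpBinary) for k in cost}

prob += lpSum(co2_saving[k] * x[k] for k in cost)       # objective: CO2 savings
prob += lpSum(cost[k] * x[k] for k in cost) <= budget   # budget cap
for b in buildings:  # toy coupling: heat pump only after envelope insulation
    prob += x[(b, "heat_pump")] <= x[(b, "envelope")]

prob.solve(PULP_CBC_CMD(msg=False))
print("selected measures:", [k for k in cost if x[k].value() == 1])
```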

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 36
799 Evaluation of Systemic Immune-Inflammation Index in Obese Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

A growing list of cancers might be influenced by obesity. Obesity is associated with an increased risk for the occurrence and development of some cancers. Inflammation can lead to cancer; it is one of the characteristic features of cancer and plays a critical role in cancer development. C-reactive protein (CRP) is under evaluation as one of the new and simple prognostic factors in patients with metastatic renal cell cancer. Obesity can predict and promote systemic inflammation in healthy adults, and BMI is correlated with hs-CRP. In this study, SII index and CRP values were evaluated in children with normal BMI and those within the ranges of different obesity grades, to detect any tendency towards cancer in pediatric obesity. A total of one hundred and ninety-four children participated in the study: thirty-five children with normal BMI, twenty overweight (OW), forty-seven obese (OB), and ninety-two morbid obese (MO). Age- and sex-matched groups were constituted using BMI-for-age percentiles. Informed consent was obtained, and Ethical Committee approval was taken. Weight, height, and waist, hip, head, and neck circumferences of the children were measured. Complete blood count and C-reactive protein analyses were performed. Statistical analyses were performed using SPSS; the threshold for statistical significance was p ≤ 0.05. SII index values increased progressively from normal weight (NW) to MO children. There was a statistically significant difference between NW and OB as well as MO children. No significant difference was observed between NW and OW children; however, a correlation was observed between these two groups. MO constituted the only group that exhibited a statistically significant correlation between the SII index and CRP. Obesity-related bladder, kidney, cervical, liver, colorectal, and endometrial cancers are still being investigated. Obesity, characterized as chronic low-grade inflammation, is a crucial risk factor for colon cancer. Elevated childhood BMI values may be indicative of processes leading to cancer that are initiated early in life. Prevention of childhood adiposity may decrease the cancer incidence in adults. To the authors' best knowledge, this study is the first to introduce SII index values in obesity of varying degrees of severity. It is suggested that this index seems to be affected at all stages of obesity, with an increasing tendency, and may point out the concomitant status of obesity and cancer starting from very early periods of life.
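For reference, the systemic immune-inflammation index is conventionally computed from the complete blood count as platelet count × neutrophil count / lymphocyte count; a minimal sketch follows, with illustrative counts rather than measurements from this study.

```python
def sii(platelets: float, neutrophils: float, lymphocytes: float) -> float:
    """Systemic immune-inflammation index: P x N / L (counts in 10^9 cells/L)."""
    return platelets * neutrophils / lymphocytes

# Illustrative values only, not data from this study
print(sii(platelets=320, neutrophils=4.5, lymphocytes=2.4))  # -> 600.0
```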

Keywords: children, C-reactive protein, systemic immune-inflammation index, obesity

Procedia PDF Downloads 179
798 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food

Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite

Abstract:

The most complicated step in the determination of volatile compounds in complex matrices is the separation of the analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of this, the determination sensitivity of the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated above the boiling point of the solvent. In 2018 it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. These features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, microwave assisted extraction conditions and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the prepared technique were determined. The suggested technique was applied for the determination of hexanal in potato chips and other fat-rich food.

Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction

Procedia PDF Downloads 195
797 The Origin and Development of Entrepreneurial Cognition: The Impact of Entrepreneurship Education on Cognitive Style and Subsequent Entrepreneurial Intention

Authors: Salma Hussein, Hadia Aziz

Abstract:

Entrepreneurship plays a significant and imperative role in economic and social growth and is therefore stimulated and encouraged by governments and academics as a means of creating job opportunities, innovation, and wealth. Given its importance, it is essential to identify the factors that encourage and promote entrepreneurial behavior. This is particularly true for developing countries, where the need for entrepreneurial development is high and resources are scarce; thus, there is a need to maximize the outcomes of investing in entrepreneurial development. Entrepreneurial education has been the center of attention and interest among researchers, as it is believed to be one of the most critical factors in promoting entrepreneurship over the long run. Accordingly, the urgency to encourage entrepreneurship education and develop an enterprise culture is now a main concern in Egypt. Researchers have postulated that cognition has the potential to make a significant contribution to the study of entrepreneurship. One such contribution that future studies need to consider in entrepreneurship research is the cognitive processes that occur within the individual, such as cognitive style. During the past decade, there has been increasing interest in cognitive style among researchers and practitioners, specifically in the innovation and entrepreneurship field. Few studies examine the antecedent dynamics that fuel entrepreneurial cognition in order to better understand its role in entrepreneurship. Moreover, while many studies have been conducted on entrepreneurship education, scholars are still hesitant regarding the teachability of entrepreneurship due to the lack of clear evidence of its impact. Furthermore, the relation between cognitive style and entrepreneurial intentions has yet to be discovered. Hence, this research aims to test the impact of entrepreneurship education on cognitive style and subsequent intention, in order to evaluate whether students' and potential entrepreneurs' cognitive styles are affected by entrepreneurial education and in turn affect their intentions. Understanding the impact of entrepreneurship education on ways of thinking and intention is critical for the development of effective education and training in the entrepreneurship field. It is proposed that students who are exposed to entrepreneurship education programs will have a more balanced thinking style compared to students who are not exposed. Moreover, it is hypothesized that students with a balanced cognitive style will exhibit higher levels of entrepreneurial intentions than students with an intuitive or analytical cognitive style. Finally, it is proposed that non-formal entrepreneurship education will be more positively associated with entrepreneurial intentions than formal entrepreneurship education. The proposed methodology is a pre- and post-test experimental design. The sample will include young adults aged 18 to 35 years, comprising both students enrolled in formal entrepreneurship education programs in private universities and young adults willing to participate in non-formal entrepreneurship education programs in Egypt. Attention is now given to how far individuals are analytical or intuitive in their cognitive style, to what extent it is possible to have a balanced thinking style, and whether or not this can be aided by training or education. Therefore, there is an urgent need for further research on entrepreneurial cognition in educational contexts.

Keywords: cognitive style, entrepreneurial intention, entrepreneurship education, experimental design

Procedia PDF Downloads 203
796 Self-Assembled ZnFeAl Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts

Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja

Abstract:

Ibuprofen is a non-steroidal anti-inflammatory drug (NSAID) and is among the most frequently detected pharmaceuticals in environmental samples and among the most widespread drugs in the world. Its concentration in the environment is reported to be between 10 and 160 ng L⁻¹. In order to improve the abatement efficiency of this compound for water source prevention and reclamation, the development of innovative technologies is mandatory. AOPs (advanced oxidation processes) are known to be highly efficient towards the oxidation of organic pollutants. Among the promising combined treatments, photo-Fenton processes using layered double hydroxides (LDHs) have attracted significant consideration, especially due to their compositional flexibility, high surface area, and tailored redox features. This work presents self-supported Fe, Mn, or Ti on ZnFeAl LDHs, obtained by co-precipitation followed by the reconstruction method, as novel, efficient photo-catalysts for Fenton-like catalysis. Fe, Mn, or Ti/ZnFeAl LDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory agent ibuprofen, by photocatalysis and photo-Fenton catalysis, respectively, by means of a lab-scale system consisting of a batch reactor equipped with a UV lamp (17 W). The present study comparatively presents the degradation of ibuprofen in aqueous solution under UV light irradiation using four different types of LDHs. The newly prepared Ti/ZnFeAl 4:1 catalyst gives the best degradation performance: after 60 minutes of light irradiation, the ibuprofen removal efficiency reaches 95%. The slowest degradation of the ibuprofen solution occurs with the Fe/ZnFeAl 4:1 LDH (67% removal efficiency after 60 minutes of the process). The evolution of ibuprofen degradation during the photo-Fenton process is also studied using Ti/ZnFeAl 2:1 and 4:1 LDHs in the presence and absence of H₂O₂. It is found that after 60 min, the use of Ti/ZnFeAl 4:1 LDH in the presence of 100 mg/L H₂O₂ leads to the fastest degradation of the ibuprofen molecule. After 120 min, both the Ti/ZnFeAl 4:1 and 2:1 catalysts reach the same removal efficiency (98%). In the absence of H₂O₂, ibuprofen degradation reaches only 73% removal efficiency after 120 min of the degradation process. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.

Keywords: layered double hydroxide, advanced oxidation process, micropollutant, heterogeneous Fenton

Procedia PDF Downloads 230
795 Doing Durable Organisational Identity Work in the Transforming World of Work: Meeting the Challenge of Different Workplace Strategies

Authors: Theo Heyns Veldsman, Dieter Veldsman

Abstract:

Organisational Identity (OI) refers to who and what the organisation is, what it stands for and does, and what it aspires to become. OI explores the perspectives of how we see ourselves, are seen by others, and aspire to be seen. As a rationale, it provides the 'why' for the organisation's continued existence. The most widely accepted differentiating features of OI are encapsulated in the organisation's core, distinctive, differentiating, and enduring attributes. OI finds its concrete expression in the organisation's Purpose, Vision, Strategy, Core Ideology, and Legacy. In the emerging new order infused by hyper-turbulence and hyper-fluidity, the VICCAS world, OI provides a secure anchor and steady reference point for the organisation, particularly given the growing widespread focus on Purpose, which is indicative of the organisation's sense of social citizenship. However, the transforming world of work (TWOW) - particularly the potent mix of ongoing disruptive innovation, the 4th Industrial Revolution, and the gig economy, together with the totally unpredicted COVID-19 pandemic - has resulted in the consequential adoption of different workplace strategies by organisations in terms of how, where, and when work takes place. Different employment relations (transient to permanent), work locations (on-site to remote), work time arrangements (full-time at work to flexible work schedules), and technology enablement (face-to-face to virtual) now form the basis of the employer/employee relationship. The different workplace strategies, fuelled by the demands of the TWOW, pose a substantive challenge to organisations in doing durable OI work able to fulfil OI's critical attributes of core, distinctive, differentiating, and enduring. OI work is contained in the ongoing, reciprocally interdependent stages of sense-breaking, sense-giving, internalisation, enactment, and affirmation. The objective of our paper is to explore how to do durable OI work relative to different workplace strategies in the TWOW. Using a conceptual-theoretical approach from a practice-based orientation, the paper addresses the following topics: it distinguishes different workplace strategies based upon a time/place continuum; explicates stage-wise the differential organisational content and process consequences of these strategies for durable OI work; indicates the critical success factors of durable OI work under these differential conditions; recommends guidelines for OI work relative to the TWOW; and points out the ethical implications of all of the above.

Keywords: organisational identity, workplace strategies, new world of work, durable organisational identity work

Procedia PDF Downloads 200
794 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms

Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson

Abstract:

This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms, all utilizing common data acquisition, data processing, and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed either from a hydrodynamic bottom-following wing towed from a surface vessel or from a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and metadata for remote processing, analysis, and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings, as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
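The point-dipole forward model underlying the dipole-modeling classification mentioned above is standard; a minimal sketch follows, with an illustrative source moment, sensor offset, and ambient field that are not taken from the paper.

```python
# Hedged sketch: total-field anomaly of a point magnetic dipole as seen by a
# scalar magnetometer. B = mu0/(4*pi) * (3(m.rhat)rhat - m) / r^3. SI units.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r, m):
    """Magnetic flux density B (tesla) at offset r (m) from a dipole of moment m (A*m^2)."""
    r = np.asarray(r, dtype=float)
    m = np.asarray(m, dtype=float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * np.dot(m, rhat) * rhat - m) / rn**3

# Anomaly 2 m from a buried object (ambient field and moment are illustrative)
b_earth = np.array([0.0, 18e-6, -45e-6])                 # ~48.5 uT ambient field
b_anom = dipole_field(r=[0.5, 0.0, 2.0], m=[0.0, 0.0, 0.5])
total = np.linalg.norm(b_earth + b_anom) - np.linalg.norm(b_earth)
print(f"total-field anomaly: {total * 1e9:.1f} nT")
```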

Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection

Procedia PDF Downloads 465
793 Correlation between Body Mass Dynamics and Weaning in Eurasian Lynx (Lynx lynx L, 1758)

Authors: A. S. Fetisova, M. N. Erofeeva, G. S. Alekseeva, K. A. Volobueva, M. D. Kim, S. V. Naidenko

Abstract:

Weaning is characterized by the transition from milk to solid food. In some species, such changes in diet are fast; in others, gradual. The reasons for the start of weaning are well understood: changes in milk composition and a decrease in maternal behavior push cubs to search for additional sources of nutrients. In nature, females have many opportunities to wean offspring in case of a lack of resources. In contrast, under controlled conditions the possibility of delayed weaning exists, and a delay of weaning can lead to overspending of maternal resources. In addition, the main causes of the end of weaning are not so obvious. Near the end of weaning, the behavior of offspring depends on many factors: the intensity of maternal behavior, the reduction of milk abundance, brood size, physiological status, and body mass. During the pre-weaning period, the dynamics of body mass are strongly connected with milk intake. Based on that fact, could body mass be one of the signals for the end of milk feeding? It is known that some animals usually wean their offspring when the juveniles achieve a body mass in some proportion to the adult weight. In turn, we put forward the hypothesis that a decrease in growth rates causes the delay of weaning in Eurasian lynxes (Lynx lynx). To explore the hypothesis, we compared the dynamics of body mass with the duration of milk suckling. First, to get information about the duration of suckling, we visually observed 8 lynx broods from 30 to 120 days postpartum. During each 4-hour observation we registered the start and the end of suckling acts and then calculated the total duration of this behavior. To get the dynamics of body mass, kittens were weighed once a week. The duration of suckling varied from 3076.19 ± 1408.60 to 422.54 ± 285.38 seconds, while body mass gain changed from 247.35 ± 26.49 to 289.41 ± 122.35 grams. The results of a Kendall tau correlation test (N = 96; p < 0.05) showed a negative correlation (τ = −0.36) between the duration of suckling and the body mass of lynx kittens. In general, the duration of suckling increases in response to a decrease in body mass gain, with a slight delay. In early weaning, from 30 to 58 days, the duration of suckling decreases gradually, as does the body mass gain. During the weaning period, the negative correlation between suckling time and body mass becomes tighter. Although throughout weaning the consumption of solid food begins to prevail over milk intake, the correlation persists until the end of weaning (90-105 days) and after it. In this way, weaning in Eurasian lynxes is not a part of ontogenesis controlled only by maternal behavior; it seems to be a flexible process influenced by various factors, including changes in growth rates. It is necessary to continue investigations to determine the critical value of body mass that marks the safe moment to stop milk feeding. Understanding such details of ontogenesis is very important for organizing procedures aimed at the reproduction of mammals ex situ and the conservation of endangered species.
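A minimal sketch of the correlation test reported above, using SciPy; the paired arrays are placeholders for illustration, not the study's 96 observations.

```python
# Hedged sketch: Kendall's tau between suckling duration and body-mass gain.
from scipy import stats

suckling_s  = [3076, 2810, 2400, 1950, 1500, 1100, 800, 620, 500, 423]  # s (illustrative)
mass_gain_g = [247, 255, 250, 265, 262, 274, 270, 283, 280, 289]        # g (illustrative)

tau, p_value = stats.kendalltau(suckling_s, mass_gain_g)
print(f"tau = {tau:.2f}, p = {p_value:.4f}")  # the study reports tau = -0.36, p < 0.05
```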

Keywords: body mass, lynx, milk feeding, weaning

Procedia PDF Downloads 22
792 “I” on the Web: Social Penetration Theory Revised

Authors: Dionysis Panos, Department of Communication and Internet Studies, Cyprus University of Technology

Abstract:

The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is "intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is being negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, and new media and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. The main conclusions include: (1) There is a clear and evidenced shift in users' perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras; the role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed the nature of the Internet itself dramatically, transforming it from a place reserved for "elite users / technical knowledge keepers" into a place of "open sociability" for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the "audience" we address in interpersonal communication: the previous "general and unknown audience" of personal home pages was converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the "private" and "public" nature of personal information has changed in a fundamental way. (5) The distinctive features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model for understanding the way interpersonal communication evolves, based on a revision of social penetration theory, is proposed here.

Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information

Procedia PDF Downloads 374
791 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data availability makes all the more difficult. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase of whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
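To make the regression-versus-classification design question tangible, here is a sketch on synthetic stand-in data comparing the two framings with scikit-learn; the features, inspection limit, and models are assumptions for illustration, not the paper's setup.

```python
# Hedged sketch: predict a continuous leakage volume flow (regression) versus
# the pass/fail inspection decision (classification) from the same features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                        # process features (synthetic)
leakage = 2.0 + X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.3, size=2000)
fail = (leakage > 3.0).astype(int)                    # assumed inspection limit

X_tr, X_te, y_tr, y_te, f_tr, f_te = train_test_split(X, leakage, fail, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, f_tr)

print("regression MAE:", mean_absolute_error(y_te, reg.predict(X_te)))
print("regression->threshold accuracy:", accuracy_score(f_te, reg.predict(X_te) > 3.0))
print("direct classification accuracy:", accuracy_score(f_te, clf.predict(X_te)))
```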

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 145
790 The Diary of Dracula, by Marin Mincu: Inquiries into a Romanian 'Book of Wisdom' as a Fictional Counterpart for Corpus Hermeticum

Authors: Lucian Vasile Bagiu, Paraschiva Bagiu

Abstract:

The novel, written in Italian and published in Italy in 1992 by the Romanian scholar Marin Mincu, is meant for the foreign reader, aiming apparently at a better knowledge of the historical character of Vlad the Impaler (Vlad Dracul) within the European cultural, political, and historical context of 1463. Throughout the very well written tome, one comes to realize that one of the underlying levels of the fiction is the exposition of various fundamental features of Romanian culture and civilization. The author of the diary, Dracula, mentions Corpus Hermeticum no fewer than fifteen times, suggesting his own diary is some sort of philosophical counterpart. The essay focuses on several 'truths' and 'wisdom' revealed in the fictional teachings of Dracula. The boycott of History by the Romanians is identified as an echo of the philosophical approach of the famous Romanian scholar and writer Lucian Blaga. The orality of Romanian culture is a landmark opposed to the written culture of Western Europe. The religion of the ancient Dacian god Zalmoxis is seen as the basis for the Romanian existential and/or metaphysical ethnic philosophy (a feature tackled by the famous Romanian historian of religion Mircea Eliade), with a suggestion that Hermes Trismegistus may have written his Corpus Hermeticum under the influence of Zalmoxis. The historical figure of the last Dacian king, Decebalus (d. 106 AD), is a good pretext for a tantalizing Indo-European suggestion that the prehistoric Thraco-Dacian people may have been the ancestors of the first Romans settled in Latium. The lost diary of the Emperor Trajan, De Bello Dacico, may have proved that the unknown language of the Dacians was very much like the Latin language (a secret well hidden by the Vatican). The attitude of the Dacians towards death, as described by Herodotus, may have later inspired Pythagoras, Socrates, the Eleusinian and Orphic Mysteries, etc. All of this is set within the Humanistic and Renaissance European context of the epoch, Dracula having close relationships with scholars such as Nicolaus Cusanus, Cosimo de' Medici, Marsilio Ficino, Pope Pius II, etc. Thus The Diary of Dracula turns out as exciting and stupefying as Corpus Hermeticum, a book impossible to assimilate entirely, yet a reference not wise to be ignored.

Keywords: Corpus Hermeticum, Dacians, Dracula, Zalmoxis

Procedia PDF Downloads 160
789 Phytochemical and Antimicrobial Properties of Zinc Oxide Nanocomposites on Multidrug-Resistant E. coli Enzyme: In-vitro and in-silico Studies

Authors: Callistus I. Iheme, Kenneth E. Asika, Emmanuel I. Ugwor, Chukwuka U. Ogbonna, Ugonna H. Uzoka, Nneamaka A. Chiegboka, Chinwe S. Alisi, Obinna S. Nwabueze, Amanda U. Ezirim, Judeanthony N. Ogbulie

Abstract:

Antimicrobial resistance (AMR) is a major threat to the global health sector. Zinc oxide nanocomposites (ZnONCs), composed of zinc oxide nanoparticles and phytochemicals from Azadirachta indica aqueous leaf extract, were assessed for their physicochemical properties and their in silico and in vitro antimicrobial activity against enzymes of multidrug-resistant Escherichia coli. Gas chromatography coupled with mass spectrometry (GC-MS) analysis of the ZnONCs revealed the presence of twenty volatile phytochemical compounds, among which is scoparone. Characterization of the ZnONCs was done using ultraviolet-visible spectroscopy (UV-vis), energy dispersive X-ray spectroscopy (EDX), transmission electron microscopy (TEM), scanning electron microscopy (SEM), and X-ray diffraction (XRD). Dehydrogenase converts colorless 2,3,5-triphenyltetrazolium chloride to red triphenyl formazan (TPF); the rate of formazan formation in the presence of ZnONCs is proportional to the enzyme activity. The color is extracted and determined at 500 nm, and the percentage of enzyme activity is calculated. To determine the bioactive components of the ZnONCs, characterize their binding to the enzymes, and evaluate the stability of the enzyme-ligand complexes, density functional theory (DFT) analysis, docking, and molecular dynamics simulations were employed, respectively. The results showed arrays of ZnONC nanorods with maximal absorption wavelengths of 320 nm and 350 nm, thermally stable over the temperature range of 423.77 to 889.69 ℃. The in vitro study assessed the dehydrogenase inhibitory properties of the ZnONCs, a conjugate of ZnONCs and ampicillin (ZnONCs-amp), the aqueous leaf extract of A. indica, and ampicillin (the standard drug). The findings revealed that at a concentration of 500 μg/mL, 57.89% of the enzyme activity was inhibited by the ZnONCs, compared to 33.33% by the standard drug (ampicillin) and 21.05% by the aqueous leaf extract of A. indica. The inhibition of enzyme activity by the ZnONCs at 500 μg/mL was further enhanced to 89.74% by conjugation with ampicillin. The in silico study on the ZnONCs revealed scoparone as the most viable competitor of nicotinamide adenine dinucleotide (NAD⁺) for the coenzyme binding pocket on E. coli malate and histidinol dehydrogenase. From the findings, it can be concluded that the scoparone component of the nanocomposites, in synergy with the zinc oxide nanoparticles, inhibited E. coli malate and histidinol dehydrogenase by competitively binding to the NAD⁺ pocket, and that the conjugation of the ZnONCs with ampicillin further enhanced the antimicrobial efficiency of the nanocomposite against multidrug-resistant E. coli.
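A minimal sketch of the activity calculation described above, assuming the percentage of residual activity is taken as the ratio of formazan absorbance at 500 nm in the treated sample to that in an untreated control; the absorbance values are illustrative, not the study's measurements.

```python
# Hedged sketch: residual dehydrogenase activity (%) from formazan absorbance.
def residual_activity(a500_treated: float, a500_control: float) -> float:
    """Residual dehydrogenase activity (%) relative to an untreated control."""
    return 100.0 * a500_treated / a500_control

a_control = 0.82   # untreated E. coli culture (illustrative)
a_znoncs = 0.35    # culture exposed to ZnONCs (illustrative)
activity = residual_activity(a_znoncs, a_control)
print(f"activity: {activity:.1f}%  ->  inhibition: {100 - activity:.1f}%")
```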

Keywords: antimicrobial resistance, dehydrogenase activities, E. coli, zinc oxide nanocomposites

Procedia PDF Downloads 51
788 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis

Authors: Coriolano Salvini, Ambra Giovannelli

Abstract:

The use of renewable energy sources for electric power production leads to reduced CO₂ emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand safely and cost-efficiently over time. Significant benefits in terms of grid system applications, end-use applications, and renewable applications can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small- to medium-size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small-size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where required, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since during the charging phase the electric power absorbed has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously during the filling of the reservoir, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information to appropriately size the compression system and to manage it in the most effective way. Various cases characterized by different system requirements are analysed, and the results are given and widely discussed.
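For orientation, a minimal sketch of the stage-wise compression power estimate such a sizing tool must evaluate as the reservoir pressure rises, assuming an ideal gas, equal pressure ratios per stage, perfect intercooling back to inlet temperature, and a constant isentropic efficiency; all numbers are illustrative and not taken from the paper.

```python
# Hedged sketch: shaft power of an intercooled multi-stage compressor filling
# a reservoir, with power rising as the reservoir pressure increases.
R_AIR = 287.0   # gas constant of air, J/(kg*K)
K = 1.4         # specific heat ratio of air

def compression_power(m_dot, p_in, p_res, t_in=293.15, stages=3, eta=0.8):
    """Shaft power (W) assuming equal stage pressure ratios, intercooling back
    to t_in between stages, and constant isentropic efficiency eta."""
    beta = (p_res / p_in) ** (1.0 / stages)                    # per-stage ratio
    w_stage = K / (K - 1) * R_AIR * t_in * (beta ** ((K - 1) / K) - 1) / eta
    return m_dot * stages * w_stage

# Power demand rises as the man-made reservoir fills (illustrative values)
for p_res in [10e5, 30e5, 60e5]:                               # reservoir pressure, Pa
    p_kw = compression_power(m_dot=0.5, p_in=1.013e5, p_res=p_res) / 1e3
    print(f"p_res = {p_res / 1e5:4.0f} bar -> {p_kw:6.1f} kW")
```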

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management

Procedia PDF Downloads 230
787 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm

Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera

Abstract:

The aim of this review is to explain what intracranial hypotension is and what its main causes are, and to outline the diagnostic management in different clinical situations, with an understanding of the radiological findings and the pathophysiological substrate. An approach to the diagnostic management is presented: the guidelines to follow, the different tests available, and the typical findings. We reviewed the myelo-CT and myelo-MR studies of patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident on baseline CT and MRI were also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder, with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The clinical feature is an orthostatic headache, defined as the development or aggravation of headache when patients move from a supine to an upright position, which disappears or typically improves after lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may exist. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea, and according to necessity, a range of available tests performed from less to more complex: cerebral CT, cerebral and spine MRI without contrast, and CT/MRI with intrathecal contrast. In a situation of intracranial hypotension with the presence of rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is found physiologically in the CSF, together with sinus CT and cerebral MRI including constructive interference in steady state (CISS) sequences. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI in establishing the diagnosis and location of a CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or doubts in the baseline tests.

Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula

Procedia PDF Downloads 127
786 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Case Study of the Beterou Catchment

Authors: Ella Sèdé Maforikan

Abstract:

Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study assesses the classification performance of two satellite datasets and evaluates the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. A supervised classification employing the Random Forest (RF) algorithm on Google Earth Engine (GEE) categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen for its high-performance computing capabilities, which mitigate the computational burdens associated with traditional land cover classification methods. By eliminating the need to download individual satellite images and by providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the expeditious creation of precise land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. As future work, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment, together with a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes.
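For readers unfamiliar with GEE, the setup described above looks roughly like the following Earth Engine Python sketch; the geometry, band list, tree count, and the labelled training collection are placeholders, not the study's actual assets or settings.

```python
# Minimal Earth Engine (Python API) sketch of the described setup: a Random
# Forest classifier on a Sentinel-2 median composite plus terrain features.
import ee

ee.Initialize()

catchment = ee.Geometry.Rectangle([2.0, 9.0, 3.0, 10.0])  # placeholder for Beterou
s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(catchment)
        .filterDate('2020-06-01', '2021-03-31')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
        .median())

# Terrain features (slope, elevation) that improved accuracy in the study
dem = ee.Image('USGS/SRTMGL1_003')
stack = (s2.select(['B2', 'B3', 'B4', 'B8', 'B11'])
           .addBands(dem.rename('elevation'))
           .addBands(ee.Terrain.slope(dem)))

# `samples`: FeatureCollection of labelled points with a 'landcover' property
samples = ee.FeatureCollection('users/your_account/beterou_training')  # placeholder
training = stack.sampleRegions(collection=samples, properties=['landcover'], scale=10)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty='landcover',
    inputProperties=stack.bandNames())

classified = stack.classify(classifier)
```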

Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment

Procedia PDF Downloads 63
785 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data

Authors: Minjuan Sun

Abstract:

Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during credit evaluation and risk control; for example, an individual’s bank credit records are not visible to online lenders, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower’s creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are matched with banks’ loan default records. Each separately captures distinct dimensions of a person’s characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate acceptable to excellent prediction results, and that the different types of data tend to complement each other for better performance. The traditional types of data banks normally use, such as income, occupation, and credit history, update over long cycles and hence cannot reflect more immediate changes, like a deterioration in financial status caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower’s credit capabilities and risks. From this empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage and because they largely resolve the "thin-file" issue, since digital footprints come in much larger volume and at higher frequency.
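The comparative analysis of machine learning methods can be illustrated with a standard scikit-learn sketch like the one below; the feature file, column names, and model settings are placeholders, since the paper's matched footprint/default data are not public.

```python
# Illustrative sketch of a comparative credit-default experiment: several
# standard classifiers scored by ROC-AUC on a held-out split.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv('footprint_features.csv')   # placeholder: matched footprint/default data
X, y = df.drop(columns=['default']), df['default']
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    'logistic': LogisticRegression(max_iter=1000),
    'random_forest': RandomForestClassifier(n_estimators=300, random_state=0),
    'gradient_boosting': GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f'{name}: AUC = {auc:.3f}')
```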

Keywords: credit score, digital footprint, Fintech, machine learning

Procedia PDF Downloads 165
784 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential for human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate, living individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual’s heartbeats at different instances of time. These variations may occur due to muscle flexure, changes of mental or emotional state, and changes of sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of detecting the QRS complex. Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (NMIG) - PTB database showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.
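A minimal sketch of the proposed feature extraction, as we read it from the description above; the window radius, bin count, and the treatment of the 0.00025/0.04 cut-offs as normalized frequencies are our assumptions, not the authors' stated settings.

```python
# Sketch of the local-roughness (1-D local binary pattern) feature: band-pass
# filter the ECG, slide a window, threshold neighbours against the centre
# sample, weight the resulting bits, and histogram the codes.
import numpy as np
from scipy.signal import butter, filtfilt

def local_roughness_histogram(ecg, radius=4):
    # 2nd-order Butterworth band-pass; cut-offs taken as normalized frequencies
    b, a = butter(2, [0.00025, 0.04], btype='bandpass')
    x = filtfilt(b, a, ecg)
    weights = 2 ** np.arange(2 * radius)                 # binary weights
    codes = []
    for i in range(radius, len(x) - radius):
        neighbours = np.concatenate([x[i - radius:i], x[i + 1:i + radius + 1]])
        bits = (neighbours >= x[i]).astype(int)          # compare to centre sample
        codes.append(int(np.dot(bits, weights)))
    n_bins = 2 ** (2 * radius)
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / hist.sum()                             # normalized descriptor

# Identification would then compare per-subject histograms, e.g. with a
# minimum-distance-to-mean or Bayesian classifier as in the abstract.
```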

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 405
783 Inherent Difficulties in Countering Islamophobia

Authors: Imbesat Daudi

Abstract:

Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This has become possible because of advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows a sine curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes its final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases become counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as neoconservative ideologues, who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas change once they are stored in the occipital lobe. The human brain is incapable of evaluating ideas further once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will require a coordinated effort of intellectuals, writers, and the media.

Keywords: Islamophobia, Islam and violence, anti-Islamophobia, demonization of Islam

Procedia PDF Downloads 48
782 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution

Authors: Dayane de Almeida

Abstract:

This work presents a study that demonstrates the usability of categories of analysis from Discourse Semiotics – also known as Greimassian Semiotics – in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts differing in formality degree, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows:
- The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes.
- Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and presents different levels of abstraction. This ‘grammar’ would be a style marker.
- Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. How, then, to determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when intra-speaker variation is known to depend on so many factors?
- The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently choose the same options, differently from one another, each one’s option has discriminatory power.
- Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent.
The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples, as sketched below). The results confirmed the hypothesis and, hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
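The Jaccard step is straightforward; a minimal sketch follows, with invented placeholder tags standing in for the semiotic categories produced by Corpus Tool.

```python
# Minimal sketch of the similarity step: Jaccard coefficient between the sets
# of semiotic tags observed in two texts. Tag names are invented placeholders.
def jaccard(tags_a: set[str], tags_b: set[str]) -> float:
    """|A intersect B| / |A union B|: 1.0 = identical tag sets, 0.0 = disjoint."""
    if not tags_a and not tags_b:
        return 1.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

text1 = {"euphoria", "conjunction", "thematic_role:subject"}   # placeholder tags
text2 = {"euphoria", "disjunction", "thematic_role:subject"}
print(f"Jaccard similarity: {jaccard(text1, text2):.2f}")      # 0.50
```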

Keywords: authorship attribution, content plane, forensic linguistics, greimassian semiotics, intraspeaker variation, style

Procedia PDF Downloads 243
781 Metamorphosis of Caste: An Examination of the Transformation of Caste from a Material to Ideological Phenomenon in Sri Lanka

Authors: Pradeep Peiris, Hasini Lecamwasam

Abstract:

The fluid, ambiguous, and often elusive existence of caste among the Sinhalese in Sri Lanka has inspired many scholarly endeavours. Originally, Sinhalese caste was organized according to the occupational functions assigned to various groups in society. Hence cultivators came to be known as Goyigama, washers as Dobi, drummers as Berava, smiths as Navandanna, and so on. During pre-colonial times, the specialized services of various groups were deployed to build water reservoirs, cultivate the land, and/or sustain the Buddhist order by material means. However, how and why caste prevails in Sinhalese society today, when labour is in ideal terms free to move where it wants, or, in other words, when occupation is no longer strictly determined or restricted by birth, is a question worth exploring. Hence this paper explores how, and perhaps more interestingly why, when the nexus between traditional occupations and caste status is fast disappearing, caste itself has managed to survive and continues to be salient in politics in Sri Lanka. In answer to this larger question, the paper looks at caste from three perspectives: 1) Buddhism, whose ethical project provides a justification of social stratifications that transcends economic bases; 2) Capitalism, which has reactivated and reproduced archaic relations in a process of 'accumulation by subordination', not only by reinforcing the marginality of peripheral caste groups but also by exploiting caste divisions to hinder any realization of class interests; and 3) Democracy, whose supposed equalizing effect, expected through its ‘one man–one vote’ approach, has been subverted precisely by itself, whereby the aggregate ultimately comes down to how many such votes each ‘group’ in society has. This study draws from fieldwork carried out over three years in Dedigama (in the District of Kegalle, Central Province) and Kelaniya (in the District of Colombo, Western Province) in Sri Lanka. The choice of field locations was driven by the need to capture rural and urban dynamics related to caste, since caste is more apparently manifest in rural areas, whose material conditions partially warrant its prevalence, whereas in urban areas it exists mostly in the ideological terrain. In building its analysis, the study employs a combination of objectivist and subjectivist approaches to capture the material and ideological existence of caste and caste politics in Sinhalese society. Methods such as in-depth interviews, observation, and the collection of demographic and interpretive data from secondary sources were used. The paper is situated in a critical theoretical framework of social inquiry, questioning dominant assumptions regarding such meta-labels as ‘Capitalism’ and ‘Democracy’, as well as the supposed emancipatory function of religion (focusing on Buddhism).

Keywords: Buddhism, capitalism, caste, democracy, Sri Lanka

Procedia PDF Downloads 137
780 Biflavonoids from Selaginellaceae as Epidermal Growth Factor Receptor Inhibitors and Their Anticancer Properties

Authors: Adebisi Adunola Demehin, Wanlaya Thamnarak, Jaruwan Chatwichien, Chatchakorn Eurtivong, Kiattawee Choowongkomon, Somsak Ruchirawat, Nopporn Thasana

Abstract:

The epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein involved in cellular signalling processes, and its aberrant activity is crucial in the development of many cancers, such as lung cancer. Selaginellaceae are fern allies that have long been used in Chinese traditional medicine to treat various cancer types, especially lung cancer. Biflavonoids, the major secondary metabolites of Selaginellaceae, have numerous pharmacological activities, including anticancer and anti-inflammatory effects. For instance, amentoflavone induces a cytotoxic effect in a human NSCLC cell line via the inhibition of PARP-1. However, to the best of our knowledge, there are no studies on biflavonoids as EGFR inhibitors. Thus, this study aims to investigate the EGFR inhibitory activities of biflavonoids isolated from Selaginella siamensis and Selaginella bryopteris. Amentoflavone, tetrahydroamentoflavone, sciadopitysin, robustaflavone, robustaflavone-4-methylether, delicaflavone, and chrysocauloflavone were isolated from the ethyl acetate extract of the whole plants. The structures were determined using NMR spectroscopy and mass spectrometry. An in vitro study was conducted to evaluate their cytotoxicity against the A549, HEPG2, and T47D human cancer cell lines using the MTT assay. In addition, a target-based assay was performed to investigate their EGFR inhibitory activity using a kinase inhibition assay. Finally, a molecular docking study was conducted to predict the binding modes of the compounds. Robustaflavone-4-methylether and delicaflavone showed the best cytotoxic activity on all the cell lines, with IC50 (µM) values of 18.9 ± 2.1 and 22.7 ± 3.3 on A549, respectively. Of these biflavonoids, delicaflavone showed the most potent EGFR inhibitory activity, with 84% relative inhibition at 0.02 nM using erlotinib as a positive control. Robustaflavone-4-methylether showed 78% inhibition at 0.15 nM. The docking scores obtained from the molecular docking study correlated with the kinase inhibition assay: robustaflavone-4-methylether and delicaflavone had docking scores of 72.0 and 86.5, respectively. The inhibitory activity of delicaflavone appears to be linked to its C2”=C3” and 3-O-4”’ linkage pattern. Thus, this study suggests that the structural features of these compounds could serve as a basis for developing new EGFR-TK inhibitors.

Keywords: anticancer, biflavonoids, EGFR, molecular docking, Selaginellaceae

Procedia PDF Downloads 198
779 Assessment of the Impact of Atmospheric Air, Drinking Water and Socio-Economic Indicators on the Primary Incidence of Children in Altai Krai

Authors: A. P. Pashkov

Abstract:

The number of environmental factors that adversely affect children's health grows every year, and their combination differs from territory to territory. The contribution of socio-economic factors to the health status of the younger generation is increasing. The child’s body is the most sensitive to changes in environmental conditions, responding to them with a deterioration in health. Over the past years, scientists have studied the relationship between environmental factors and childhood morbidity. Currently, there is a trend towards studying the regional characteristics of the interaction between combinations of environmental factors and the child's body. The aim of this work was to identify trends in the primary non-infectious morbidity of children in the Altai Territory, a unique region that combines territories with different levels of environmental quality, and to assess the effect of atmospheric air, drinking water, and socio-economic indicators on the incidence of children in the region. An unfavorable trend was revealed in the region for the incidence of such nosological groups as neoplasms, including malignant ones; diseases of the endocrine system, including obesity and thyroid disease; diseases of the circulatory system; digestive diseases; diseases of the genitourinary system; congenital anomalies; and respiratory diseases. Some groups of diseases revealed a geographical distribution pattern during mapping and significant correlations. Some nosologies are related to the integrated socio-economic assessment: diseases of the circulatory system and respiratory diseases (direct correlation), and diseases of the endocrine system, eating disorders, and metabolic disorders (inverse correlation). The analysis of associations between the incidence of children and the average annual concentrations of substances polluting the air and drinking water showed reliable correlations in areas with critical and strained environmental quality. This fact confirms that the population living in contaminated areas is subject to the negative influence of environmental factors, which immediately affects the health status of children. The results indicate the need for a detailed assessment of the influence of environmental factors on the incidence of children at the regional level, the formation of a database, and the development of automated programs that can predict the incidence in each specific territory. This will increase the effectiveness, including the cost-effectiveness, of preventive measures.
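The association analysis described above can be illustrated with a short Spearman-correlation sketch; the file and column names are invented placeholders, not the study's actual indicators.

```python
# Illustrative sketch: Spearman rank correlation between district-level
# childhood incidence and mean annual pollutant concentrations.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv('altai_districts.csv')   # placeholder: one row per district
for pollutant in ['no2_air', 'formaldehyde_air', 'nitrates_water']:
    rho, p = spearmanr(df['endocrine_incidence'], df[pollutant])
    flag = 'significant' if p < 0.05 else 'n.s.'
    print(f'{pollutant}: rho = {rho:+.2f}, p = {p:.3f} ({flag})')
```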

Keywords: incidence of children, regional features, socio-economic factors, environmental factors

Procedia PDF Downloads 115
778 Differentiated Surgical Treatment of Patients With Nontraumatic Intracerebral Hematomas

Authors: Mansur Agzamov, Valery Bersnev, Natalia Ivanova, Istam Agzamov, Timur Khayrullaev, Yulduz Agzamova

Abstract:

Objectives: Treatment of hypertensive intracerebral hematoma (ICH) is controversial, and the advantage of one surgical method over another has not been established. Recent reports suggest a favorable effect of minimally invasive surgery. We conducted a small comparative study of different surgical methods. Methods: We analyzed the results of surgical treatment of 176 patients with intracerebral hematomas aged from 41 to 78 years; 113 (64.2%) were men and 63 (35.8%) women. Level of consciousness: conscious - 18, lethargy - 63, stupor - 55, moderate coma - 40. All patients underwent computed tomography (CT) of the brain on admission and at follow-up. The ICH was located in the putamen in 87 cases, in the thalamus in 19, in the mixed area in 50, and in the lobar area in 20. Ninety-seven patients had an intraventricular hemorrhage component. The baseline volume of the ICH was measured according to a bedside method of measuring intracerebral hematoma volume on CT. Depending on the intervention, the patients were divided into three groups. Group 1, 90 patients, underwent open craniotomy. Level of consciousness: conscious - 11, lethargy - 33, stupor - 18, moderate coma - 18. The hemorrhage was located in the putamen in 51, in the thalamus in 3, in the mixed area in 25, and in the lobar area in 11. Group 2, 22 patients, underwent a smaller craniotomy with endoscope-assisted evacuation. Level of consciousness: conscious - 4, lethargy - 9, stupor - 5, moderate coma - 4. The hemorrhage was located in the putamen in 5, in the thalamus in 15, and in the mixed area in 2. Group 3, 64 patients, underwent minimally invasive removal of the hematoma using an original device (Russian Federation patent No. 65382), a funnel cannula, which is introduced into the hematoma cavity after special markings. Level of consciousness: conscious - 3, lethargy - 21, stupor - 22, moderate coma - 18. The hemorrhage was located in the putamen in 31, in the mixed area in 23, in the thalamus in 1, and in the lobar area in 9. Results of treatment were evaluated by the Glasgow Outcome Scale. Results: The study showed that the results of surgical treatment in the three groups depended on the level of consciousness and the volume and localization of the hematoma. In group 1, good recovery was observed in 8 cases (8.9%), moderate disability in 22 (24.4%), severe disability in 17 (18.9%), and death in 43 (47.8%). In group 2, good recovery was observed in 7 cases (31.8%), moderate disability in 7 (31.8%), severe disability in 5 (22.7%), and death in 7 (31.8%). In group 3, good recovery was observed in 9 cases (14.1%), moderate disability in 17 (26.5%), severe disability in 19 (29.7%), and death in 19 (29.7%). Conclusions: The funnel cannula method made it possible to avoid open craniotomy in the majority of patients with putaminal hematomas. The minimally invasive technique reduced postoperative mortality and improved the treatment outcomes of these patients.
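The bedside volume method referred to is presumably the classical ABC/2 rule (our assumption; the abstract does not name it), which estimates the volume of an ellipsoid from three orthogonal diameters measured on CT:

```python
# ABC/2 bedside estimate of hematoma volume on CT (our assumed reading of the
# "bedside method" mentioned in the abstract; it is not named there).

def abc2_volume_ml(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Estimated hematoma volume in mL (= cm^3) from the ABC/2 rule.

    a_cm -- largest hematoma diameter on the axial slice with the biggest clot
    b_cm -- diameter perpendicular to A on the same slice
    c_cm -- craniocaudal extent (number of slices x slice thickness)
    """
    return (a_cm * b_cm * c_cm) / 2.0

print(abc2_volume_ml(5.0, 4.0, 3.0))  # 30.0 mL
```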

Keywords: nontraumatic intracerebral hematoma, minimally invasive surgical technique, funnel cannula, differentiated surgical treatment

Procedia PDF Downloads 84
777 Towards End-To-End Disease Prediction from Raw Metagenomic Data

Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker

Abstract:

Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches performance comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
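Steps (i) and (ii) can be illustrated with a short sketch using gensim's Word2Vec; the k, vector size, and toy reads are placeholders rather than the authors' settings, and the read embedding here is a simple mean rather than the paper's learned composition.

```python
# Minimal sketch of steps (i)-(ii) as described: split reads into overlapping
# k-mer "words", learn k-mer embeddings with Word2Vec, and take a read
# embedding as the mean of its k-mer vectors.
import numpy as np
from gensim.models import Word2Vec

def kmers(read: str, k: int = 6) -> list[str]:
    """Overlapping k-mers of a DNA read."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

reads = ["ACGTACGTGGCTA", "TTGACGTACGTAA"]      # toy reads; real input is fastq
corpus = [kmers(r) for r in reads]

model = Word2Vec(corpus, vector_size=64, window=5, min_count=1, sg=1)

def read_embedding(read: str) -> np.ndarray:
    vecs = [model.wv[km] for km in kmers(read) if km in model.wv]
    return np.mean(vecs, axis=0)

# Step (iv) would aggregate read embeddings per patient with an attention-based
# multiple-instance-learning classifier; the attention weights then indicate
# which genomes drive each prediction.
print(read_embedding(reads[0]).shape)  # (64,)
```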

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine

Procedia PDF Downloads 126