Search results for: large woody debris
1299 Controlling Shape and Position of Silicon Micro-nanorolls Fabricated using Fine Bubbles during Anodization
Authors: Yodai Ashikubo, Toshiaki Suzuki, Satoshi Kouya, Mitsuya Motohashi
Abstract:
Functional microstructures such as wires, fins, needles, and rolls are currently being applied to a variety of high-performance devices. In this context, a roll structure (silicon micro-nanoroll) was formed on the surface of a silicon substrate via fine bubbles during anodization using extremely diluted hydrofluoric acid (HF + H₂O). The as-formed roll had a microscale length and a width of approximately 1 µm. The rolls were wound 3-10 times, and the thickness of the film forming them was about 10 nm. Thus, the structure is promising for applications as a distinct device material. These rolls functioned as capsules and/or pipelines. To date, the number of rolls and the roll length have been controlled by the anodization conditions. In general, controlling the position and winding state of the rolls is required for device applications; however, this has not yet been investigated. Grooves formed on the silicon surface before anodization might be useful for controlling the bubbles. In this study, we investigated the effect of such grooves on the position and shape of the rolls. The surfaces of the silicon wafers were anodized. The starting material was p-type (100) single-crystalline silicon wafers with a resistivity of 5-20 Ω·cm. Grooves were formed on the surface of the substrate before anodization using sandpaper and a diamond pen. The average width and depth of the grooves were approximately 1 µm and 0.1 µm, respectively. The HF concentration {HF/(HF + C₂H₅OH + H₂O)} was 0.001% by volume, and the C₂H₅OH concentration {C₂H₅OH/(HF + C₂H₅OH + H₂O)} was 70%. A vertical single-tank cell and a Pt cathode were used for anodization. The silicon rolls were observed by field-emission scanning electron microscopy (FE-SEM; JSM-7100, JEOL), and the atomic bonding state of the rolls was evaluated using X-ray photoelectron spectroscopy (XPS; ESCA-3400, Shimadzu). For a straight groove, the rolls formed along the groove, indicating that the orientation of the rolls can be controlled by the grooves. For a lattice-like groove, the rolls formed inside the lattice and along its long sides; in other words, the aspect ratio of the lattice is very important for roll formation. In addition, many rolls formed with non-uniform winding states when the lattice size was too large, whereas no rolls formed for small lattices. These results indicate that there is an optimal lattice size for roll formation. In the future, we plan to form rolls using grooves patterned by a lithography technique instead of sandpaper and the diamond pen. Furthermore, rolls containing nanoparticles will be formed for nanodevices.
Keywords: silicon roll, anodization, fine bubble, microstructure
Procedia PDF Downloads 18
1298 Maturity Level of Knowledge Management in Whole Life Costing in the UK Construction Industry: An Empirical Study
Authors: Ndibarefinia Tobin
Abstract:
The UK construction industry has been under pressure for many years to produce economical buildings which offer value for money, not only during the construction phase but, more importantly, during the full life of the building. Whole life costing is an economic analysis tool that takes into account the total cost of investment in, ownership and operation of, and subsequent disposal of the product or system to which the method is applied. In spite of its importance, the practice is still crippled by the lack of tangible evidence, 'know-how' skills and knowledge, i.e. a lack of professionals with the knowledge and training needed to use the practice in construction projects. This situation is compounded by the absence of available whole life costing data from relevant projects, the lack of data collection mechanisms, and so on. These problems have forced many construction organisations to adopt project enhancement initiatives to boost their performance in the use of whole life costing techniques, so as to produce economical buildings which offer value for money during the construction stage as well as over the whole life of the building or asset. The management of knowledge in whole life costing is considered one such project enhancement initiative, and it is becoming imperative for the performance and sustainability of an organisation. Procuring building projects using the whole life costing technique is heavily reliant on the knowledge, experience, ideas and skills of workers, which come from many sources, including other individuals, electronic media and documents. Given the diversity of knowledge, capabilities and skills of employees across an organisation, it is important that these are directed and coordinated efficiently so as to capture, retrieve and share knowledge and thereby improve the performance of the organisation. The implementation of the knowledge management concept reaches a different level in each organisation, and measuring the maturity level of knowledge management in whole life costing practice paints a comprehensible picture of how knowledge is managed in construction organisations. Purpose: The purpose of this study is to identify knowledge management maturity in UK construction organisations adopting whole life costing in construction projects. Design/methodology/approach: This study adopted a survey method, distributing questionnaires to large construction companies that implement knowledge management activities in whole life costing practice in construction projects. Four levels of knowledge management maturity were proposed in this study. Findings: The results show that 34 contractors were at the practised level, 26 contractors at the managed level and 12 contractors at the continuously improved level.
Keywords: knowledge management, whole life costing, construction industry, knowledge
Procedia PDF Downloads 244
1297 Relatively High Heart-Rate Variability Predicts Greater Survival Chances in Patients with Covid-19
Authors: Yori Gidron, Maartje Mol, Norbert Foudraine, Frits Van Osch, Joop Van Den Bergh, Moshe Farchi, Maud Straus
Abstract:
Background: The worldwide pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which began in 2019, also known as Covid-19, has infected over 136 million people and has tragically taken the lives of over 2.9 million people worldwide. Many of the complications and deaths are predicted by the inflammatory 'cytokine storm.' One way to progress in the prevention of death is to find a predictive and protective factor that inhibits inflammation on the one hand and increases anti-viral immunity on the other. The vagal nerve does precisely both. This study examined whether vagal nerve activity, indexed by heart-rate variability (HRV), predicts survival in patients with Covid-19. Method: We performed a pseudo-prospective study, in which we retroactively obtained ECGs of 271 Covid-19 patients arriving at a large regional hospital in The Netherlands. HRV was indexed by the standard deviation of the intervals between normal heartbeats (SDNN). We examined patients' survival at 3 weeks and took into account multiple confounders and known prognostic factors (e.g., age, heart disease, diabetes, hypertension). Results: Patients' mean age was 68 (range: 25-95), and nearly 22% of the patients had died by 3 weeks. Their mean SDNN (17.47 msec) was far below the norm (50 msec). Importantly, relatively higher HRV significantly predicted a higher chance of survival after statistically controlling for patients' age, cardiac diseases, hypertension and diabetes (hazard ratio (HR) and 95% confidence interval (95% CI): HR = 0.49, 95% CI: 0.26-0.95, p < 0.05). However, since HRV declines rapidly with age and since age is a profound predictor in Covid-19, we split the sample by the median age (70). Subsequently, we found that higher HRV significantly predicted greater survival in patients older than 70 (HR = 0.35, 95% CI: 0.16-0.78, p = 0.01), but HRV did not predict survival in patients below the age of 70 years (HR = 1.11, 95% CI: 0.37-3.28, p > 0.05). Conclusions: To the best of our knowledge, this is the first study showing that higher vagal nerve activity, as indexed by HRV, is an independent predictor of higher chances of survival in Covid-19. The results are in line with the protective role of the vagal nerve in diseases and extend this to a severe infectious illness. Studies should replicate these findings and then test in controlled trials whether activating the vagus nerve may prevent mortality in Covid-19.
Keywords: Covid-19, heart-rate variability, prognosis, survival, vagal nerve
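To make the reported analysis concrete, the following is a minimal sketch of a Cox proportional-hazards fit of survival on SDNN with the listed confounders, using the Python lifelines library; the data file and column names are hypothetical stand-ins, not the authors' actual pipeline.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-patient table: sdnn (msec), age, cardiac_disease (0/1),
# hypertension (0/1), diabetes (0/1), time_days (follow-up, max 21),
# died (1 = death within 3 weeks, 0 = censored)
df = pd.read_csv("covid_hrv_cohort.csv")

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
cph.print_summary()  # exp(coef) is the hazard ratio; HR < 1 for sdnn
                     # would mean higher HRV predicts better survival

# Age-stratified re-analysis, mirroring the split reported above
older = df[df["age"] >= 70].drop(columns=["age"])
cph_old = CoxPHFitter().fit(older, duration_col="time_days", event_col="died")
print(cph_old.hazard_ratios_["sdnn"])
```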
Procedia PDF Downloads 175
1296 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for patients for one or more specific medical purposes: the diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body, provided that the software does not achieve its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract will provide an overview of the current regulatory framework, the evolution of the international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
Procedia PDF Downloads 89
1295 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food
Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite
Abstract:
The most complicated step in the determination of volatile compounds in complex matrices is the separation of analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated to a temperature higher than the solvent boiling point. In 2018, it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, the microwave assisted extraction conditions and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under the optimized conditions, the quality parameters of the prepared technique were determined. The suggested technique was applied for the determination of hexanal in potato chips and other fat-rich food.
Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction
Procedia PDF Downloads 195
1294 Synthesis and Characterization of LiCoO2 Cathode Material by Sol-Gel Method
Authors: Nur Azilina Abdul Aziz, Tuti Katrina Abdullah, Ahmad Azmin Mohamad
Abstract:
Lithium-transition metals and some of their oxides, such as LiCoO₂, LiMn₂O₄, LiFePO₄, and LiNiO₂, have been used as cathode materials in high-performance lithium-ion rechargeable batteries. Among these cathode materials, LiCoO₂ has the potential to be widely used in lithium-ion batteries because of its layered crystalline structure, good capacity, high cell voltage, high specific energy density, high power rate, low self-discharge, and excellent cycle life. This cathode material has been widely used in commercial lithium-ion batteries due to its low irreversible capacity loss and good cycling performance. However, several problems interfere with producing a material that has good electrochemical properties, including the crystallinity, the average particle size and the particle size distribution. In recent years, the synthesis of nanoparticles has been intensively investigated. Powders prepared by the traditional solid-state reaction have a large particle size and broad size distribution. On the other hand, solution methods can reduce the particle size to the nanometer range and control the particle size distribution. In this study, LiCoO₂ was synthesized using the sol-gel preparation method, in which lithium acetate and cobalt acetate were used as reactants. Stoichiometric amounts of the reactants were dissolved in deionized water. The solutions were stirred for 30 hours using a magnetic stirrer, followed by heating at 80°C under vigorous stirring until a viscous gel was formed. The as-formed gel was calcined at 700°C for 7 h under ambient atmosphere. The structure and morphology of the LiCoO₂ were characterized using X-ray diffraction and scanning electron microscopy. The diffraction pattern of the material can be indexed based on the α-NaFeO₂ structure. The clear splitting of the hexagonal doublets (006)/(102) and (108)/(110) in the pattern indicates that the material formed in a well-ordered hexagonal structure. No impurity phase can be seen in this range, probably due to the homogeneous mixing of the cations in the precursor. Furthermore, the SEM micrograph of the LiCoO₂ shows an almost uniform particle size distribution, with a particle size between 0.3-0.5 µm. In conclusion, LiCoO₂ powder was successfully synthesized using the sol-gel method. The LiCoO₂ showed a hexagonal crystal structure, and the prepared sample clearly indicates the pure phase of LiCoO₂. Meanwhile, the morphology of the sample showed that the particle size and size distribution of the particles are almost uniform.
Keywords: cathode material, LiCoO₂, lithium-ion rechargeable batteries, sol-gel method
Procedia PDF Downloads 373
1293 Identifying and Understanding Pragmatic Failures in Portuguese Foreign Language by Chinese Learners in Macau
Authors: Carla Lopes
Abstract:
It is clear nowadays that the proper performance of different speech acts is one of the most difficult obstacles that a foreign language learner has to overcome to be considered communicatively competent. This communication presents the results of an investigation into the pragmatic performance of Portuguese language students at the University of Macau. The research discussed herein is based on a survey consisting of fourteen speaking situations to which the participants must respond in writing, and which includes different types of speech acts: apology, response to a compliment, refusal, complaint, disagreement and the understanding of the illocutionary force of indirect speech acts. The responses were classified on a five-level Likert scale (quantified from 1 to 5) according to their suitability for the particular situation. In general terms, about 45% of the respondents' answers were pragmatically competent, 10% were acceptable and 45% showed weaknesses at the socio-pragmatic competence level. Given that linguistic deviations were not taken into account, we can conclude that the faults are of cultural origin. It is natural that in the presence of orthogonal cultures, such as Chinese and Portuguese, failures of this type occur, and they are barely resolved in the four years of the undergraduate program. The target population, native speakers of Cantonese or Mandarin, make their first contact with the English language before joining the Bachelor of Portuguese Language. An analysis of the socio-pragmatic failures in the respondents' answers suggests that many of them are due to a lack of cultural knowledge. The students try to compensate for this either by drawing on their native culture or by resorting to a Western culture that they consider close to the Portuguese one, namely English or US culture, previously studied and widely present in the media and on the internet. This phenomenon, known as 'pragmatic transfer', can result in linguistic behavior that may be considered inauthentic or pragmatically awkward. The resulting speech act is grammatically correct but not pragmatically feasible, since it is not suitable to the culture of the target language, either because it does not exist there or because the conditions of its use are in fact different. The analysis of the responses also supports the conclusion that these students deviate considerably from the expected, stereotyped behavior of Chinese students. We can speculate whether this linguistic behavior is a consequence of the globalization of Macau, which culturally shapes the students, makes them more open, and distinguishes them from typical Chinese students.
Keywords: Portuguese foreign language, pragmatic failures, pragmatic transfer, pragmatic competence
Procedia PDF Downloads 210
1292 Determining the Effective Substance of Cottonseed Extract on the Treatment of Leishmaniasis
Authors: Mehrosadat Mirmohammadi, Sara Taghdisi, Ali Padash, Mohammad Hossein Pazandeh
Abstract:
Gossypol, a yellowish anti-nutritional compound found in cotton plants, exists in various plant parts, including seeds, husks, leaves, and stems. Chemically, gossypol is a potent polyphenolic aldehyde with antioxidant and therapeutic properties. However, its free form can be toxic, posing risks to both humans and animals. Initially, we extracted gossypol from cotton seeds using n-hexane as a solvent (yield: 84.0 ± 4.0%). We also obtained cotton seed and cotton boll extracts via Soxhlet extraction (25:75 hydroalcoholic ratio). These extracts, combined with cornstarch, formed four herbal medicinal formulations. With ethical approval, we investigated their effects on skin wounds caused by Leishmania, comparing them to glucantime (local ampoule). The herbal formulas outperformed the control group (ethanol only) in wound treatment (p < 0.05). The average wound diameter after two months did not differ significantly between the plant extract ointments and topical glucantime. Notably, the cotton boll extract with 1% extra gossypol crystal showed the best therapeutic effect. We extracted gossypol from cotton seeds using n-hexane via Soxhlet extraction; saponification, acidification, and recrystallization steps followed, and FTIR, UV-Vis, and HPLC analyses confirmed the product's identity. Herbal medicines from cotton seeds effectively treated chronic wounds compared to the ethanol-only control group, and the wound diameter differed significantly between the extract ointments and glucantime injections. It seems that, due to the presence of large amounts of fat in the oil, the extraction of gossypol from it faces many obstacles. Extraction of this compound with our technique showed that extraction from the oil has a higher efficiency; perhaps because the oil was prepared by the cold pressing method, the likelihood of losing this compound is much lower than when the extraction is done with the Soxhlet. On the other hand, the gossypol in the oil is mostly bound to the protein, which somehow protects the gossypol until the last stage of the extraction process. Since this compound is very sensitive to light and heat, it was extracted as a derivative with acetic acid. Also, the treatment results showed that the ointment prepared with the extract is more effective and that gossypol is one of the effective ingredients in the treatment. Therefore, gossypol can be extracted from the oil and added back to the extract from which it was removed to make an effective medicine with a defined dose.
Keywords: cottonseed, glucantime, gossypol, leishmaniasis
Procedia PDF Downloads 61
1291 Impact of Urban Densification on Travel Behaviour: Case of Surat and Udaipur, India
Authors: Darshini Mahadevia, Kanika Gounder, Saumya Lathia
Abstract:
Cities, an outcome of natural growth and migration, are ever-expanding due to urban sprawl. In the Global South, urban areas are experiencing a switch from public transport to private vehicles, coupled with intensified urban agglomeration, leading to frequent, longer commutes by automobiles. This increase in travel distance and motorized vehicle kilometres leads to unsustainable cities. To achieve the nationally pledged GHG emission mitigation goal, the government is prioritizing a modal shift to low-carbon transport modes like mass transit and paratransit. Mixed land-use and urban densification are crucial for the economic viability of these projects. Informed by a desktop assessment of mobility plans and in-person primary surveys, the paper explores the challenges around urban densification and travel patterns in two Indian cities of contrasting nature: Surat, a metropolitan industrial city with a 5.9 million population and a very compact urban form, and Udaipur, a heritage city attracting a large international tourist footfall, with limited scope for further densification. Dense, mixed-use urban areas often improve access to basic services and economic opportunities by reducing distances and enabling people who don't own personal vehicles to reach them on foot or cycle. But residents travelling on different modes end up contributing similar trip lengths, highlighting the non-uniform distribution of land uses and the lack of planned transport infrastructure in the city and the urban-peri-urban networks. Additionally, it is imperative to manage these densities to reduce negative externalities like congestion, air/noise pollution, lack of public spaces, loss of livelihood, etc. The study presents a comparison of the relationship between transport systems and the built form in both cities. The paper concludes with recommendations for managing densities in urban areas along with promoting low-carbon transport choices like improved non-motorized transport and public transport infrastructure and minimizing personal vehicle usage in the Global South.
Keywords: India, low-carbon transport, travel behaviour, trip length, urban densification
Procedia PDF Downloads 216
1290 Antigen Stasis can Predispose Primary Ciliary Dyskinesia (PCD) Patients to Asthma
Authors: Nadzeya Marozkina, Joe Zein, Benjamin Gaston
Abstract:
Introduction: We have observed that many patients with Primary Ciliary Dyskinesia (PCD) benefit from asthma medications. In healthy airways, ciliary function is normal: antigens and irritants are rapidly cleared, and NO enters the gas phase normally to be exhaled. In the PCD airways, however, antigens, such as Dermatophagoides, are not as well cleared. This defect leads to oxidative stress, marked by increased DUOX1 expression and decreased superoxide dismutase [SOD] activity (manuscript under revision). H₂O₂, in high concentrations in the PCD airway, injures the airway. NO is oxidized rather than being exhaled, forming cytotoxic peroxynitrous acid. Thus, antigen stasis on the PCD airway epithelium leads to airway injury and may predispose PCD patients to asthma. Indeed, recent population genetics suggest that PCD genes may be associated with asthma. We therefore hypothesized that PCD patients would be predisposed to having asthma. Methods: We analyzed our database of 18 million individual electronic medical records (EMRs) in the Indiana Network for Patient Care research database (INPCR). There is no ICD10 code for PCD itself; code Q34.8 is most commonly used clinically. To validate the analysis of this code, we queried patients who had ICD10 codes for both bronchiectasis and situs inversus totalis in INPCR. We also studied a validation cohort using the IBM Explorys® database (over 80 million individuals). Analyses were adjusted for age, sex and race using a 1 PCD : 3 controls matching method in INPCR and multivariable logistic regression in the IBM Explorys® database. Results: The prevalence of asthma ICD10 codes in subjects with code Q34.8 was 67% vs 19% in controls (P < 0.0001) (Regenstrief Institute). Similarly, in IBM Explorys®, the OR [95% CI] for having asthma if a patient also had ICD10 code Q34.8, relative to controls, was 4.04 [3.99; 4.09]. For situs inversus alone, the OR [95% CI] was 4.42 [4.14; 4.71], and for bronchiectasis alone, the OR [95% CI] was 10.68 [10.56; 10.79]. For both bronchiectasis and situs inversus together, the OR [95% CI] was 28.80 [23.17; 35.81]. Conclusions: PCD causes antigen stasis in the human airway (under review), likely predisposing to asthma in addition to oxidative and nitrosative stress and airway injury. Here, we show that, by several different population-based metrics, and using two large databases, patients with PCD appear to have between a three- and 28-fold increased risk of having asthma. These data suggest that additional studies should be undertaken to understand the role of ciliary dysfunction in the pathogenesis and genetics of asthma. Decreased antigen clearance caused by ciliary dysfunction may be a risk factor for asthma development.
Keywords: antigen, PCD, asthma, nitric oxide
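As an illustration of how such adjusted odds ratios are typically obtained, the sketch below fits a multivariable logistic regression and exponentiates the coefficients; the file and column names are hypothetical, and this is not the authors' actual EMR pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract: asthma (0/1), pcd (0/1, e.g. an ICD10 Q34.8 proxy),
# age in years, male (0/1), race (categorical)
df = pd.read_csv("emr_cohort.csv")

model = smf.logit("asthma ~ pcd + age + male + C(race)", data=df).fit()

# Exponentiated coefficients give adjusted odds ratios with 95% CIs
or_ci = np.exp(pd.concat([model.params, model.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci.loc["pcd"])
```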
Procedia PDF Downloads 106
1289 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production leads to reduced CO₂ emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand safely and cost-efficiently over time. Significant benefits in terms of 'grid system applications', 'end-use applications' and 'renewable applications' can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small-medium size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest is constituted by an intercooled (and, where needed, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since, during the charging phase, the electric power absorbed has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously during the filling of the reservoir, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information to appropriately size the compression system and to manage it in the most effective way. Various cases characterized by different system requirements are analysed. Results are given and widely discussed.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
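As a hedged illustration of the sizing logic (the relations below are textbook equations, not taken from the paper), assume an ideal gas, N stages with equal pressure ratios, and intercooling back to the inlet temperature T_in; the absorbed electric power then follows the rising reservoir pressure p_res(t):

```latex
% Equal split of the overall pressure ratio across N stages
\beta_{\mathrm{stage}}(t) = \left(\frac{p_{\mathrm{res}}(t)}{p_{\mathrm{in}}}\right)^{1/N}
% Specific compression work per unit mass, isentropic efficiency \eta_{\mathrm{is}}
w_c(t) = \frac{N\, c_p\, T_{\mathrm{in}}}{\eta_{\mathrm{is}}}
         \left[\beta_{\mathrm{stage}}(t)^{\frac{\gamma-1}{\gamma}} - 1\right]
% Electric power absorbed at mass flow rate \dot{m}(t)
P_{\mathrm{el}}(t) = \frac{\dot{m}(t)\, w_c(t)}{\eta_{\mathrm{mech}}\, \eta_{\mathrm{mot}}}
```

With p_res rising during charging, keeping P_el within grid-imposed bounds forces the mass flow rate to vary, which is why the choice of compressor capacity control device is central to the methodology.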
Procedia PDF Downloads 228
1288 Using Photogrammetric Techniques to Map the Mars Surface
Authors: Ahmed Elaksher, Islam Omar
Abstract:
For many years, the surface of Mars has been a mystery for scientists. Lately, with the help of geospatial data and photogrammetric procedures, researchers have been able to capture some insights about this planet. Two of the most important data sources for exploring Mars are the High Resolution Imaging Science Experiment (HiRISE) and the Mars Orbiter Laser Altimeter (MOLA). HiRISE is one of six science instruments carried by the Mars Reconnaissance Orbiter, launched August 12, 2005, and managed by NASA. The MOLA sensor is a laser altimeter carried by the Mars Global Surveyor (MGS), launched on November 7, 1996. In this project, we used MOLA-based DEMs to orthorectify HiRISE optical images to generate a more accurate and trustworthy surface of Mars. The MOLA data were interpolated using the kriging interpolation technique. Corresponding tie points were digitized from both datasets. These points were employed in co-registering both datasets using GIS analysis tools. In this project, we employed three different 3D-to-2D transformation models: the parallel projection (3D affine) transformation model, the extended parallel projection transformation model, and the Direct Linear Transformation (DLT) model. A set of tie points was digitized from both datasets. These points were split into two sets: Ground Control Points (GCPs), used to estimate the transformation parameters using least squares adjustment techniques, and check points (ChkPs), used to evaluate the computed transformation parameters. Results were evaluated using the RMSEs between the precise horizontal coordinates of the digitized check points and those estimated through the transformation models using the computed transformation parameters. For each set of GCPs, three different configurations of GCPs and check points were tested, and average RMSEs are reported. It was found that for the 2D transformation models, average RMSEs were in the range of five meters. Increasing the number of GCPs from six to ten points improved the accuracy of the results by about two and a half meters; further increasing the number of GCPs did not improve the results significantly. Using the 3D-to-2D transformation parameters provided two to three meters accuracy. The best results were obtained using the DLT transformation model; however, increasing the number of GCPs did not have a substantial effect there either. The results support the use of the DLT model, as it provides the required accuracy for ASPRS large scale mapping standards. However, a well distributed set of GCPs is key to providing such accuracy. The model is simple to apply and does not need substantial computations.
Keywords: Mars, photogrammetry, MOLA, HiRISE
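For readers unfamiliar with the DLT model named above, the following is a minimal sketch (not the authors' code) of estimating its eleven parameters from GCPs by linear least squares and evaluating the RMSE on check points; the array contents are hypothetical.

```python
import numpy as np

def fit_dlt(xyz, uv):
    """xyz: (n,3) MOLA ground coordinates; uv: (n,2) HiRISE image coordinates."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        # Linearized DLT observation equations, two per ground control point
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        b += [u, v]
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return L  # the eleven parameters L1..L11

def project(L, xyz):
    """Apply the fitted DLT to ground coordinates."""
    X, Y, Z = xyz.T
    d = L[8]*X + L[9]*Y + L[10]*Z + 1.0
    return np.stack([(L[0]*X + L[1]*Y + L[2]*Z + L[3]) / d,
                     (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / d], axis=1)

def rmse(L, xyz_chk, uv_chk):
    """RMSE over independent check points, as in the evaluation described."""
    r = project(L, xyz_chk) - uv_chk
    return np.sqrt((r**2).sum(axis=1).mean())
```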
Procedia PDF Downloads 57
1287 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resources management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
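A minimal sketch of this workflow in the Earth Engine Python API is shown below; the region geometry, training asset, band selection, and parameter choices are assumptions for illustration, not the study's exact configuration.

```python
import ee
ee.Initialize()

# Assumed inputs: a catchment polygon and labeled training points with a
# 'class' property coded 0-4 (forest, savannas, cropland, settlement, water)
catchment = ee.Geometry.Rectangle([2.0, 9.0, 2.8, 9.8])   # placeholder extent
training_pts = ee.FeatureCollection("users/example/beterou_training")

# Cloud-free Sentinel-2 composite for the study period
s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterDate("2020-06-01", "2021-03-31")
      .filterBounds(catchment)
      .median())

# Add terrain features (slope, elevation), which improved accuracy above
dem = ee.Image("USGS/SRTMGL1_003")
stack = (s2.select(["B2", "B3", "B4", "B8", "B11"])
         .addBands(ee.Terrain.slope(dem))
         .addBands(dem.rename("elevation")))

samples = stack.sampleRegions(collection=training_pts,
                              properties=["class"], scale=10)
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=100)
              .train(features=samples, classProperty="class",
                     inputProperties=stack.bandNames()))
landcover = stack.clip(catchment).classify(classifier)
```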
Procedia PDF Downloads 63
1286 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual's bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower's creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumer loan market. Two different types of digital footprint data are utilized and matched with banks' loan default records. Each separately captures distinct dimensions of a person's characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate either acceptable or excellent prediction results and that different types of data tend to complement each other to achieve better performance. Typically, the traditional types of data that banks use, like income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, like financial status changes caused by a business crisis; whereas digital footprints can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower's credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage and because they can by and large resolve the 'thin-file' issue, owing to the fact that digital footprints come in much larger volume and at higher frequency.
Keywords: credit score, digital footprint, Fintech, machine learning
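The comparative benchmark described above can be sketched as follows; the dataset, feature names, and model choices are hypothetical stand-ins for the digital-footprint variables and are not the paper's actual setup.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("footprint_loans.csv")   # hypothetical matched dataset
X = df.drop(columns=["default"])          # shopping/social-media features
y = df["default"]                         # 1 = default, from bank records

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Compare a linear baseline with a tree ensemble by out-of-sample AUC
for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("gbdt", GradientBoostingClassifier())]:
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, round(auc, 3))
```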
Procedia PDF Downloads 160
1285 Sensitivity to Misusing Verb Inflections in Both Finite and Non-Finite Clauses in Native and Non-Native Russian: A Self-Paced Reading Investigation
Authors: Yang Cao
Abstract:
Analyzing the oral production of Chinese-speaking learners of English as a second language (L2), we find a large variety of verb inflections: why does it seem so hard for them to use consistent correct past morphologies in obligatory past contexts? The Failed Functional Features Hypothesis (FFFH) attributes this rather non-target-like performance to the absence of the [±past] feature in their L1 Chinese, arguing that, for post-puberty learners, new features in the L2 are no longer accessible. By contrast, the Missing Surface Inflection Hypothesis (MSIH) tends to assume that all features are actually acquirable for late L2 learners, but that, due to the difficulties of mapping from features to forms, it is hard for them to realize consistent past morphologies on the surface. However, most studies are limited to verb morphologies in finite clauses, and few have attempted to examine these learners' performance in non-finite clauses. Additionally, it has been argued that Chinese learners may be able to tell the finite/non-finite distinction (i.e., the [±finite] feature might be selected in Chinese, even though the existence of [±past] is denied). Therefore, adopting a self-paced reading task (SPR), the current study aims to analyze the processing patterns of Chinese-speaking learners of L2 Russian, in order to find out whether they are sensitive to the misuse of tense morphologies in both finite and non-finite clauses and whether they are sensitive to the finite/non-finite distinction present in Russian. The study targets L2 Russian due to its systematic morphologies in both present and past tenses. A native Russian group, as well as a group of English-speaking learners of Russian, whose L1 has definitely selected both the [±finite] and [±past] features, will also be involved. By comparing and contrasting the performance of the three language groups, the study further examines and discusses the two theories, FFFH and MSIH. The preliminary hypotheses are: a) Russian native speakers are expected to spend a longer time reading verb forms which violate the grammar; b) Chinese participants are expected to be sensitive, at least, to the misuse of inflected verbs in non-finite clauses, although no sensitivity to the misuse of infinitives in finite clauses may be found; an interaction of finiteness and grammaticality is therefore expected, which would indicate that these learners are able to tell the finite/non-finite distinction; and c) having selected [±finite] and [±past], English-speaking learners of Russian are expected to behave in a target-like manner, supporting L1 transfer.
Keywords: features, finite clauses, morphosyntax, non-finite clauses, past morphologies, present morphologies, second language acquisition, self-paced reading task, verb inflections
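The predicted finiteness-by-grammaticality interaction in hypothesis (b) could be tested roughly as sketched below, with a mixed-effects model on log reading times at the critical region; the column names and model structure are assumptions, not the study's stated analysis plan.

```python
import pandas as pd
import statsmodels.formula.api as smf

rt = pd.read_csv("spr_critical_region.csv")
# Assumed columns: log_rt, finite (0/1), grammatical (0/1),
# group (native / L1-Chinese / L1-English), subject, item

# Random intercepts by subject; a reliable finite:grammatical interaction
# within the Chinese group would signal sensitivity to the finite/non-finite
# distinction despite weak sensitivity in finite clauses
m = smf.mixedlm("log_rt ~ finite * grammatical * group",
                data=rt, groups=rt["subject"]).fit()
print(m.summary())
```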
Procedia PDF Downloads 108
1284 Development of Vertically Integrated 2D Lake Victoria Flow Models in COMSOL Multiphysics
Authors: Seema Paul, Jesper Oppelstrup, Roger Thunvik, Vladimir Cvetkovic
Abstract:
Lake Victoria is the second largest fresh water body in the world, located in East Africa with a catchment area of 250,000 km², of which 68,800 km² is the actual lake surface. The hydrodynamic processes of the shallow (40-80 m deep) water system are unique due to its location at the equator, which makes Coriolis effects weak. The paper describes a St. Venant shallow water model of Lake Victoria developed in the COMSOL Multiphysics software, a general-purpose finite element tool for solving partial differential equations. Depth soundings taken in smaller parts of the lake were combined with recent, more extensive data to resolve the discrepancies in the lake shore coordinates. The topography model must have continuous gradients, so Delaunay triangulation with Gaussian smoothing was used to produce the lake depth model. The model shows large-scale flow patterns, passive tracer concentration and water level variations in response to river and tracer inflow, rain and evaporation, and wind stress. Actual data on precipitation, evaporation, and in- and outflows were applied in a fifty-year simulation model. It should be noted that the water balance is dominated by rain and evaporation, and the model simulations were cross-validated between Matlab and COMSOL. The model conserves water volume, the celerity gradients are very small, and the volume flow is very slow and irrotational except at river mouths. Numerical experiments show that the single outflow can be modelled by a simple linear control law responding only to the mean water level, except for a few instances. Experiments with tracer input in rivers show very slow dispersion of the tracer, a result of the slow mean velocities, in turn caused by the near-balance of rain with evaporation. The numerical and hydrodynamical model can evaluate the effects of the wind stress exerted by the wind on the lake surface, which impacts the lake water level. The model can also evaluate the effects of the expected climate change, as manifested in changes to rainfall over the catchment area of Lake Victoria in the future.
Keywords: bathymetry, lake flow and steady state analysis, water level validation and concentration, wind stress
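A minimal sketch of the depth-model construction described above (scattered soundings interpolated on a Delaunay triangulation, then Gaussian-smoothed to obtain continuous gradients) might look as follows in Python; the file name and grid spacing are assumptions, not the authors' actual preprocessing.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.ndimage import gaussian_filter

pts = np.loadtxt("soundings.txt")          # assumed columns: x, y, depth
interp = LinearNDInterpolator(pts[:, :2], pts[:, 2])  # Delaunay-based

xg = np.arange(pts[:, 0].min(), pts[:, 0].max(), 500.0)   # assumed 500 m grid
yg = np.arange(pts[:, 1].min(), pts[:, 1].max(), 500.0)
XX, YY = np.meshgrid(xg, yg)
depth = interp(XX, YY)                     # NaN outside the convex hull

# Gaussian smoothing yields the continuous gradients the FE model needs;
# nan_to_num crudely zeroes points outside the hull before smoothing
depth_smooth = gaussian_filter(np.nan_to_num(depth), sigma=2.0)
```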
Procedia PDF Downloads 227
1283 Effect of Automatic Self Transcending Meditation on Perceived Stress and Sleep Quality in Adults
Authors: Divya Kanchibhotla, Shashank Kulkarni, Shweta Singh
Abstract:
Chronic stress and poor sleep quality reduce mental health and increase the risk of developing depression and anxiety as well. There is increasing evidence for the utility of meditation as an adjunct clinical intervention for conditions like depression and anxiety. The present study is an attempt to explore the impact of Sahaj Samadhi Meditation (SSM), a category of Automatic Self Transcending Meditation (ASTM), on perceived stress and sleep quality in adults. The study design was a single-group pre-post assessment. The Perceived Stress Scale (PSS) and the Pittsburgh Sleep Quality Index (PSQI) were used in this study. Fifty-two participants filled in the PSS, and 60 participants filled in the PSQI at the beginning of the program (day 0), after two weeks (day 16) and at two months (day 60). Significant pre-post differences in the perceived stress level on Day 0 - Day 16 (p < 0.01; Cohen's d = 0.46) and Day 0 - Day 60 (p < 0.01; Cohen's d = 0.76) clearly demonstrated that, by practicing SSM, participants experienced a reduction in perceived stress. The effect size of the intervention observed on the 16th day of assessment was small to medium, but on the 60th day, a medium to large effect size was observed. In addition to this, significant pre-post differences in sleep quality on Day 0 - Day 16 and Day 0 - Day 60 (p < 0.05) clearly demonstrated that, by practicing SSM, participants experienced an improvement in sleep quality. Compared with the Day 0 assessment, participants demonstrated significant improvement in the quality of sleep on Day 16 and Day 60. The effect size of the intervention observed on the 16th day of assessment was small, but on the 60th day, a small to medium effect size was observed. In the current study, we found that after practicing SSM for two months, participants reported a reduction in perceived stress: they felt more confident about their ability to handle personal problems, were able to cope with all the things they had to do, felt that they were on top of things, and felt less anger. Participants also reported that their overall sleep quality improved: they took less time to fall asleep, had fewer disturbances in sleep and less daytime dysfunction due to sleep deprivation. The present study provides clear evidence of the efficacy and safety of non-pharmacological interventions such as SSM in reducing stress and improving sleep quality. Thus, ASTM may be considered a useful intervention to reduce psychological distress in healthy, non-clinical populations, and it can be an alternative remedy for treating poor sleep among individuals and decreasing the use of harmful sedatives.
Keywords: automatic self transcending meditation, Sahaj Samadhi meditation, sleep, stress
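For concreteness, the pre-post comparison can be sketched as below, with Cohen's d computed here as the mean change divided by the standard deviation of the change scores (one common convention; the study's exact formula is not stated, and the file names are hypothetical).

```python
import numpy as np
from scipy import stats

pss_day0 = np.loadtxt("pss_day0.txt")    # hypothetical per-subject scores
pss_day60 = np.loadtxt("pss_day60.txt")  # same subjects, same order

# Paired t-test on pre vs post scores
t, p = stats.ttest_rel(pss_day0, pss_day60)

# Effect size: mean change divided by SD of the change scores
diff = pss_day0 - pss_day60
d = diff.mean() / diff.std(ddof=1)
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```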
Procedia PDF Downloads 134
1282 Population Dynamics of Cyprinid Fish Species (Mahseer: Tor Species) and Its Conservation in Yamuna River of Garhwal Region, India
Authors: Davendra Singh Malik
Abstract:
India is one of the mega-biodiversity countries in the world, contributing about 11.72% of global fish diversity. The Yamuna river is the longest tributary of the Ganga river ecosystem, providing a natural habitat for the existing fish diversity of the Himalayan region of the Indian subcontinent. Several hydropower dams and barrages have been constructed at different locations on the major rivers of the Garhwal region. These dams pose a major ecological threat to the existing fresh water ecosystems, altering water flows, interrupting ecological connectivity, and fragmenting the habitats of native riverine fish species. Mahseer fishes (Indian carp) of the genus Tor are large cyprinids endemic to continental Asia, popularly known as 'game or sport fishes', which have continued to be decimated by the fragmentation of natural habitats caused by the damming of water flow in the riverine system, and they are categorized as threatened fishes of India. Twenty-four fresh water fish species were recorded from the Yamuna river. The present fish catch data revealed that mahseer fishes (Tor tor and Tor putitora) contributed about 32.5%, 25.6% and 18.2% in the upper, middle and lower riverine stretches of the Yamuna river, respectively. The length range of mahseer (360-450 mm) was recorded as the dominant size in the catch composition. The CPUE (catch per unit effort) of mahseer fishes also indicated a sharp decline in fish biomass and changes in the growth pattern, sex ratio and maturity stages of the fishes. Only 12.5-14.8% of female mahseer brooders showed mature phases in the breeding months. The fecundity of mature female fish brooders ranged from 2,500 to 4,500 ova during the breeding months. The present status of the mahseer fishery is attributed to its overexploitation in the Yamuna river. The mahseer population is shrinking continuously in the downstream reaches of the Yamuna river due to the cumulative effects of various ecological stresses. A mahseer conservation programme has been implemented as 'in situ fish conservation' to enhance the viable population size of mahseer species and restore the genetic loss of mahseer fish germplasm in the Yamuna river of the Garhwal Himalayan region.
Keywords: conservation practice, population dynamics, tor fish species, Yamuna River
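As a hedged illustration of the CPUE index mentioned above (catch divided by fishing effort, tracked per stretch and year), a minimal sketch follows; the file and column names are invented for illustration and are not the authors' records.

```python
import pandas as pd

catch = pd.read_csv("yamuna_catch.csv")   # assumed columns: year, stretch,
                                          # mahseer_kg, effort_net_hours
catch["cpue"] = catch["mahseer_kg"] / catch["effort_net_hours"]

# Mean CPUE per stretch and year; a declining row indicates shrinking
# biomass in that stretch of the river
trend = catch.groupby(["stretch", "year"])["cpue"].mean().unstack("year")
print(trend)
```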
Procedia PDF Downloads 255
1281 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have a higher production tolerance for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core is one of the key parameters in designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease of insertion losses (IL) and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode field distribution for short MM links intended in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on the estimation of particular defects and errors that can realistically occur, like eccentricity, connector shifting or dust; these were simulated and measured, and their influence on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
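For reference, encircled flux is commonly defined (for example, in IEC 61280-4-1 style launch-condition templates) as the fraction of the total near-field power transmitted within radius r of the fiber core centre:

```latex
\mathrm{EF}(r) = \frac{\int_{0}^{r} I(\rho)\,\rho\,\mathrm{d}\rho}
                     {\int_{0}^{R_{\max}} I(\rho)\,\rho\,\mathrm{d}\rho}
```

Here I(ρ) is the measured near-field intensity profile and R_max covers the full core. Compliant launches must keep EF within specified bounds at given radii, which is why source-dependent mode-field distributions matter for both IL and EMB.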
Procedia PDF Downloads 375
1280 Diverse High-Performing Teams: An Interview Study on the Balance of Demands and Resources
Authors: Alana E. Jansen
Abstract:
With such a large proportion of organisations relying on team-based structures, it is surprising that so few teams would be classified as high-performance teams. While the impact of team composition on performance has been researched frequently, there have been conflicting findings as to its effects, particularly when examined alongside other team factors. To broaden the theoretical perspectives on this topic, and potentially explain some of the inconsistencies in research findings left open by the various models of team effectiveness and high-performing teams, the present study uses the Job Demands-Resources model, typically applied to burnout and engagement, as a framework to examine how team composition factors (particularly diversity in team member characteristics) can facilitate or hamper team effectiveness. This study used a virtual interview design in which participants were asked to both rate and describe their experiences, in one high-performing and one low-performing team, across several factors relating to demands, resources, team composition, and team effectiveness. A semi-structured interview protocol was developed, which combined Likert-style and exploratory questions. A semi-targeted sampling approach was used to invite participants ranging in age, gender, and ethnic appearance (common surface-level diversity characteristics) and those from different specialties, roles, and educational and industry backgrounds (deep-level diversity characteristics). While the final stages of data analysis are still underway, thematic analysis using a grounded theory approach was conducted concurrently with data collection to identify the point of thematic saturation, resulting in 35 interviews being completed. The analyses examine differences in perceptions of demands and resources as they relate to perceived team diversity. Preliminary results suggest that high-performing and low-performing teams differ in their perceptions of the type and range of both demands and resources. The current research is likely to offer contributions to both theory and practice. The preliminary findings suggest there is a range of demands and resources which vary between high and low-performing teams, factors which may play an important role in team effectiveness research going forward. The findings may also assist in explaining some of the more complex interactions between factors experienced in the team environment, making further progress towards understanding the intricacies of why only some teams achieve high-performance status.
Keywords: diversity, high-performing teams, job demands and resources, team effectiveness
Procedia PDF Downloads 187
1279 Identification of Igneous Intrusions in South Zallah Trough-Sirt Basin
Authors: Mohamed A. Saleem
Abstract:
Using mostly seismic data, this study intends to show some examples of igneous intrusions found in parts of the Sirt Basin and to explore the period of their emplacement as well as the interrelationships between these sills. The study area is located in the south of the Zallah Trough, south-west Sirt Basin, Libya, between longitudes 18.35° E and 19.35° E and latitudes 27.8° N and 28.0° N. Based on a variety of criteria that are usually used as marks of igneous intrusions, twelve igneous intrusions (sills) have been detected and analysed using 3D seismic data. One or more of the following were used as identification criteria: high amplitude reflectors paired with abrupt reflector terminations, vertical offsets or what is described as a dike-like connection, the violation, the saucer form, and the roughness. Because they lie between the hosting layers, the majority of these intrusions are classified as sills. Another distinguishing feature is the intersection geometry linking some of these sills. Each sill was given a name simply to distinguish the sills from each other, such as S-1, S-2, ..., S-12. To avoid repetition of description, the common characteristics and some statistics of these sills are shown in summary tables, while the specific characteristics that are not common and have been noticed for each sill are described individually. The sills S-1, S-2, and S-3 are approximately parallel to one another, with their shape being governed by the syncline structure of their host layers. The faults that dominate the strata (pre-Upper Cretaceous strata) have a significant impact on the sills; they caused their discontinuity, while the upper layers have the shape of anticlines. The dramatic escalation of sill S-4 can be seen in the N-S profiles. The sills S-1 and S-10 are the group's deepest and shallowest sills, respectively, with S-1 seated near the top of the basement and S-10 extending into the Upper Cretaceous sequence. The majority of the interpreted sills are influenced and impacted by a large number of normal faults that strike in various directions and propagate vertically from the surface to the top of the basement. This indicates that the sediment sequences had already been deposited before the sills' intrusion and that the faults occurred more recently. The pre-Upper Cretaceous unit is the current geological host for the sills S-1, S-2, ..., S-9, while the sills S-10, S-11, and S-12 are hosted by the Cretaceous unit. Above the sills S-1, S-2, and S-3, which are the deepest sills, the pre-Upper Cretaceous surface shows slight forced folding; this forced folding is also noticed above the right and left tips of sills S-8 and S-6, respectively, while the absence of these marks on the overlying sequences of layers supports the idea that the aforementioned sills were emplaced during the early Upper Cretaceous period.
Keywords: Sirt Basin, Zallah Trough, igneous intrusions, seismic data
Procedia PDF Downloads 113
1278 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based automated mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often at a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator at the SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping, saving an X-ray spectrum for each pixel or segment. This approach allows the user to browse elemental distribution maps of all elements detectable by energy-dispersive spectroscopy, and re-evaluation of existing data for the presence of previously unconsidered elements is possible without repeating the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN: the dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because servers offer larger capacity than local drives and allow multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open data format that allows other applications to extract, process, and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
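A minimal pandas sketch of the kind of downstream script such a tabular export enables; the file name, column names ("mineral", "area_um2"), and density table are hypothetical stand-ins, not TIMA's actual export schema.

```python
import pandas as pd

# Hypothetical export file and column names; the real fields depend on
# the export template configured in the software.
df = pd.read_csv("tima_particles.csv")

# Example user-defined property: a mass proxy per mineral from grain area
# and an assumed density lookup (a crude stereological shortcut).
densities = {"quartz": 2.65, "pyrite": 5.01, "chalcopyrite": 4.19}
df["mass_proxy"] = df["area_um2"] * df["mineral"].map(densities)

mass_fraction = df.groupby("mineral")["mass_proxy"].sum()
print(mass_fraction / mass_fraction.sum())
```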
Procedia PDF Downloads 109
1277 Biodegradation of Phenazine-1-Carboxylic Acid by Rhodanobacter sp. PCA2 Proceeds via Decarboxylation and Cleavage of Nitrogen-Containing Ring
Authors: Miaomiao Zhang, Sabrina Beckmann, Haluk Ertan, Rocky Chau, Mike Manefield
Abstract:
Phenazines are a large class of nitrogen-containing aromatic heterocyclic compounds produced almost exclusively by bacteria from diverse genera, including Pseudomonas and Streptomyces. Phenazine-1-carboxylic acid (PCA), one of the 'core' phenazines, is converted from chorismic acid before being modified into other phenazine derivatives in different cells. Phenazines have attracted enormous interest because of their multiple roles in biocontrol, bacterial interaction, biofilm formation, and the fitness of their producers. In spite of this ecological importance, however, degradation, as part of the fate of phenazines, has so far received extremely limited attention. Here, to isolate PCA-degrading bacteria, 200 mg L-1 PCA was supplied as the sole carbon, nitrogen, and energy source in a minimal mineral medium. Quantitative PCR and reverse-transcription PCR were employed to study the abundance and activity, respectively, of the functional gene MFORT 16269 in PCA degradation. Intermediates and products of PCA degradation were identified with LC-MS/MS. After enrichment and isolation, a PCA-degrading strain was selected from soil and designated Rhodanobacter sp. PCA2 based on full 16S rRNA sequencing. As determined by HPLC, strain PCA2 consumed 200 mg L-1 (836 µM) PCA at a rate of 17.4 µM h-1, accompanied by a significant cell yield, from 1.92 × 105 to 3.11 × 106 cells per mL. Strain PCA2 was capable of degrading other phenazines as well, including phenazine (4.27 µM h-1), pyocyanin (2.72 µM h-1), neutral red (1.30 µM h-1), and 1-hydroxyphenazine (0.55 µM h-1). Moreover, during the incubation, transcript copies of the MFORT 16269 gene increased significantly from 2.13 × 106 to 8.82 × 107 copies mL-1, 2.77 times faster than the increase in the corresponding gene copy number (2.20 × 106 to 3.32 × 107 copies mL-1), indicating that the MFORT 16269 gene was activated and played a role in PCA degradation. As analyzed by LC-MS/MS, decarboxylation from the ring structure was determined to be the first step of PCA degradation, followed by cleavage of the nitrogen-containing ring by a dioxygenase, which catalyzed the conversion of phenazine to nitrosobenzene. Subsequently, phenylhydroxylamine was detected after two days of incubation and was then transformed into aniline and catechol. Additionally, genomic and proteomic analyses were carried out for strain PCA2. Overall, the findings presented here show that the newly isolated strain Rhodanobacter sp. PCA2 is capable of degrading phenazines through decarboxylation and cleavage of the nitrogen-containing ring, during which the MFORT 16269 gene is activated and plays an important role.
Keywords: decarboxylation, MFORT16269 gene, phenazine-1-carboxylic acid degradation, Rhodanobacter sp. PCA2
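To make the rate arithmetic concrete, the sketch below fits a zero-order degradation rate to a concentration time series; the time points and concentrations are invented illustrative values chosen to mimic the reported ~17.4 µM h-1, not the study's measurements.

```python
import numpy as np

# Illustrative time (h) vs residual PCA (uM) series, not the study's data.
t = np.array([0, 8, 16, 24, 32, 40, 48], dtype=float)
pca = np.array([836, 700, 560, 420, 290, 150, 10], dtype=float)

# Zero-order kinetics: concentration falls linearly, so the fitted slope
# is the volumetric degradation rate.
slope, intercept = np.polyfit(t, pca, 1)
print(f"degradation rate ~ {-slope:.1f} uM/h")
```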
Procedia PDF Downloads 223
1276 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which a sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
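A minimal sketch of step (i), turning a read into k-mer tokens ready for an embedding layer; k = 4, the exhaustive vocabulary, and the sample read are illustrative assumptions, not the metagenome2vec implementation (which learns dense embeddings rather than assigning fixed integer ids).

```python
from itertools import product

def kmers(read, k=4):
    """Split a DNA read into overlapping k-mers, the 'words' whose
    numerical embeddings step (i) learns."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Exhaustive vocabulary of all 4^k k-mers, mapping each to an integer id
# that an embedding layer could consume.
vocab = {"".join(p): idx for idx, p in enumerate(product("ACGT", repeat=4))}

read = "ACGTGACCTGA"
token_ids = [vocab[km] for km in kmers(read)]
print(token_ids)
```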
Procedia PDF Downloads 125
1275 BSL-2/BSL-3 Laboratory for Diagnosis of Pathogens on the Colombia-Ecuador Border Region: A Post-COVID Commitment to Public Health
Authors: Anderson Rocha-Buelvas, Jaqueline Mena Huertas, Edith Burbano Rosero, Arsenio Hidalgo Troya, Mauricio Casas Cruz
Abstract:
COVID-19 is a disruptive pandemic for the public health and economic systems of entire countries, including Colombia. Nariño Department, in the southwest of the country, draws attention for being on the border with Ecuador and for constantly facing demographic transitions that affect infection spread between the two countries. In Nariño, early routine diagnosis of SARS-CoV-2, which can be handled at BSL-2, has affected the transmission dynamics of COVID-19. However, new emerging and re-emerging viruses with biological flexibility, classified as Risk Group 3 agents, can take advantage of epidemiological opportunities, generating the need to increase clinical diagnosis, mainly in border regions between countries. The overall objective of this project was to assure the quality of the analytical process in the diagnosis of high-biological-risk pathogens in Nariño by building a laboratory that includes biosafety level (BSL)-2 and BSL-3 containment zones. The delimitation of zones was carried out according to the Verification Tool of the National Health Institute of Colombia and following the standard requirements for the competence of testing and calibration laboratories of the International Organization for Standardization. This is achieved by harmonizing methods and equipment for effective and durable diagnosis of the large-scale spread of highly pathogenic microorganisms, employing negative-pressure containment and UV systems under a finely controlled electrical system, together with PCR systems as new diagnostic tools, thereby increasing laboratory capacity. Protection in BSL-3 zones will separate the handling of potentially infectious aerosols within the laboratory from the community and the environment. It will also allow the handling and inactivation of samples with suspected pathogens and the extraction of molecular material from them, enabling research on high-risk pathogens such as SARS-CoV-2, influenza, syncytial virus, and malaria, among others. The diagnosis of these pathogens will be articulated across the spectrum of basic, applied, and translational research, with a capacity of about 60 samples daily. It is expected that this project will be articulated with the health policies of neighboring countries to increase research capacity.
Keywords: medical laboratory science, SARS-CoV-2, public health surveillance, Colombia
Procedia PDF Downloads 91
1274 Using a Phenomenological Approach to Explore the Experiences of Nursing Students in Coping with Their Emotional Responses in Caring for End-Of-Life Patients
Authors: Yun Chan Lee
Abstract:
Background: End-of-life care is a large part of nursing practice, and student nurses are likely to meet dying patients in many placement areas. It is therefore important to understand the emotional responses and coping strategies of student nurses so that nursing education systems can appreciate how nursing students might be supported in the future. Methodology: This research used a qualitative phenomenological approach. Six student nurses undertaking a degree-level adult nursing course were interviewed, and their responses were analyzed using interpretative phenomenological analysis. Findings: The findings identified three main themes. The first was the common experience of 'unpreparedness'. While a very small number of participants felt that this was unavoidable and that 'no preparation is possible', the majority felt that they were unprepared because of 'insufficient input' from the university and as a result of wider 'social taboos' around death and dying. The second theme showed that emotions were affected by 'the personal connection to the patient', with the important sub-themes of 'the evoking of memories', 'involvement in care', and 'sense of responsibility'. The third theme, the coping strategies used by students, fell into two broad areas: those 'internal' to the student and those 'external'. Among the internal coping strategies, 'detachment', 'faith', 'rationalization', and 'reflective skills' were the important components; among the external coping strategies, 'clinical staff' and 'family and friends' were the important sources of support. Implications: It is clear that student nurses are affected emotionally by caring for dying patients, and many of them feel apprehension even before they begin their placements, though very often this is unspoken. These anxieties become more pronounced during the placements and continue after them. This has implications for when support is offered and possibly for its duration. Another significant point of the study is that participants often highlighted their wish to speak to qualified nurses after their experiences of being involved in end-of-life care, especially when they had been present at the time of death. Many of the students reported that qualified nurses were not available to them, for a number of reasons, and because the qualified nurses were not available, students had to turn to family members and friends to talk to. Consequently, the implication of this study is the need to educate not only student nurses but also qualified mentors on the importance of providing emotional support to students.
Keywords: nursing students, coping strategies, end-of-life care, emotional responses
Procedia PDF Downloads 162
1273 Biogas Production from Kitchen Waste for a Household Sustainability
Authors: Vuiswa Lucia Sethunya, Tonderayi Matambo, Diane Hildebrandt
Abstract:
South African’s informal settlements produce tonnes of kitchen waste (KW) per year which is dumped into the landfill. These landfill sites are normally located in close proximity to the household of the poor communities; this is a problem in which the young children from those communities end up playing in these landfill sites which may result in some health hazards because of methane, carbon dioxide and sulphur gases which are produced. To reduce this large amount of organic materials being deposited into landfills and to provide a cleaner place for those within the community especially the children, an energy conversion process such as anaerobic digestion of the organic waste to produce biogas was implemented. In this study, the digestion of various kitchen waste was investigated in order to understand and develop a system that is suitable for household use to produce biogas for cooking. Three sets of waste of different nutritional compositions were digested as per acquired in the waste streams of a household at mesophilic temperature (35ᵒC). These sets of KW were co-digested with cow dung (CW) at different ratios to observe the microbial behaviour and the system’s stability in a laboratory scale system. The gas chromatography-flame ionization detector analyses have been performed to identify and quantify the presence of organic compounds in the liquid samples from co-digested and mono-digested food waste. Acetic acid, propionic acid, butyric acid and valeric acid are the fatty acids which were studied. Acetic acid (1.98 g/L), propionic acid (0.75 g/L) and butyric acid (2.16g/L) were the most prevailing fatty acids. The results obtained from organic acids analysis suggest that the KW can be an innovative substituent to animal manure for biogas production. The faster degradation period in which the microbes break down the organic compound to produce the fatty acids during the anaerobic process of KW also makes it a better feedstock during high energy demand periods. The C/N ratio analysis showed that from the three waste streams the first stream containing vegetables (55%), fruits (16%), meat (25%) and pap (4%) yielded more methane-based biogas of 317mL/g of volatile solids (VS) at C/N of 21.06. Generally, this shows that a household will require a heterogeneous composition of nutrient-based waste to be fed into the digester to acquire the best biogas yield to sustain a households cooking needs.Keywords: anaerobic digestion, biogas, kitchen waste, household
Procedia PDF Downloads 199
1272 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches
Authors: Mariam Matiashvili
Abstract:
Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires distinctive syntactic-pragmatic structures: arguments that add credibility to a statement. Studying argumentative structures allows us to identify the linguistic features that make a text argumentative, and knowing which elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. This research deals with the linguistic analysis of the argumentative structures of Georgian political speeches, particularly the linguistic structure, characteristics, and functions of the parts of an argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates; consequently, the texts are dialogical, representing a discussion between two or more people (most often between a journalist and a politician). The research uses the following approaches to identify and analyze the argumentative structures: (1) Lexical Classification and Analysis: identifying lexical items relevant to the process of creating argumentative texts and building a lexicon of argumentation (groups of words gathered from a semantic point of view); (2) Grammatical Analysis and Classification: grammatical analysis of the words and phrases identified on the basis of the arguing lexicon; and (3) Argumentation Schemes: describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argument scheme is 'Argument from Analogy', the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures.
Keywords: georgian, argumentation schemas, argumentation structures, argumentation lexicon
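A minimal sketch of how such an arguing lexicon could drive cue spotting; the English cue words, scheme labels, and sample sentence are stand-in assumptions, since the actual Georgian lexicon is the study's own result.

```python
# Stand-in arguing lexicon: scheme label -> cue words. The real lexicon is
# Georgian and semantically grouped; these English cues are placeholders.
ARGUING_LEXICON = {
    "argument_from_analogy": {"similarly", "likewise"},
    "causal_argument": {"because", "therefore", "consequently"},
}

def detect_cues(sentence):
    """Return the schemes whose cue words occur in the sentence."""
    tokens = [w.strip(".,;:!?") for w in sentence.lower().split()]
    return {
        scheme: [t for t in tokens if t in cues]
        for scheme, cues in ARGUING_LEXICON.items()
        if any(t in cues for t in tokens)
    }

print(detect_cues("Similarly, the reform will work because it worked before."))
# -> {'argument_from_analogy': ['similarly'], 'causal_argument': ['because']}
```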
Procedia PDF Downloads 71
1271 Preparation and CO2 Permeation Properties of Carbonate-Ceramic Dual-Phase Membranes
Authors: H. Ishii, S. Araki, H. Yamamoto
Abstract:
In recent years, carbon dioxide (CO2) separation technology has been in demand both to reduce emissions of global-warming gases and to use fossil fuels more efficiently. Since CO2 accounts for the largest share of greenhouse gas emissions, it is considered to have the greatest influence on global warming; we therefore need to establish CO2 separation technologies that are highly efficient and low-cost. In this study, we focused on membrane separation as an alternative to conventional techniques such as distillation or cryogenic separation, and we prepared carbonate-ceramic dual-phase membranes to separate CO2 at high temperature. As porous ceramic substrates, (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+σ, La0.6Sr0.4Ti0.3Fe0.7O3, and Ca0.8Sr0.2Ti0.7Fe0.3O3-α (PLNCG, LSTF, and CSTF) were examined. PLNCG, LSTF, and CSTF have the perovskite structure, which offers high stability and becomes ion-conducting when doped with other metal ions, giving these materials high oxygen-ion diffusivity. PLNCG, LSTF, and CSTF powders were prepared by a solid-phase process using the appropriate carbonates or oxides. To prepare porous substrates, these powders were mixed with carbon black (20 wt%) and a few drops of polyvinyl alcohol (5 wt%) aqueous solution. The powder mixtures were packed into a stainless steel mold (13 mm) and uniaxially pressed into disk shape under a pressure of 20 MPa for 1 minute. The PLNCG, LSTF, and CSTF disks were calcined in air for 6 h at 1473, 1573, and 1473 K, respectively. The carbonate mixture (Li2CO3/Na2CO3/K2CO3: 42.5/32.5/25 in mole percent ratio) was placed inside a crucible and heated to 793 K, and the porous substrates were infiltrated with the molten carbonate mixture at 793 K. The crystalline structures of the fresh membranes, and of the membranes after infiltration with the molten carbonate mixture, were determined by X-ray diffraction (XRD). We confirmed that the crystal structures of PLNCG and CSTF changed slightly after infiltration with the molten carbonate mixture. CO2 permeation experiments with the PLNCG-carbonate, LSTF-carbonate, and CSTF-carbonate membranes were carried out at 773-1173 K. A gas mixture of CO2 (20 mol%) and He was introduced at a flow rate of 50 ml/min to one side of the membrane, and the permeated CO2 was swept by N2 (50 ml/min). We examined the effects of the ceramic material and temperature on CO2 permeation at high temperature.
Keywords: membrane, perovskite structure, dual-phase, carbonate
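As a hedged illustration of how such a sweep-gas experiment is typically reduced to a permeance number, the sketch below uses invented values for the measured permeate fraction and exposed membrane area (only the 50 ml/min flows and 20 mol% feed mirror the abstract), and it neglects the permeate-side CO2 partial pressure in the driving force.

```python
# Illustrative permeance calculation for a sweep-gas CO2 permeation test.
co2_fraction_in_sweep = 0.01           # CO2 mole fraction in N2 sweep (assumed)
sweep_flow_mol_s = 50 / 22414 / 60     # 50 ml/min at STP -> mol/s
membrane_area_m2 = 3.14159 * 0.005**2  # 10 mm exposed diameter (assumed)
feed_co2_partial_pa = 0.20 * 101325    # 20 mol% CO2 at 1 atm total pressure

# Flux = permeated CO2 per unit area; permeance = flux per unit driving force.
co2_flux = co2_fraction_in_sweep * sweep_flow_mol_s / membrane_area_m2
permeance = co2_flux / feed_co2_partial_pa
print(f"CO2 permeance ~ {permeance:.2e} mol m^-2 s^-1 Pa^-1")
```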
Procedia PDF Downloads 367
1270 Analyzing the Risk Based Approach in General Data Protection Regulation: Basic Challenges Connected with Adapting the Regulation
Authors: Natalia Kalinowska
Abstract:
The adoption of the General Data Protection Regulation (GDPR) concluded four years of work by the European Commission in this area in the European Union. In view of the far-reaching changes the GDPR will introduce, the European legislator envisaged a two-year transitional period: member states and companies have to prepare for the new regulation by 25 May 2018. The idea that represents a new attitude to data protection in the European Union is the risk-based approach. So far, as a result of the implementation of Directive 95/46/EC, many European countries (including Poland) have adopted very particular regulations specifying technical and organisational security measures; the Polish implementing rules, for example, even indicate how long a password should be. Under the new approach, from May 2018 controllers and processors will be obliged to apply security measures adequate to the level of risk associated with the specific data processing. Risk in the GDPR should be interpreted as the likelihood of a breach of the rights and freedoms of the data subject; according to Recital 76, the likelihood and severity of the risk to the rights and freedoms of the data subject should be determined by reference to the nature, scope, context, and purposes of the processing. The GDPR does not indicate which security measures should be applied; the recitals give only examples, such as anonymisation or encryption. It is the controller's decision which security measures are considered sufficient, and the controller will be responsible if those measures prove insufficient or if the identification of the risk level is incorrect. The regulation indicates several levels of risk: Recital 76 mentions risk and high risk, but some lawyers think there is one more category, low risk/no risk. Low-risk/no-risk data processing is a situation in which the processing is unlikely to result in a risk to the rights and freedoms of natural persons. The GDPR also mentions types of data processing for which a controller does not have to evaluate the level of risk because they are classified as 'high risk' processing by default, e.g. processing of special categories of data on a large scale, or processing using new technologies. The methodology includes analysis of legal regulations, e.g. the GDPR and the Polish Act on the Protection of Personal Data, as well as ICO guidelines and articles concerning the risk-based approach in the GDPR. The main conclusion is that an appropriate risk assessment is key to keeping data safe and avoiding financial penalties. On the one hand, this approach seems more equitable, not only for controllers and processors but also for data subjects; on the other hand, it increases controllers' uncertainty in the assessment, which could have a direct impact on inadequate data protection and potential responsibility for infringement of the regulation.
Keywords: general data protection regulation, personal data protection, privacy protection, risk based approach
Procedia PDF Downloads 252