Search results for: advanced anode
907 Recent Developments in the Application of Deep Learning to Stock Market Prediction
Authors: Shraddha Jain Sharma, Ratnalata Gupta
Abstract:
Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy than traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with its prominent features, and the significant problems and issue domains that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.
Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume
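As an illustration of the kind of data-driven predictive model the review surveys, a minimal sketch is given below; the lagged-price setup, the synthetic price series, and the use of scikit-learn's MLPRegressor are assumptions made purely for demonstration and are not the models evaluated in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic closing-price series (random walk); real studies would use market data.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))

lags = 5                                             # predict tomorrow from the last 5 closes
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

split = int(0.8 * len(y))                            # chronological train/test split
scaler = StandardScaler().fit(X[:split])
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[:split]), y[:split])

pred = model.predict(scaler.transform(X[split:]))
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"Out-of-sample RMSE: {rmse:.2f}")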
Procedia PDF Downloads 90
906 Institutional Capacity of Health Care Institutes for Diagnosis and Management of Common Genetic Diseases - A Study from a North Coastal District of Andhra Pradesh, India
Authors: Koteswara Rao Pagolu, Raghava Rao Tamanam
Abstract:
In India, genetic disease is a disregarded service element in the community health-protection system. This study aims to gauge the accessibility of services for treating genetic disorders and also to evaluate the practices on prevention and management services in the district health system. A cross-sectional survey of selected health amenities in the government health sector was conducted from 15 primary health centers (PHC’s), 4 community health centers (CHC’s), 1 district government hospital (DGH) and 3 referral hospitals (RH’s). From these, the existing manpower of 130 medical officers (MO’s), 254 supporting staff, 409 nursing staff (NS) and 45 lab technicians (LT’s) was examined. From the private health institutions, 25 corporate hospitals (CH’s), 3 medical colleges (MC’s) and 25 diagnostic laboratories (DL’s) were selected for the survey, and from these, 316 MO’s, 995 NS and 254 LT’s were also reviewed. The findings show that adequate staff was in place at more than 70% of health centers, but none of the staff had obtained any operative training on genetic disease management. Most of the DH’s had rudimentary infrastructural and diagnostic facilities. However, the greater part of the CHC’s and PHC’s had inadequate diagnostic facilities related to genetic disease management. Biochemical, molecular, and cytogenetic services were not available at PHC’s and CHC’s. DH’s, RH’s, and all selected medical colleges were found to offer basic biochemical genetics units during the survey. The district health care infrastructure in India has a shortage of the basic services needed for genetic disorders. With some policy resolutions and facility strengthening, it is possible to provide advanced services for genetic disorders in the district health system.
Keywords: district health system, genetic disorder, infrastructural amenities, management practices
Procedia PDF Downloads 179
905 Unlocking Justice: Exploring the Power and Challenges of DNA Analysis in the Criminal Justice System
Authors: Sandhra M. Pillai
Abstract:
This article examines the relevance, difficulties, and potential applications of DNA analysis in the criminal justice system. A potent tool for connecting suspects to crime sites, clearing the innocent of wrongdoing, and resolving cold cases, DNA analysis has transformed forensic investigations. The scientific foundations of DNA analysis, including DNA extraction, sequencing, and statistical analysis, are covered in the article. To guarantee accurate and trustworthy findings, it also discusses the significance of quality assurance procedures, chain of custody, and DNA sample storage. DNA analysis has significantly advanced science, but it also brings up substantial moral and legal issues. To safeguard individual rights and uphold public confidence, privacy concerns, possible discrimination, and abuse of DNA information must be properly addressed. The paper also emphasises the effects of the criminal justice system on people and communities while highlighting the necessity of equity, openness, and fair access to DNA testing. The essay describes the obstacles and future directions for DNA analysis. It looks at cutting-edge technology like next-generation sequencing, which promises to make DNA analysis quicker and more affordable. To secure the appropriate and informed use of DNA evidence, it also emphasises the significance of multidisciplinary collaboration among scientists, law enforcement organisations, legal experts, and policymakers. In conclusion, DNA analysis has enormous potential for improving the course of criminal justice. We can exploit the potential of DNA technology while respecting the ideals of justice, fairness, and individual rights by navigating the ethical, legal, and societal issues and encouraging discussion and collaboration.
Keywords: DNA analysis, DNA evidence, reliability, validity, legal frame, admissibility, ethical considerations, impact, future direction, challenges
Procedia PDF Downloads 64
904 Economic Expansion and Land Use Change in Thailand: An Environmental Impact Analysis Using Computable General Equilibrium Model
Authors: Supakij Saisopon
Abstract:
The process of economic development incurs spatial transformation. This spatial alteration also causes environmental impacts, leading to higher pollution. In the case of Thailand, there is still a lack of price-endogenous quantitative analysis incorporating relationships among economic growth, land-use change, and environmental impact. Therefore, this paper aimed at developing a Computable General Equilibrium (CGE) model with the capability of simulating such mutual effects. The developed CGE model also incorporates a nested constant elasticity of transformation (CET) structure that describes the spatial redistribution mechanism between agricultural land and urban areas. The simulation results showed that a 1% decrease in the availability of agricultural land lowers the value-added of agriculture by 0.036%. Similarly, a 1% reduction in the availability of urban areas can decrease the value-added of the manufacturing and service sectors by 0.05% and 0.047%, respectively. Moreover, the outcomes indicate that increasing farming and urban areas induce higher volumes of solid waste, wastewater, and air pollution. Specifically, a 1% increase in the urban area can increase pollution as follows: (1) solid waste increases by 0.049%, (2) water pollution, indicated by the biochemical oxygen demand (BOD) value, increases by 0.051%, and (3) air pollution, indicated by the volumes of CO₂, N₂O, NOₓ, CH₄, and SO₂, increases within the range of 0.045%–0.051%. With the simulation for exploring the sustainable development path, a 1% increase in agricultural land-use efficiency leads to shrinking demand for agricultural land. This does not hold for urban land: a 1% increase in urban land-use efficiency still results in increasing demand for land. Therefore, advanced clean production technology is necessary to align the increasing land-use efficiency with a lowered pollution density.
Keywords: CGE model, CET structure, environmental impact, land use
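To make the reported elasticities easier to read, the short sketch below simply applies them as linear scaling factors; the coefficients are taken from the abstract, while the linear interpretation and the Python wrapper are assumptions added only for illustration.

# Elasticities reported in the abstract (percent change in outcome per 1% change in land).
ELASTICITIES = {
    "agriculture value-added per 1% less agricultural land": -0.036,
    "manufacturing value-added per 1% less urban area": -0.050,
    "services value-added per 1% less urban area": -0.047,
    "solid waste per 1% more urban area": 0.049,
    "BOD (water pollution) per 1% more urban area": 0.051,
}

def response(elasticity: float, land_change_pct: float) -> float:
    """Approximate % change in the outcome for a given % change in land availability/use."""
    return elasticity * land_change_pct

for name, e in ELASTICITIES.items():
    print(f"{name}: {response(e, 1.0):+.3f}% per 1% change")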
Procedia PDF Downloads 231
903 Efficacy of Learning: Digital Sources versus Print
Authors: Rahimah Akbar, Abdullah Al-Hashemi, Hanan Taqi, Taiba Sadeq
Abstract:
As technology continues to develop, teaching curriculums in both schools and universities have begun adopting a more computer/digital-based approach to the transmission of knowledge and information, as opposed to the more old-fashioned use of textbooks. This gives rise to the question: Are there any differences between learning from a digital source and learning from a printed source, such as a textbook? More specifically, which medium of information results in better long-term retention? A review of the confounding factors implicated in understanding the relationship between learning from the two different mediums was conducted. Alongside this, a 4-week cohort study involving 76 first-year female English Language students was performed, whereby the participants were divided into 2 groups. Group A studied material from a paper source (referred to as the Print Medium), and Group B studied material from a digital source (Digital Medium). The dependent variables were grading of memory recall, indexed by a 4-point grading system, and total frequency of item repetition. The study was facilitated by advanced computer software called Super Memo. Results showed that, contrary to prevailing evidence, the Digital Medium group showed no statistically significant differences in terms of the shift from Remember (Episodic) to Know (Semantic) when all confounding factors were accounted for. The shift from Random Guess and Familiar to Remember occurred faster in the Digital Medium than it did in the Print Medium.
Keywords: digital medium, print medium, long-term memory recall, episodic memory, semantic memory, super memo, forgetting index, frequency of repetitions, total time spent
Procedia PDF Downloads 289
902 Establishment of Virtual Fracture Clinic in Princess Royal Hospital Telford: Experience and Recommendations during the First 9 Months
Authors: Tahir Khaleeq, Patrick Lancaster, Keji Fakoya, Pedro Ferreira, Usman Ahmed
Abstract:
Introduction: Virtual fracture clinics (VFC) have been shown to be a safe and cost-effective way of managing outpatient referrals to the orthopaedic department. During the coronavirus pandemic, there has been a push to reduce unnecessary patient contact whilst maintaining patient safety. Materials and Methods: A protocol was developed by the clinical team in collaboration with Advanced Physiotherapy Practitioners (APP) on how to manage common musculoskeletal presentations to A&E prior to COVID as part of routine service development. Patients were broadly triaged into 4 categories: discharge with advice, referral to VFC, referral to a face-to-face clinic, or discussion with the on-call team. The first 9 months of data were analysed to assess the types of injury seen and the outcomes. Results: In total, 2,489 patients were referred to VFC from internal and external sources. 734 patients were discharged without follow-up and 182 patients were discharged for physiotherapy review. Only 3 patients required admission. Regarding follow-ups, 431 patients had a virtual follow-up while 1,036 patients required further face-to-face follow-up. 87 patients were triaged into subspecialty clinics. 37 patients were felt to have been referred inappropriately. Discussion: BOA guidelines suggest all patients need to be reviewed within 72 hours of their orthopaedic injury. Implementation of a VFC allows this target to be achieved and at the same time reduces patient contact. Almost half the patients were discharged following VFC review; the remaining patients were appropriately followed up. This is especially relevant in the current pandemic, where reducing unnecessary trips to hospital will benefit the patient as well as make the most of the resources available.
Keywords: virtual fracture clinic, lockdown, trauma and orthopaedics, COVID-19
Procedia PDF Downloads 201
901 Assessment of a Coupled Geothermal-Solar Thermal Based Hydrogen Production System
Authors: Maryam Hamlehdar, Guillermo A. Narsilio
Abstract:
To enhance the feasibility of utilising geothermal hot sedimentary aquifers (HSAs) for clean hydrogen production, one approach is the implementation of solar-integrated geothermal energy systems. This detailed modelling study conducts a thermo-economic assessment of an advanced Organic Rankine Cycle (ORC)-based hydrogen production system that uses low-temperature geothermal reservoirs, with a specific focus on hot sedimentary aquifers (HSAs), over a 30-year period. In the proposed hybrid system, solar-thermal energy is used to raise the temperature of the water extracted from the geothermal production well. This temperature increase leads to a higher steam output, powering the turbine and subsequently enhancing the electricity output for running the electrolyser. Thermodynamic modeling of a parabolic trough solar (PTS) collector is developed and integrated with modeling of a geothermal-based configuration. This configuration includes a closed regenerator cycle (CRC), a proton exchange membrane (PEM) electrolyser, and a thermoelectric generator (TEG). Following this, the study investigates the impact of solar energy use on the temperature enhancement of the geothermal reservoir. It assesses the resulting consequences for the lifecycle performance of the hydrogen production system in comparison with a standalone geothermal system. The results indicate that, with the appropriate solar collector area, a combined solar-geothermal hydrogen production system outperforms a standalone geothermal system in both cost and rate of production. These findings underscore that a solar-assisted geothermal hybrid system holds the potential to generate lower-cost hydrogen with enhanced efficiency, thereby boosting the appeal of numerous low- to medium-temperature geothermal sources for hydrogen production.
Keywords: clean hydrogen production, integrated solar-geothermal, low-temperature geothermal energy, numerical modelling
Procedia PDF Downloads 69
900 A Five-Year Follow-up Survey Using Regression Analysis Finds Only Maternal Age to Be a Significant Medical Predictor for Infertility Treatment
Authors: Lea Stein, Sabine Rösner, Alessandra Lo Giudice, Beate Ditzen, Tewes Wischmann
Abstract:
For many couples, bearing children is a consistent life goal; however, it cannot always be fulfilled. Undergoing infertility treatment does not guarantee pregnancies and live births. Couples have to deal with miscarriages and sometimes even discontinue infertility treatment. Significant medical predictors for the outcome of infertility treatment have yet to be fully identified. To further our understanding, a cross-sectional five-year follow-up survey was undertaken, in which 95 women and 82 men who had been treated at the Women’s Hospital of Heidelberg University participated. Binary logistic regressions and parametric and non-parametric methods were used for our sample to determine the relevance of biological factors (infertility diagnoses, maternal and paternal age) and lifestyle factors (smoking, drinking, over- and underweight) for the outcome of infertility treatment (clinical pregnancy, live birth, miscarriage, dropout rate). During infertility treatment, 72.6% of couples became pregnant and 69.5% were able to give birth. 27.5% of couples suffered miscarriages, and 20.5% decided to discontinue an unsuccessful fertility treatment. The binary logistic regression models for clinical pregnancies, live births and dropouts were statistically significant for maternal age, whereas paternal age, maternal and paternal BMI, smoking, and infertility diagnoses and infections showed no significant predictive effect on any of the outcome variables. The results confirm an effect of maternal age on infertility treatment, whereas the relevance of other medical predictors remains unclear. Further investigations should be considered to increase our knowledge of medical predictors.
Keywords: advanced maternal age, assisted reproductive technology, female factor, male factor, medical predictors, infertility treatment, reproductive medicine
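A minimal sketch of the kind of binary logistic regression used in the study is shown below; the variable names, the synthetic data, and the statsmodels call are assumptions for illustration only, not the authors' dataset or code.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 177                                    # roughly the number of participants surveyed
maternal_age = rng.normal(34, 5, n)
paternal_bmi = rng.normal(26, 4, n)
smoking = rng.integers(0, 2, n)

# Synthetic outcome: live birth made less likely with increasing maternal age.
logit = 8.0 - 0.22 * maternal_age - 0.1 * smoking
live_birth = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([maternal_age, paternal_bmi, smoking]))
result = sm.Logit(live_birth, X).fit(disp=0)
print(result.summary(xname=["const", "maternal_age", "paternal_bmi", "smoking"]))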
Procedia PDF Downloads 110
899 Smart Technology Work Practices to Minimize Job Pressure
Authors: Babar Rasheed
Abstract:
Organizations are in a continuous effort to increase their yield and to retain their associates and employees. Technology is considered an integral part of attaining apposite work practices, a good work environment, and employee engagement. Unconsciously, these advanced practices, like work from home and personalized intra-networks, are disturbing employees' work-life balance, which ultimately increases psychological pressure on employees. The smart work practice is to develop business models and organizational practices with enhanced employee engagement and minimal waste of organizational resources, with persistent revenue and a positive contribution to global societies. The need for smart work practices comes from increasing employee turnover rates, global economic recession, unnecessary job pressure, an increasing contingent workforce, and advancements in technology. Current practices are not elastic enough to tackle the changing global work environment and organizational competition. Current practices are mechanically causing many reciprocal problems between employees and organizations. There is a conscious understanding among business sectors that smart work practices will be needed to deal with new-century challenges while addressing the concerns of relevant issues. This paper aims to endorse customized and smart work practice tools, along with a knowledge framework, to manage the growing concerns of employee engagement, use of technology, organizational concerns and challenges for the business. This includes a Smart Management Information System to address the necessary concerns of employees, combined with a framework to extract the best possible ways to allocate company resources and re-align only the required efforts to adopt the best possible strategy for controlling potential risks.
Keywords: employee engagement, management information system, psychological pressure, current and future HR practices
Procedia PDF Downloads 185
898 Transformer Life Enhancement Using Dynamic Switching of Second Harmonic Feature in IEDs
Authors: K. N. Dinesh Babu, P. K. Gargava
Abstract:
Energization of a transformer results in a sudden flow of current, which is an effect of core magnetization. This current is dominated by the second harmonic, which in turn is used to segregate fault current from inrush current, thus guaranteeing proper operation of the relay. This additional security in the relay sometimes obstructs or delays differential protection in a specific scenario, when 2nd harmonic content is present during a genuine fault. This kind of scenario can result in isolation of the transformer by Buchholz and pressure release valve (PRV) protection, which act only once the fault has created more damage in the transformer. Such delays have a huge impact on insulation failure, and the chances of repairing or rectifying the problem at site become very dismal. Sometimes this delay can cause a fire in the transformer, and such a situation wreaks havoc on a sub-station. Such occurrences have also been observed in the field, where differential relay operation was delayed by 10-15 ms by second harmonic blocking under some specific conditions. These incidents have led to the need for an alternative solution to eradicate such unwarranted delays in operation in the future. The modern numerical relay, called an intelligent electronic device (IED), is embedded with advanced protection features which permit higher flexibility and better provisions for tuning protection logic and settings. Such flexibility in transformer protection IEDs enables the incorporation of alternative methods, such as dynamic switching of the second harmonic feature that blocks the differential protection, with additional security. The analysis and precautionary measures carried out in this case have been simulated and discussed in this paper to ensure that similar solutions can be adopted to inhibit analogous issues in the future.
Keywords: differential protection, intelligent electronic device (IED), 2nd harmonic inhibit, inrush inhibit
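The sketch below illustrates the second-harmonic restraint principle discussed above: the ratio of the 100 Hz component to the fundamental in the differential current decides whether tripping is blocked. The sampling rate, the 15% threshold and the two test waveforms are illustrative assumptions, not settings from the field case.

import numpy as np

FS = 4000                 # sampling frequency in Hz (assumed)
F0 = 50                   # power system frequency in Hz
N = FS // F0              # samples per fundamental cycle

def harmonic_ratio(window: np.ndarray) -> float:
    """Return |2nd harmonic| / |fundamental| for one cycle of samples."""
    spectrum = np.abs(np.fft.rfft(window))
    return spectrum[2] / spectrum[1]      # with N samples per cycle, bin 1 = 50 Hz, bin 2 = 100 Hz

t = np.arange(N) / FS
inrush = np.sin(2 * np.pi * F0 * t) + 0.4 * np.sin(2 * np.pi * 2 * F0 * t)
fault = 5.0 * np.sin(2 * np.pi * F0 * t) + 0.05 * np.sin(2 * np.pi * 2 * F0 * t)

for name, wave in [("inrush", inrush), ("internal fault", fault)]:
    ratio = harmonic_ratio(wave)
    block = ratio > 0.15                  # typical restraint setting range, assumed here
    print(f"{name}: 2nd/1st = {ratio:.2f}, block differential = {block}")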
Procedia PDF Downloads 300
897 Bridging the Divide: Mixed-Method Analysis of Student Engagement and Outcomes in Diverse Postgraduate Cohorts
Authors: A. Knox
Abstract:
Student diversity in postgraduate classes poses major challenges for educators seeking to encourage student engagement and desired learning outcomes. This paper outlines the impact of a set of teaching initiatives aimed at addressing challenges associated with teaching and learning in an environment characterized by diversity in the student cohort. The study examines postgraduate students completing the core capstone unit within a specialized business degree. Although relatively small, the student cohort is highly diverse in terms of the cultural backgrounds represented, prior learning and/or qualifications, as well as the duration and type of work experience relevant to the degree being completed. The wide range of cultures, existing knowledge and experience creates enormous challenges with respect to students’ learning needs and outcomes. Subsequently, a suite of teaching innovations has been adopted to enhance curriculum content/delivery and the design of assessments. This paper explores the impact of these specific teaching and learning practices, examining the ways they have supported students’ diverse needs and enhanced students’ learning outcomes. Data from surveys and focus groups are used to assess the effectiveness of these practices. The results highlight the effectiveness of peer-assisted learning, cultural competence-building, and advanced assessment options in addressing diverse student needs and enhancing student engagement and learning outcomes. These findings suggest that such practices would benefit students’ learning in environments marked by diversity in the student cohort. Specific recommendations are offered for other educators working with diverse classes.
Keywords: assessment design, curriculum content, curriculum delivery, student diversity
Procedia PDF Downloads 110
896 Real-Time Monitoring of Drinking Water Quality Using Advanced Devices
Authors: Amani Abdallah, Isam Shahrour
Abstract:
The quality of drinking water is a major public health concern. The control of this quality is generally performed in the laboratory, which requires a long time. This type of control is not adapted to accidental pollution from sudden events, which can have serious consequences for population health. Therefore, it is of major interest to develop real-time innovative solutions for the detection of accidental contamination in drinking water systems. This paper presents research conducted within the SunRise Demonstrator for ‘Smart and Sustainable Cities’ with a particular focus on the supervision of water quality. This work aims at (i) implementing a smart water system in a large water network (the campus of the University Lille1), including innovative equipment for real-time detection of abnormal events such as those related to the contamination of drinking water, and (ii) developing a numerical model of contamination diffusion in the water distribution system. The first step included verification of the water quality sensors and their effectiveness on a network prototype of 50 m length. This part included the evaluation of the efficiency of these sensors in detecting both bacterial and chemical contamination events in drinking water distribution systems. An on-line optical sensor integral with a laboratory-scale distribution system (LDS) was shown to respond rapidly to changes in refractive index induced by injected loads of chemical (cadmium, mercury) and biological (Escherichia coli) contamination. All injected substances were detected by the sensor; the magnitude of the response depends on the type of contaminant introduced and is proportional to the injected substance concentration.
Keywords: distribution system, drinking water, refraction index, sensor, real-time
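A minimal sketch of the kind of real-time event detection such a sensor enables is given below: readings of refractive index are flagged when they drift well outside a clean-water baseline. The baseline values, the threshold and the incoming readings are illustrative assumptions, not measurements from the SunRise network.

import statistics

baseline = [1.3330, 1.3331, 1.3329, 1.3330, 1.3331, 1.3330]   # clean-water refractive index
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
K = 4.0                                                       # alarm threshold in standard deviations

def check(reading: float) -> str:
    return "ALARM: possible contamination" if abs(reading - mu) > K * sigma else "normal"

incoming = [1.3330, 1.3331, 1.3345, 1.3352]   # the last two mimic an injected contaminant load
for r in incoming:
    print(f"n = {r:.4f} -> {check(r)}")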
Procedia PDF Downloads 355
895 Predicting Foreign Direct Investment of IC Design Firms from Taiwan to East and South China Using Lotka-Volterra Model
Authors: Bi-Huei Tsai
Abstract:
This work explores the inter-region investment behaviors of the integrated circuit (IC) design industry from Taiwan to China using the amount of foreign direct investment (FDI). Given the mutual dependence among different IC design industrial locations, the Lotka-Volterra model is utilized to explore the FDI interactions between South and East China. Effects of inter-regional collaborations on FDI flows into China are considered. Evolutions of FDIs into South China for the IC design industry significantly inspire the subsequent FDIs into East China, while FDIs into East China for Taiwan’s IC design industry significantly hinder the subsequent FDIs into South China. The supply chain along the IC industry includes IC design, manufacturing, packaging and testing enterprises. IC manufacturing, packaging and testing industries depend on the IC design industry to gain advanced business benefits. The FDI amount from Taiwan’s IC design industry into East China is the greatest among the four regions: North, East, Mid-West and South China. The FDI amount from Taiwan’s IC design industry into South China is the second largest. If IC design houses buy more equipment and bring more capital into South China, those in East China will be under pressure to undertake more FDIs into East China to maintain the leading-position advantages of the supply chain in East China. On the other hand, as the FDIs in East China rise, the FDIs in South China will successively decline since capital has concentrated in East China. The prediction of the Lotka-Volterra model for FDI trends is accurate because the industrial interactions between the two regions are included. Finally, this work confirms that the FDI flows cannot reach a stable equilibrium point, so the FDI inflows into East and South China will expand in the future.
Keywords: Lotka-Volterra model, foreign direct investment, competitive, equilibrium analysis
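The sketch below shows the general shape of a two-region Lotka-Volterra system of the kind described above, with East China FDI inspired by South China FDI and South China FDI hindered by East China FDI. All coefficient values and initial conditions are illustrative assumptions, not the paper's estimates.

import numpy as np
from scipy.integrate import odeint

def lotka_volterra(state, t, a1, b1, c1, a2, b2, c2):
    x, y = state                           # x: FDI into East China, y: FDI into South China
    dx = x * (a1 - b1 * x + c1 * y)        # +c1*y: South China FDI inspires East China FDI
    dy = y * (a2 - b2 * y - c2 * x)        # -c2*x: East China FDI hinders South China FDI
    return [dx, dy]

t = np.linspace(0, 30, 301)                # 30 periods
params = (0.30, 0.002, 0.001, 0.25, 0.002, 0.0015)
trajectory = odeint(lotka_volterra, [20.0, 40.0], t, args=params)

print("East China FDI after 30 periods:", round(trajectory[-1, 0], 1))
print("South China FDI after 30 periods:", round(trajectory[-1, 1], 1))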
Procedia PDF Downloads 363
894 Brain Derived Neurotrophic Factor (BDNF) Down Regulation in Peritoneal Carcinomatosis Patients
Authors: Awan A. Zaima, Tanvieer Ayesha, Mirshahi Shahsoltan, Pocard Marc, Mirshahi Massoud
Abstract:
Brain-derived neurotrophic factor (BDNF) is described as a factor helping to support the survival of existing neurons through its involvement in the growth and differentiation of new neurons and synapses. A cancer diagnosis impacts mental health and, in consequence, depression arises, which eventually hinders recovery and disrupts the quality of life and survival chances of patients. The focus of this study is to point to a prospective biomarker as a promising diagnostic tool and early indicator/predictor of depression prevalence in cancer patients, for better care and treatment options. The study aims to analyze peripheral biomarkers of the neuro-immune axis (BDNF, and IL21 as an NK cell activator) using a correlation approach. Samples were obtained from random non-cancer candidates and advanced peritoneal carcinomatosis patients, comprising 25% pseudomyxoma, 21% colon cancer, 19% stomach cancer, 10% ovarian cancer, 8% appendix cancer, and 10% cancers of other areas of the peritoneum. Both groups of the study were categorized by gender and age, with a range of 18 to 86 years old. Biomarkers were analyzed in the collected plasma using a multiplex sandwich ELISA system. Data were subjected to statistical analysis for the assessment of the correlation. Our results demonstrate that BDNF and IL21 were significantly downregulated in the patient group as compared to non-cancer candidates (ratio of patients/normal is 2.57 for BDNF and 1.32 for IL21). This preliminary investigation suggests that the neuro-immune biomarkers are downregulated in carcinomatosis patients and can be associated with cancer expansion and carcinogenesis. Further studies on a larger cohort are necessary to validate this hypothesis.
Keywords: biomarkers, depression, peritoneum carcinoma, BDNF, IL21
Procedia PDF Downloads 116
893 TAXAPRO, A Streamlined Pipeline to Analyze Shotgun Metagenomes
Authors: Sofia Sehli, Zainab El Ouafi, Casey Eddington, Soumaya Jbara, Kasambula Arthur Shem, Islam El Jaddaoui, Ayorinde Afolayan, Olaitan I. Awe, Allissa Dillman, Hassan Ghazal
Abstract:
The ability to promptly sequence whole genomes at a relatively low cost has revolutionized the way we study the microbiome. Microbiologists are no longer limited to studying what can be grown in a laboratory and instead are given the opportunity to rapidly identify the makeup of microbial communities in a wide variety of environments. Analyzing whole genome sequencing (WGS) data is a complex process that involves multiple moving parts and might be rather unintuitive for scientists that don’t typically work with this type of data. Thus, to help lower the barrier for less-computationally inclined individuals, TAXAPRO was developed at the first Omics Codeathon held virtually by the African Society for Bioinformatics and Computational Biology (ASBCB) in June 2021. TAXAPRO is an advanced metagenomics pipeline that accurately assembles organelle genomes from whole-genome sequencing data. TAXAPRO seamlessly combines WGS analysis tools to create a pipeline that automatically processes raw WGS data and presents organism abundance information in both a tabular and graphical format. TAXAPRO was evaluated using COVID-19 patient gut microbiome data. Analysis performed by TAXAPRO demonstrated a high abundance of Clostridia and Bacteroidia genera and a low abundance of Proteobacteria genera relative to others in the gut microbiome of patients hospitalized with COVID-19, consistent with the original findings derived using a different analysis methodology. This provides crucial evidence that the TAXAPRO workflow dispenses reliable organism abundance information overnight without the hassle of performing the analysis manually.
Keywords: metagenomics, shotgun metagenomic sequence analysis, COVID-19, pipeline, bioinformatics
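The last step of such a pipeline, turning per-read taxonomic assignments into relative-abundance tables, can be sketched as follows; the read counts are invented purely to illustrate the tabular output, and the pandas-based post-processing is an assumption rather than TAXAPRO's actual implementation.

import pandas as pd

classified = pd.DataFrame({
    "sample": ["P1"] * 4 + ["P2"] * 4,
    "taxon": ["Clostridia", "Bacteroidia", "Gammaproteobacteria", "Bacilli"] * 2,
    "reads": [52000, 38000, 4000, 6000, 61000, 30000, 3000, 6000],
})

# Pivot to taxa-by-sample counts and normalize each sample to percentages.
abundance = (
    classified.pivot_table(index="taxon", columns="sample", values="reads", aggfunc="sum")
    .pipe(lambda counts: 100 * counts / counts.sum())
    .round(1)
)
print(abundance)   # percent of classified reads per sample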
Procedia PDF Downloads 221
892 GIS Based Atmospheric Analysis to Predict Future Temperature Rise Caused by Land Use and Land Cover in Okara by Using Environmental Remote Sensing
Authors: Sumaira Hafeez, Saira Akram
Abstract:
Although the population of metropolitan regions around the world grows each year, the urban communities struggling to adapt to increased urban migration grow at different rates. Land Surface Temperature and other atmospheric parameters of the area were determined using Landsat images taken more than 10 years apart. The LULC types were also classified using supervised classification techniques. Rapid urbanization is changing the existing patterns of Land Use Land Cover (LULC) all around the world, which is in turn increasing the Land Surface Temperature (LST) and other atmospheric parameters in many districts. The present study was centred on assessing the current, and simulating the future, LULC and Land Surface Temperature patterns in the climate of the lower Himalayan region of Pakistan. Past patterns of LULC and Land Surface Temperature were identified through multi-temporal Landsat satellite images over the 1995–2019 data period. Future projections were made for the year 2030 to work out LULC and LST changes separately, using their past patterns. The study concludes that the steadily expanding encroachment of the city's recently developed suburban areas onto formerly open land has been accompanied by an overall warming of the district's average meteorological parameters over the preceding ten years, and that allowing land to lie vacant for long periods after clearing rural fields for future urban development is a practice that has unfortunate environmental effects.
Keywords: surface urban heat island, land surface temperature, urban climate change, spatial analysis of meteorological and atmospheric science
Procedia PDF Downloads 136
891 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies are facing many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance. Furthermore, companies are also challenged to integrate data from multiple systems and technologies. Despite these pains, companies are still pursuing digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating different business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. The conventional approaches to addressing this issue involve utilizing data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company’s data and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling
Procedia PDF Downloads 94
890 Molecular Insights into the Genetic Integrity of Long-Term Micropropagated Clones Using Start Codon Targeted (SCoT) Markers: A Case Study with Ansellia africana, an Endangered, Medicinal Orchid
Authors: Paromik Bhattacharyya, Vijay Kumar, Johannes Van Staden
Abstract:
Micropropagation is an important tool for the conservation of threatened and commercially important plant species, of which orchids deserve special attention. Ansellia africana is one such medicinally important orchid species with much commercial significance. Thus, the development of regeneration protocols for producing clonally stable regenerants using axillary buds is of much importance. However, for large-scale micropropagation to become not only successful but also acceptable to end-users, somaclonal variations occurring in the plantlets need to be eliminated. In the light of the various factors (genotype, ploidy level, in vitro culture age, explant and culture type, etc.) that may account for somaclonal variations and divergent genetic changes at the cellular and molecular levels, genetic analysis of micropropagated plants using a multidisciplinary approach is of utmost importance. In the present study, the clonal integrity of long-term micropropagated A. africana plants was assessed using an advanced molecular marker system, i.e. Start Codon Targeted Polymorphism (SCoT). Our studies recorded a clonally stable regeneration protocol for A. africana with a very high degree of clonal fidelity amongst the regenerants. The results obtained from these molecular analyses could help in modifying the regeneration protocols for obtaining clonally stable, true-to-type plantlets for sustainable commercial use.
Keywords: medicinal orchid micropropagation, start codon targeted polymorphism (SCoT), RAPD, traditional African pharmacopoeia, genetic fidelity
Procedia PDF Downloads 426
889 Design and Optimisation of 2-Oxoglutarate Dioxygenase Expression in Escherichia coli Strains for Production of Bioethylene from Crude Glycerol
Authors: Idan Chiyanzu, Maruping Mangena
Abstract:
Crude glycerol, a major by-product of the transesterification of triacylglycerides with alcohol to biodiesel, is known to have a broad range of applications. For example, its bioconversion can afford a wide range of chemicals including alcohols, organic acids, hydrogen, solvents and intermediate compounds. In bacteria, the 2-oxoglutarate dioxygenase (2-OGD) enzymes are widely found among the Pseudomonas syringae species and have been recognized as having an emerging importance in ethylene formation. However, the use of optimized enzyme function in recombinant systems for crude glycerol conversion to ethylene has still not been reported. The present study investigated the production of ethylene from crude glycerol using engineered E. coli MG1655 and JM109 strains. Ethylene production with an optimized expression system for 2-OGD in E. coli, using a codon-optimized construct of the ethylene-forming gene, was studied. The codon optimization resulted in a 20-fold increase in protein production and thus enhanced production of ethylene gas. For reliable bioreactor performance, the effects of temperature, fermentation time, pH, substrate concentration, methanol concentration, potassium hydroxide concentration and media supplements on ethylene yield were investigated. The results demonstrate that the recombinant enzyme can be used in future studies to exploit the conversion of low-priced crude glycerol into advanced value products such as light olefins, and that tools including DNA recombineering techniques, molecular biology, and bioengineering can be used to allow unlimited production of ethylene directly from the fermentation of crude glycerol. It can be concluded that recombinant E. coli production systems represent a significantly secure, renewable and environmentally safe alternative to the thermochemical approach to ethylene production.
Keywords: crude glycerol, bioethylene, recombinant E. coli, optimization
Procedia PDF Downloads 279
888 Tokyo Skyscrapers: Technologically Advanced Structures in Seismic Areas
Authors: J. Szolomicki, H. Golasz-Szolomicka
Abstract:
The architectural and structural analysis of selected high-rise buildings in Tokyo is presented in this paper. The capital of Japan is the most populous city in the world and, moreover, is located in one of the most active seismic zones. The combination of these factors has resulted in the creation of sophisticated designs and innovative engineering solutions, especially in the field of design and construction of high-rise buildings. Foreign architectural studios (such as Jean Nouvel, Kohn Pedersen Fox Associates, and Skidmore, Owings & Merrill) which specialize in the design of skyscrapers played a major role in the development of technological ideas and architectural forms for such extraordinary engineering structures. Among the projects completed by them, there are examples of high-rise buildings that set precedents for future development. An essential aspect which influences the design of high-rise buildings is the necessity to take into consideration their dynamic response to earthquakes and the counteraction of wind vortices. The need to control the motions of these buildings, induced by the forces coming from earthquakes and wind, led to the development of various methods and devices for dissipating the energy which occurs during such phenomena. Currently, Japan is a global leader in seismic technologies which safeguard high-rise structures against seismic influence. Due to these achievements, the most modern skyscrapers in Tokyo are able to withstand earthquakes with a magnitude of over seven on the Richter scale. The damping devices applied are either passive, which do not require an additional power supply, or active, which suppress the reaction with the input of extra energy. In recent years, hybrid dampers have also been used, with an additional active element to improve the efficiency of passive damping.
Keywords: core structures, damping system, high-rise building, seismic zone
Procedia PDF Downloads 175
887 Soil-Less Misting System: A Technology for Hybrid Seed Production in Tomato (Lycopersicon esculentum Mill.).
Authors: K. D. Rajatha, S. Rajendra Prasad, N. Nethra
Abstract:
Aeroponics is one of the advanced techniques for cultivating plants without soil, with minimal water and nutrient consumption. This is a technology which could bring about vertical growth in agriculture. It is an eco-friendly approach widely used for the commercial cultivation of vegetables to obtain supreme quality and yield. In this context, to harness the potential of the technology, an experiment was designed to evaluate the suitability of the aeroponics method over the conventional method for hybrid seed production of tomato. The experiment was carried out under a Factorial Completely Randomized Design (FCRD) with three replications during the year 2017-18 at UAS, GKVK Bengaluru. Nutrients and pH were standardized; among the six different nutrient solutions, the crop performance was better in Hoagland’s solution with a pH between 5.5 and 7. The results of the present study revealed that, between the TAG1F and TAG2F parental lines, TAG1F performed better in both methods of seed production. Among the methods, aeroponics showed better performance for the quality parameters except for plant spread, due to the better availability of nutrients and aeration and the huge root biomass in aeroponics. The aeroponics method showed significantly higher plant length (124.9 cm), plant growth rate (0.669), seedling survival rate (100%), earlier flowering (27.5 days), higher fruit weight (121.5 g), 100-seed weight (0.373 g) and total seed yield plant⁻¹ (11.68 g) compared to the conventional method. By providing the best environment for plant growth, the genetically best possible plant could be grown, and thus the complete potential of the plant could be harnessed. Hence, aeroponics could be a promising tool for quality and healthy hybrid seed production throughout the year within protected cultivation.
Keywords: aeroponics, Hoagland’s solution, hybrid seed production, Lycopersicon esculentum
Procedia PDF Downloads 102
886 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation
Authors: Zhidong Zhang
Abstract:
This study established a mixed methods model for assessing statistics learning with Bayesian network models. There are three variants of exploratory sequential designs. One of the designs has three linked steps: qualitative data collection and analysis; a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as the content domain. The research study examines students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx₁ + γx₂ + ε, served as a cognitive task to collect data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process. The hierarchical structure was obtained based on qualitative cognitive analysis. The data from students’ ANOVA score model learning were used to provide evidence to the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ ANOVA score model learning were reported. Briefly, this was a mixed methods research design applied to statistics learning assessment. Mixed methods designs expand the possibilities for researchers to establish advanced quantitative models initially built from a theory-driven qualitative mode.
Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis
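The way evidential variables feed belief about a latent mastery node in such a network can be sketched with a hand-rolled Bayesian update; the prior and the conditional probabilities below are assumptions chosen only to illustrate the mechanism, not parameters estimated in the study.

P_MASTER = 0.5                       # prior P(mastery of the ANOVA score model)
P_CORRECT_GIVEN_MASTER = 0.85        # P(step answered correctly | mastery)
P_CORRECT_GIVEN_NONMASTER = 0.25     # P(step answered correctly | no mastery), i.e. guessing

def update(p_master: float, correct: bool) -> float:
    """Posterior P(mastery) after observing one scored step of the cognitive task."""
    like_m = P_CORRECT_GIVEN_MASTER if correct else 1 - P_CORRECT_GIVEN_MASTER
    like_n = P_CORRECT_GIVEN_NONMASTER if correct else 1 - P_CORRECT_GIVEN_NONMASTER
    evidence = like_m * p_master + like_n * (1 - p_master)
    return like_m * p_master / evidence

belief = P_MASTER
for step_correct in [True, True, False, True]:   # one student's scored steps (evidential variables)
    belief = update(belief, step_correct)
    print(f"observed correct={step_correct}, P(mastery)={belief:.3f}")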
Procedia PDF Downloads 179
885 Recovery of Au and Other Metals from Old Electronic Components by Leaching and Liquid Extraction Process
Authors: Tomasz Smolinski, Irena Herdzik-Koniecko, Marta Pyszynska, M. Rogowski
Abstract:
Old electronic components can easily be found nowadays. Significant quantities of valuable metals such as gold, silver or copper are used for the production of advanced electronic devices. Old, useless electronic devices are slowly becoming a new source of precious metals, very often more efficient than natural ores. For example, it is possible to recover more gold from one ton of personal computers than from seventeen tons of gold ore. This makes the urban mining industry very profitable and necessary for sustainable development. For the recovery of metals from waste electronic equipment, various treatment options based on conventional physical, hydrometallurgical and pyrometallurgical processes are available. In this group, hydrometallurgical processes, with their relatively low capital cost, low environmental impact, potential for high metal recoveries and suitability for small-scale applications, are very promising options. The Institute of Nuclear Chemistry and Technology has great experience in hydrometallurgical processes, especially focused on the recovery of metals from industrial and agricultural wastes. At the moment, an urban mining project is being carried out. A method for the effective recovery of valuable metals from central processing unit (CPU) components has been developed. The principal processes, such as acidic leaching and solvent extraction, were used for precious metal recovery from old processors and graphics cards. Electronic components were treated with acidic solution under various conditions. The optimal acid concentration, process time and temperature were selected. Precious metals were extracted into the aqueous phase. In the next step, metals were selectively extracted by organic solvents such as oximes or tributyl phosphate (TBP). Multistage mixer-settler equipment was used, and the process was optimized.
Keywords: electronic waste, leaching, hydrometallurgy, metal recovery, solvent extraction
Procedia PDF Downloads 137
884 Silica Nanofibres – Promising Material for Regenerative Medicine
Authors: Miroslava Rysová, Zdena Syrová, Tomáš Zajíc, Petr Exnar
Abstract:
Currently, the attention of tissue engineers is being attracted to novel nanofibrous materials with advanced properties and the ability to mimic the extracellular matrix (ECM) in structure, which makes them interesting candidates for application in regenerative medicine as scaffolding and/or drug-delivering materials. Throughout the last decade, more than 200 synthetic and natural polymers have been successfully electrospun, leading to the formation of nanofibres with a wide range of chemical, mechanical and degradation properties. In this family, inorganic nanofibres represent a very specific group, offering an opportunity to manufacture a material that is inert to the body, well degradable and tunable in its properties. The aim of this work was to reveal the unique properties of silica (SiO₂, CAS 7631-86-9) nanofibres and their potential in the field of regenerative medicine. Silica nanofibres were prepared by the sol-gel method from tetraethyl orthosilicate (TEOS, CAS 78-10-4) as a precursor and subsequently manufactured by needleless electrospinning on a Nanospider™ device. Silica nanofibres thermally stabilized below 200°C were confirmed to be fully biodegradable and soluble in several simulated body fluids. In vitro cytotoxicity tests of the eluate (ES ISO 10993-5:1999) and in direct contact (ES ISO 10993-5:2009) showed no toxicity; e.g., cell viabilities reached values exceeding 80%. These results were obtained equally from two different cell lines (Vero, 3T3). The non-toxicity of the silica nanofibres' eluate was additionally confirmed in real time by testing on an xCELLigence (ACEA Biosciences, Inc.) device. Both cell types also showed good adhesion to the material. To conclude, all the mentioned results lead to the presumption that silica nanofibres have potential as a material for regenerative medicine, which opens the door to further research.
Keywords: cytotoxicity, electrospinning, nanofibres, silica, tissue engineering
Procedia PDF Downloads 429
883 Shape Management Method for Safety Evaluation of Bridge Based on Terrestrial Laser Scanning Using Least Squares
Authors: Gichun Cha, Dongwan Lee, Junkyeong Kim, Aoqi Zhang, Seunghee Park
Abstract:
Countries all around the world are studying the construction technology of the double deck tunnel in order to respond to increasing urban traffic demands and environmental changes. Advanced countries already have the construction technology for the double deck tunnel structure, but the domestic country has only begun research on it. Construction technologies are important, but a safety evaluation of the structure is also necessary to prevent possible accidents during construction. Thus, the double deck tunnel requires shape management of its middle slabs. The domestic country is preparing the construction of a double deck tunnel for an alternate route and a pleasant urban environment. There has been no research on shape management of the double deck tunnel because it is a newly attempted technology. At present, the closest similar study concerns bridge structures, for which a shape model is implemented using terrestrial laser scanning (TLS). Therefore, we proceed with research on bridge slabs because their structure is similar to that of the double deck tunnel. In this study, we develop a shape management method for bridge slabs using TLS. We selected a test-bed as the measurement site. This site is a bridge located on the Sungkyunkwan University Natural Sciences Campus. The bridge has a total length of 34 m and a vertical height of 8.7 m from the ground. It connects Engineering Building #1 and Engineering Building #2. Point cloud data for shape management are acquired with the TLS; we utilized the Leica ScanStation C10/C5 model. We will confirm the maximum displacement area of the middle slabs using least-squares fitting. We expect to raise the stability of the double deck tunnel through shape management of the middle slabs.
Keywords: bridge slabs, least squares, safety evaluation, shape management method, terrestrial laser scanning
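The least-squares step can be sketched as fitting a reference plane to the slab point cloud and locating the largest deviation from it; the synthetic point cloud below (with an artificial local sag) stands in for real scanner exports and is an assumption for illustration only.

import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.uniform(0, 34, n)             # along the 34 m bridge
y = rng.uniform(0, 5, n)              # across the deck
z = 8.7 + 0.001 * x + rng.normal(0, 0.002, n)
z[(x > 15) & (x < 19)] -= 0.015       # synthetic local sag of about 15 mm

# Fit the plane z = a*x + b*y + c by ordinary least squares.
A = np.column_stack([x, y, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
residuals = z - A @ coeffs

worst = np.argmax(np.abs(residuals))
print(f"fitted plane: z = {coeffs[0]:.4f}x + {coeffs[1]:.4f}y + {coeffs[2]:.3f}")
print(f"max deviation {1000 * abs(residuals[worst]):.1f} mm at x = {x[worst]:.1f} m, y = {y[worst]:.1f} m")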
Procedia PDF Downloads 241
882 Simulation-Based Evaluation of Indoor Air Quality and Comfort Control in Non-Residential Buildings
Authors: Torsten Schwan, Rene Unger
Abstract:
Simulation of thermal and electrical building performance is increasingly becoming part of an integrative planning process. Increasing requirements on energy efficiency, the integration of volatile renewable energy, and smart control and storage management often pose tremendous challenges for building engineers and architects. This mainly affects commercial or non-residential buildings, whose energy consumption characteristics differ significantly from residential ones. This work focuses on the many-objective optimization problem of indoor air quality and comfort, especially in non-residential buildings. Based on a brief description of the intermediate dependencies between different requirements on indoor air treatment, it extends existing Modelica-based building physics models with additional system states to adequately represent indoor air conditions. Interfaces to corresponding HVAC (heating, ventilation, and air conditioning) system and control models enable closed-loop analyses of occupants' requirements and energy efficiency as well as profitability aspects. A complex application scenario of a nearly-zero-energy school building shows the advantages of the presented evaluation process for engineers and architects. In this way, clear identification of air quality requirements in individual rooms, together with a realistic model-based description of occupants' behavior, helps to optimize the HVAC system already in early design stages. Building planning processes can be greatly improved and accelerated by the increasing integration of advanced simulation methods. These methods mainly provide suitable answers to engineers' and architects' questions regarding an ever more abundant and complex variety of suitable energy supply solutions.
Keywords: indoor air quality, dynamic simulation, energy efficient control, non-residential buildings
Procedia PDF Downloads 232
881 A Comparison of Clinical and Pathological TNM Staging in a COVID-19 Era
Authors: Sophie Mills, Leila L. Touil, Richard Sisson
Abstract:
Introduction: The TNM classification is the global standard for the staging of head and neck cancers. Accurate clinical-radiological staging of tumours (cTNM) is essential to predict prognosis, facilitate surgical planning and determine the need for other therapeutic modalities. This study aims to determine the accuracy of pre-operative cTNM staging using pathological TNM (pTNM) and consider possible causes of TNM stage migration, noting any variation throughout the COVID-19 pandemic. Materials and Methods: A retrospective cohort study examined records of patients with surgical management of head and neck cancer at a tertiary head and neck centre from November 2019 to November 2020. Data was extracted from Somerset Cancer Registry and histopathology reports. cTNM and pTNM were compared before and during the first wave of COVID-19, as well as with other potential prognostic factors such as tumour site and tumour stage. Results: 119 cases were identified, of which 52.1% (n=62) were male, and 47.9% (n=57) were female with a mean age of 67 years. Clinical and pathological staging differed in 54.6% (n=65) of cases. Of the patients with stage migration, 40.4% (n=23) were up-staged and 59.6% (n=34) were down-staged compared with pTNM. There was no significant difference in the accuracy of cTNM staging compared with age, sex, or tumour site. There was a statistically highly significant (p < 0.001) correlation between cTNM accuracy and tumour stage, with the accuracy of cTNM staging decreasing with the advancement of pTNM staging. No statistically significant variation was noted between patients staged prior to and during COVID-19. Conclusions: Discrepancies in staging can impact management and outcomes for patients. This study found that the higher the pTNM, the more likely stage migration will occur. These findings are concordant with the oncology literature, which highlights the need to improve the accuracy of cTNM staging for more advanced tumours.
Keywords: COVID-19, head and neck cancer, stage migration, TNM staging
Procedia PDF Downloads 109
880 Enhancing the Interpretation of Group-Level Diagnostic Results from Cognitive Diagnostic Assessment: Application of Quantile Regression and Cluster Analysis
Authors: Wenbo Du, Xiaomei Ma
Abstract:
With the empowerment of Cognitive Diagnostic Assessment (CDA), various domains of language testing and assessment have been investigated to dig out more diagnostic information. What is noticeable is that most of the extant empirical CDA-based research places much emphasis on individual-level diagnostic purposes, with very few studies concerned about learners' group-level performance. Even though personalized diagnostic feedback is the unique feature that differentiates CDA from other assessment tools, group-level diagnostic information cannot be overlooked in that it might be more practical in a classroom setting. Additionally, the group-level diagnostic information obtained via current CDA always results in a "flat pattern", that is, the mastery/non-mastery of all tested skills accounts for the two highest proportions. In that case, the outcome does not bring much more benefit than the original total score. To address these issues, the present study attempts to apply cluster analysis for group classification and quantile regression analysis to pinpoint learners' performance at different proficiency levels (beginner, intermediate and advanced), and thus to enhance the interpretation of the CDA results extracted from a group of EFL learners' reading performance on a diagnostic reading test designed by the PELDiaG research team at a key university in China. The results show that the EM method in cluster analysis yields more appropriate classification results than those of CDA, and quantile regression analysis depicts more insightful characteristics of learners with different reading proficiencies. The findings are helpful and practical for instructors in refining the EFL reading curriculum and tailoring instructional plans based on the group classification results and quantile regression analysis. Meanwhile, these innovative statistical methods could also make up for the deficiencies of CDA and push forward the development of language testing and assessment in the future.
Keywords: cognitive diagnostic assessment, diagnostic feedback, EFL reading, quantile regression
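A minimal sketch of the two techniques combined above, EM-based clustering followed by quantile regression, is given below; the synthetic data, the variable names, and the scikit-learn/statsmodels calls are assumptions for illustration only, not the PELDiaG analysis itself.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
n = 300
vocab = rng.normal(50, 10, n)                        # a hypothetical predictor, e.g. vocabulary size
score = 10 + 0.6 * vocab + rng.normal(0, 8, n)       # diagnostic reading total score
df = pd.DataFrame({"vocab": vocab, "score": score})

# 1) EM clustering of examinees into three proficiency groups.
gmm = GaussianMixture(n_components=3, random_state=0).fit(df[["score"]])
df["group"] = gmm.predict(df[["score"]])

# 2) Quantile regression at three points of the score distribution.
for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("score ~ vocab", df).fit(q=q)
    print(f"q={q}: slope on vocab = {fit.params['vocab']:.3f}")

print(df.groupby("group")["score"].mean().round(1))  # mean score per EM-derived group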
Procedia PDF Downloads 146
879 Transition From Economic Growth-Energy Use to Green Growth-Green Energy Towards Environmental Quality: Evidence from Africa Using Econometric Approaches
Authors: Jackson Niyongabo
Abstract:
This study addresses a notable gap in the existing literature on the relationship between energy consumption, economic growth, and CO₂ emissions, particularly within the African context. While numerous studies have explored these dynamics globally and regionally across various development levels, few have delved into the nuances of regions and income levels specific to African countries. Furthermore, the evaluation of the interplay between green growth policies, green energy technologies, and their impact on environmental quality has been underexplored. This research aims to fill these gaps by conducting a comprehensive analysis of the transition from conventional economic growth and energy consumption to a paradigm of green growth coupled with green energy utilization across the African continent from 1980 to 2018. The study is structured into three main parts: an empirical examination of the long-term effects of energy intensity, renewable energy consumption, and economic growth on CO₂ emissions across diverse African regions and income levels; an estimation of the long-term impact of green growth and green energy use on CO₂ emissions for countries implementing green policies within Africa, as well as at regional and global levels; and a comparative analysis of the impact of green growth policies on environmental degradation before and after implementation. Employing advanced econometric methods and panel estimators, the study utilizes a testing framework, panel unit tests, and various estimators to derive meaningful insights. The anticipated results and conclusions will be elucidated through causality tests, impulse response, and variance decomposition analyses, contributing valuable knowledge to the discourse on sustainable development in the African context.
Keywords: economic growth, green growth, energy consumption, CO₂ emissions, econometric models, green energy
Procedia PDF Downloads 58
878 The Results of Longitudinal Water Quality Monitoring of the Brandywine River, Chester County, Pennsylvania by High School Students
Authors: Dina L. DiSantis
Abstract:
Strengthening a sense of responsibility while relating global sustainability concepts such as water quality and pollution to a local water system can be achieved by teaching students to conduct and interpret water quality monitoring tests. When students conduct their own research, they become better stewards of the environment. Providing outdoor learning and place-based opportunities for students helps connect them to the natural world. By conducting stream studies and collecting data, students are able to better understand how the natural environment is a place where everything is connected. Students have been collecting physical, chemical and biological data along the West and East Branches of the Brandywine River in Pennsylvania for over ten years. The stream studies are part of the advanced placement environmental science and aquatic science courses that are offered as electives to juniors and seniors at the Downingtown High School West Campus in Downingtown, Pennsylvania. Physical data collected include: temperature, turbidity, width, depth, velocity, and volume of flow or discharge. The chemical tests conducted are: dissolved oxygen, carbon dioxide, pH, nitrates, alkalinity and phosphates. Macroinvertebrates are collected with a kick net, identified and then released. Students collect the data from several locations while traveling by canoe. In the classroom, students prepare a water quality data analysis and interpretation report based on their collected data. A summary of the results from longitudinal water quality data collection by students, as well as the strengths and weaknesses of student data collection, will be presented.
Keywords: place-based, student data collection, sustainability, water quality monitoring
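As an example of the discharge (volume of flow) calculation students perform from their width, depth and velocity measurements, a short sketch follows; the site names and the measurement values are made-up examples, not the students' data.

def discharge(width_m: float, mean_depth_m: float, velocity_m_s: float) -> float:
    """Stream discharge Q = width x mean depth x velocity, in cubic metres per second."""
    return width_m * mean_depth_m * velocity_m_s

sites = {
    "West Branch, example site": (12.0, 0.45, 0.60),
    "East Branch, example site": (9.5, 0.38, 0.52),
}
for name, (w, d, v) in sites.items():
    print(f"{name}: Q = {discharge(w, d, v):.2f} m^3/s")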
Procedia PDF Downloads 156