Search results for: intelligent network selection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7359


909 Networking Approach for Historic Urban Landscape: Case Study of the Porcelain Capital of China

Authors: Ding He, Ping Hu

Abstract:

This article presents a “networking approach” as an alternative to the “layering model” for the historic urban landscape (HUL), based on research conducted in the historic city of Jingdezhen, the center of the porcelain industry in China. This study points out that the existing HUL concept, which can be traced back to the fundamental conceptual divisions set forth by western science, tends to analyze the various elements of urban heritage (composed of hybrid natural-cultural elements) by layers and to ignore the nuanced connections and interweaving structure of those elements. Instead, the networking analysis approach can respond to the challenges of complex heritage networks and to the difficulties that arise when modern ways of seeing and thinking about landscape in the Eurocentric heritage model encounter local knowledge of Chinese settlement. The fieldwork in this paper examines the local language regarding place names and everyday uses of urban spaces, thereby highlighting heritage systems grounded in local life and indigenous knowledge. In the context of Chinese “Fengshui”, this paper demonstrates the local knowledge of nature and the local intelligence of settlement location and design. This paper suggests that industrial elements (kilns, molding rooms, piers, etc.) and spiritual elements (temples for ceramic saints or water gods) are located within their intimate natural networks. Furthermore, the functional, spiritual, and natural elements are perceived as a whole and evolve as an interactive system. This paper proposes a local and cognitive approach to heritage, initially developed in the European Landscape Convention and historic landscape characterization projects, yet seeks a more tentative and nuanced model based on urban ethnography in a Chinese city.

Keywords: Chinese city, historic urban landscape, heritage conservation, network

Procedia PDF Downloads 136
908 Cross-Dipole Right-Hand Circularly Polarized UHF/VHF Yagi-Uda Antenna for Satellite Applications

Authors: Shativel S., Chandana B. R., Kavya B. C., Obli B. Vikram, Suganthi J., Nagendra Rao G.

Abstract:

Satellite communication plays a pivotal role in modern global communication networks, serving as a vital link between terrestrial infrastructure and remote regions. The demand for reliable satellite reception systems, especially in the UHF (Ultra High Frequency) and VHF (Very High Frequency) bands, has grown significantly over the years. This research paper presents the design and optimization of a high-gain, dual-band crossed Yagi-Uda antenna in CST Studio Suite, specifically tailored for satellite reception. The proposed antenna system incorporates a circularly polarized (Right-Hand Circular Polarization - RHCP) design to reduce Faraday rotation loss. Our aim was to achieve high gain with fewer elements, so the antenna is constructed from 6x2 elements arranged as crossed dipoles and supported by a boom. We have achieved 10.67 dBi at 146 MHz and 9.28 dBi at 437.5 MHz. The process includes parameter optimization and fine-tuning of the Yagi-Uda array’s elements, such as the length and spacing of directors and reflectors, to achieve high gain and desirable radiation patterns. Furthermore, the optimization process considers the requirements of the UHF and VHF frequency bands, ensuring broad frequency coverage for satellite reception. The results of this research are anticipated to contribute significantly to the advancement of satellite reception systems, enhancing their capability to reliably connect remote and underserved areas to the global communication network. Through innovative antenna design and simulation techniques, this study seeks to provide a foundation for the development of next-generation satellite communication infrastructure.

Keywords: Yagi-Uda antenna, RHCP, gain, UHF antenna, VHF antenna, CST, radiation pattern

Procedia PDF Downloads 57
907 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer A. Aljohani

Abstract:

COVID-19 virus spread has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, a contagious disease that continuously mutates into numerous variants. Currently, the B.1.1.529 variant, labeled omicron, has been detected in South Africa. The huge spread of COVID-19 disease has affected several lives and has placed exceptional pressure on healthcare systems worldwide. Everyday life and the global economy have also been at stake. This research aims to predict COVID-19 disease in its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area. Numerous COVID-19 cases have placed a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard UCI dataset for predicting COVID-19 disease, comprising the symptoms of 5434 patients. This paper also compares several supervised ML techniques within the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision tree, random forest, and neural networks outperform all other state-of-the-art ML techniques. This result can help effectively in identifying COVID-19 infection cases.
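As a rough sketch of the architecture described above (PCA feature reduction followed by 10-fold cross-validation of a supervised classifier), not the authors' code, and with synthetic data standing in for the UCI symptom dataset:

```python
# Hypothetical sketch: PCA for feature reduction, then 10-fold CV of a
# random forest. Synthetic data replaces the UCI dataset of 5434 patients.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# PCA reduces dimensionality before classification; cv=10 performs
# the 10-fold cross-validation the abstract describes.
model = make_pipeline(PCA(n_components=10),
                      RandomForestClassifier(random_state=0))
scores = cross_val_score(model, X, y, cv=10, scoring="f1")
print(scores.mean())
```

Swapping the final estimator (decision tree, neural network, etc.) in the same pipeline is how such techniques would be compared under identical folds.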

Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network

Procedia PDF Downloads 87
906 Multiple-Channel Piezoelectric Actuated Tunable Optical Filter for WDM Application

Authors: Hailu Dessalegn, T. Srinivas

Abstract:

We propose a new multiple-channel piezoelectric (PZT) actuated tunable optical filter based on racetrack multi-ring resonators for wavelength de-multiplexing network applications. We design a tunable eight-channel wavelength de-multiplexer consisting of eight cascaded PZT-actuated tunable multi-ring resonator filters with a channel spacing of 1.6 nm. The filter for each channel is structured on a suspended beam, sandwiched with piezoelectric material, with integrated ring resonators placed at the middle of the beam to obtain uniform stress and linearly varying longitudinal strain. A reference single-mode, serially coupled, multi-stage racetrack ring resonator with the same radii and coupling length is designed, with a line width of 0.8974 nm, a flat-top pass band at 1 dB of 0.5205 nm, and a free spectral range of about 14.9 nm. In each channel, a small change in the perimeter of the rings is introduced to establish the shift in resonance wavelength as per the defined channel spacing. As a result, when a DC voltage is applied, the beams elongate, mechanically deforming the ring resonators; the induced stress and strain change the refractive index and perimeter of the rings, shifting the output spectrum and providing tunability of the central wavelength in each channel. A simultaneous wavelength shift as high as 45.54 pm/V has been achieved, with negligible tunability variation across the eight channels, proportional to the DC voltage applied to the structure. The filter is capable of tuning up to 3.45 nm in each channel, with a maximum loss difference of 0.22 dB over the tuning range, an out-of-band rejection ratio of 35 dB, and a low channel crosstalk of ≤ 30 dB.
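The tuning mechanism described above follows from the ring resonance condition; to first order (a standard approximation, not taken from the paper), the resonance wavelength and its voltage-induced shift can be written as:

```latex
m\,\lambda_m = n_{\mathrm{eff}}\,L
\qquad\Rightarrow\qquad
\frac{\Delta\lambda}{\lambda} \;\approx\; \frac{\Delta n_{\mathrm{eff}}}{n_{\mathrm{eff}}} \;+\; \frac{\Delta L}{L}
```

where $L$ is the ring perimeter, $n_{\mathrm{eff}}$ the effective index, and $m$ the resonance order; the applied DC voltage contributes to both terms, through the stress-optic change in $n_{\mathrm{eff}}$ and the mechanical elongation $\Delta L$.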

Keywords: optical MEMS, piezoelectric (PZT) actuation, tunable optical filter, wavelength de-multiplexer

Procedia PDF Downloads 432
905 Effects of Environmental and Genetic Factors on Growth Performance, Fertility Traits and Milk Yield/Composition in Saanen Goats

Authors: Deniz Dincel, Sena Ardicli, Hale Samli, Mustafa Ogan, Faruk Balci

Abstract:

The aim of the study was to determine the effects of some environmental and genetic factors on growth, fertility traits, and milk yield and composition in Saanen goats. For this purpose, a total of 173 Saanen goats and kids were investigated for growth, fertility, and milk traits in the Marmara Region of Turkey. Fertility parameters (n=70) were evaluated over two years. Milk samples were collected during lactation, and the milk yield/components (n=59) of each goat were calculated. For the CSN3 and AGPAT6 genes, the genotypes were defined by PCR-RFLP. Saanen kids (n=86-112) were measured from birth to 6 months of life. The average live weights at birth, weaning, and the 60ᵗʰ, 90ᵗʰ, 120ᵗʰ, and 180ᵗʰ days were calculated. The effects of maternal age on pregnancy rate (p < 0.05), birth rate (p < 0.05), infertility rate (p < 0.05), single-born kidding (p < 0.001), twinning rate (p < 0.05), triplet rate (p < 0.05), survival rate of kids until weaning (p < 0.05), number of kids per parturition (p < 0.01), and number of kids per mating (p < 0.01) were found significant. The effects of year on birth rate (p < 0.05), abortion rate (p < 0.001), single-born kidding (p < 0.01), survival rate of kids until weaning (p < 0.01), and number of kids per mating (p < 0.01) were found significant for fertility traits. The effect of lactation length on all milk yield parameters (lactation milk, protein, fat, total solid, solid-not-fat, casein, and lactose yield) (p < 0.001) was found significant. The effects of age on all milk yield parameters (p < 0.001), protein rate (p < 0.05), fat rate (p < 0.05), total solid rate (p < 0.01), solid-not-fat rate (p < 0.05), casein rate (p < 0.05), and lactation length (p < 0.01) were also found significant. However, the effect of the AGPAT6 gene on milk yield and composition was not found significant in Saanen goats. The herd was found monomorphic (FF) for the CSN3 gene.
The effects of sex on live weights until the 90ᵗʰ day of life (birth, weaning, and 60ᵗʰ-day average weights) were found statistically significant (p < 0.001). Maternal age affected only birth weight (p < 0.001). The effects of birth month on all of the investigated days [birth, 120ᵗʰ, and 180ᵗʰ days (p < 0.05); weaning, 60ᵗʰ, and 90ᵗʰ days (p < 0.001)] were found significant. Birth type was found significant for the birth (p < 0.001), weaning (p < 0.01), 60ᵗʰ (p < 0.01), and 90ᵗʰ (p < 0.01) day average live weights. As a result, screening the other regions of the CSN3 and AGPAT6 genes and investigating their phenotypic associations should be useful to clarify the efficiency of the target genes. Environmental factors such as maternal age, year, sex, and birth type were found significant for some growth, fertility, and milk traits in Saanen goats, so consideration of these factors could be used as selection criteria in dairy goat breeding.

Keywords: fertility, growth, milk yield, Saanen goats

Procedia PDF Downloads 160
904 Experiences of Homophobia, Machismo and Misogyny in Tourist Destinations: A Netnography in a Facebook Community of LGBT Backpackers

Authors: Renan De Caldas Honorato, Ana Augusta Ferreira De Freitas

Abstract:

Homosexuality is still criminalized in a large number of countries. In some of them, being gay or lesbian can even be punished by death. Added to this context, the experiences of social discrimination faced by the LGBT population, including homophobia, machismo, and misogyny, cause numerous restrictions throughout their lives. The possibility of confronting these challenges in moments that should be pleasant, such as on a trip or on vacation, is unpleasant, to say the least. In the current scenario of intensified use of social network sites (SNSs) to search for information, including in tourism, this work aims to analyze the sharing of tourist experiences involving confrontations with, and perceptions of, homophobia, machismo, and misogyny, as well as restrictions suffered in tourist destinations. The field of study is a community of LGBT backpackers on Facebook. Netnography was the core method adopted. A qualitative approach was conducted, and 463 publications posted from January to December 2020 were assessed through computer-mediated discourse analysis (CMDA). The results suggest that these publications exist to identify potential exposure to these offensive behaviors while traveling. Individuals affirm that the laws, favorable or not, concerning the LGBT public are not the only factors that define a place as safe or unsafe for gay travelers. The social situation of a country and its laws can be quite different, and this gap is the main target of these publications. The perception of others about the chosen destination is more important than knowing one's rights and the legal status of each country, and it also lessens uncertainty, even though travelers are never totally confident when choosing a travel destination. In certain circumstances, sexual orientation also needs to be protected from the judgment of hosts and residents. The systemic treatment of homophobic behavior and the construction of a more inclusive society are urgent.

Keywords: homophobia, hospitality, machismo, misogyny

Procedia PDF Downloads 185
903 Building Transparent Supply Chains through Digital Tracing

Authors: Penina Orenstein

Abstract:

In today’s world, particularly with COVID-19 as a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption, and stay competitive. The concept of supply chain mapping is one where every process and route is mapped in detail between each vendor and supplier. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it does not allow for much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains using a layered approach. Using these maps, the secondary goal is to address whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers, superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
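The drill-down approach described above amounts to a graph traversal from a focal company to all reachable suppliers. A minimal sketch, with entirely hypothetical supplier names (not from the study):

```python
# Hypothetical tiered supply map: each company maps to its direct suppliers.
tiers = {
    "OEM": ["Tier1_A", "Tier1_B"],
    "Tier1_A": ["Tier2_A"],
    "Tier1_B": ["Tier2_B"],
    "Tier2_A": ["Tier3_A"],
}

def upstream(company, graph):
    """Drill down from a focal company to every reachable supplier."""
    seen, stack = set(), [company]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(upstream("OEM", tiers)))
```

Attaching site coordinates to each node would then let the traversal result be superimposed on a geographical map, as the abstract proposes.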

Keywords: data mining, supply chain, empirical research, data mapping

Procedia PDF Downloads 170
902 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without handling the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for gathering information, especially for endangered and rare populations. To compare the analysis methods, four datasets obtained from the Dryad digital repository were used. Three matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two (Capwire and BayesN) for population estimation. The three matching algorithms showed different patterns of results. ETLM produced fewer unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not a surprise given the similarity of their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods.
The different clustering algorithm and error model of ETLM seem to lead to a stricter selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was a consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, meaning different capture rates between individuals. In those examples, the tolerance for homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An extended analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
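To illustrate how recapture counts drive abundance estimates in this class of methods, here is the classical Chapman-corrected Lincoln-Petersen estimator for a closed population. This is a deliberately simple textbook estimator, not Capwire or BayesN, and the example numbers are hypothetical:

```python
def lincoln_petersen(n1, n2, m2):
    """Chapman-corrected Lincoln-Petersen abundance estimate.

    n1: individuals genotyped (marked) in the first sampling session
    n2: size of the second sampling session
    m2: marked individuals recaptured in the second session
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Hypothetical example: 40 genotyped, 45 resampled, 9 recaptured.
print(lincoln_petersen(40, 45, 9))  # -> 187.6
```

Its sensitivity to m2 shows why genotyping errors that split one individual into two "unique" genotypes deflate recaptures and inflate the population estimate.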

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 137
901 Project Production Control (PPC) Implementation for an Offshore Facilities Construction Project

Authors: Muhammad Hakim Bin Mat Tasir, Erwan Shahfizad Hasidan, Hamidah Makmor Bakry, M. Hafiz B. Izhar

Abstract:

Every key performance indicator used to monitor a project’s construction progress emphasizes trade productivity or specific commodity run-down curves. Examples include welding productivity measured by the number of joints completed per day, the quantity of NDT (non-destructive testing) inspections per day, etc. This perspective is based on progress and productivity; however, it does not enable a system perspective of how we produce. This paper uses a project production system perspective, by which projects are a collection of production systems comprising the interconnected network of processes and operations that represent all the work activities needed to execute a project from start to finish. It also uses the five levels of production system optimization as a framework. The goal of the paper is to describe the application of Project Production Control (PPC) to control and improve the performance of several production processes associated with the fabrication and assembly of a Central Processing Platform (CPP) jacket, part of an offshore mega-project; more specifically, the fabrication and assembly of buoyancy tanks, which were identified as part of the critical path and required the highest demand for capacity. In total, seven buoyancy tanks were built, with a total estimated weight of 2,200 metric tons. These huge buoyancy tanks were designed to enable reverse launching and self-upending of the jacket, to be easily retractable, and to be reusable for the next project, ensuring sustainability. Results showed that an effective application of PPC not only positively impacted construction progress and productivity but also exposed sources of detrimental variability as the focus of continuous improvement practices. This approach augmented conventional project management practices, and the results had a high impact on construction scheduling, planning, and control.

Keywords: offshore, construction, project management, sustainability

Procedia PDF Downloads 53
900 Analysing the Moderating Effect of Customer Loyalty on Long Run Repurchase Intentions

Authors: John Akpesiri Olotewo

Abstract:

One of the controversies in the existing marketing literature is how to retain existing and new customers so that they have repurchase intentions in the long run; however, empirical answers to this question are scanty in existing studies. Thus, this study investigates the moderating effect of customer loyalty on long-run repurchase intentions in the telecommunication industry, using Lagos State and its environs. The study adopted a field survey research design, using a questionnaire to elicit responses from 250 respondents who were selected using random and stratified random sampling techniques from the telecommunication industry in Lagos State, Nigeria. The internal consistency of the research instrument was verified using Cronbach’s alpha; the result of 0.89 implies acceptable internal consistency of the survey instrument. The research hypotheses were tested using the Pearson product-moment correlation (PPMC), simple regression analysis, and inferential statistics with the aid of the Statistical Package for the Social Sciences, version 20.0 (SPSS). The study confirmed that customer satisfaction has a significant relationship with customer loyalty in the telecommunication industry; service quality has a significant relationship with customer loyalty to a brand; loyalty programs have a significant relationship with customer loyalty to a network operator in Nigeria; and customer loyalty has a significant effect on the long-run repurchase intentions of the customer. The study concluded that one of the determinants of the long-term profitability of a business entity is the long-run repurchase intentions of its customers, which hinge on the level of brand loyalty of the customer. Thus, it was recommended that service providers in Nigeria improve on factors like customer satisfaction, service quality, and loyalty programs in order to increase the loyalty of their customers to their brands, thereby increasing repurchase intentions.
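The two reliability and association statistics named above (Cronbach's alpha and the Pearson product-moment correlation) can be computed directly; a minimal sketch on synthetic survey data (not the study's actual responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=250)                        # one latent attitude per respondent
items = latent[:, None] + 0.3 * rng.normal(size=(250, 5))  # 5 noisy scale items

alpha = cronbach_alpha(items)                        # internal consistency
r = np.corrcoef(items[:, 0], items[:, 1])[0, 1]      # Pearson r between two items
```

With items built from a common latent score, alpha lands near the 0.89 region the abstract reports as "acceptable"; a value above roughly 0.7 is the conventional threshold.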

Keywords: customer loyalty, long run repurchase intentions, brands, service quality, customer satisfaction

Procedia PDF Downloads 229
899 Reimagining the Management of Telco Supply Chain with Blockchain

Authors: Jeaha Yang, Ahmed Khan, Donna L. Rodela, Mohammed A. Qaudeer

Abstract:

Traditional supply chain silos still exist today due to the difficulty of establishing trust between various partners and technological barriers across industries. Companies lose opportunities and revenue and inadvertently make poor business decisions, resulting in further challenges. Blockchain technology can bring a new level of transparency by sharing information on a distributed ledger in a decentralized manner that creates a basis of trust for business. Blockchain is a loosely coupled, hub-style communication network in which trading partners can work indirectly with each other for simpler integration, while working together through the orchestration of their supply chain operations under a coherent process that is developed jointly. A blockchain increases efficiencies, lowers costs, and improves interoperability to strengthen and automate the supply chain management process while all partners share the risk. The blockchain ledger is built to track the inventory lifecycle for supply chain transparency and keeps a journal of inventory movement for real-time reconciliation. State design patterns are used to capture the lifecycle (behavior) of inventory management as a state machine for a common, transparent, and coherent process, which creates an opportunity for trading partners to become more responsive to changes or improvements in process, reconcile discrepancies, and comply with internal governance and external regulations. It enables end-to-end, inter-company visibility at the unit level for more accurate demand planning, with better insight into order fulfillment and replenishment.
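The inventory lifecycle as a state machine with an append-only journal can be sketched as follows; the states and transitions here are hypothetical simplifications, not the paper's actual chaincode:

```python
from enum import Enum, auto

class InvState(Enum):
    CREATED = auto()
    SHIPPED = auto()
    RECEIVED = auto()
    SOLD = auto()

# Allowed transitions of the (hypothetical) inventory state machine;
# on a blockchain, each transition would be recorded as a ledger entry.
TRANSITIONS = {
    InvState.CREATED: {InvState.SHIPPED},
    InvState.SHIPPED: {InvState.RECEIVED},
    InvState.RECEIVED: {InvState.SOLD},
    InvState.SOLD: set(),
}

class InventoryItem:
    def __init__(self, sku):
        self.sku = sku
        self.state = InvState.CREATED
        self.journal = [InvState.CREATED]   # append-only movement journal

    def move(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.journal.append(new_state)
```

Rejecting illegal transitions is what lets all partners reconcile against one coherent process: the journal only ever contains valid lifecycle paths.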

Keywords: supply chain management, inventory traceability, perpetual inventory system, inventory lifecycle, blockchain, inventory consignment, supply chain transparency, digital thread, demand planning, Hyperledger Fabric

Procedia PDF Downloads 87
898 The Impact of PM-Based Regulations on the Concentration and Sources of Fine Organic Carbon in the Los Angeles Basin from 2005 to 2015

Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Sina Taghvaee, Constantinos Sioutas

Abstract:

A significant portion of PM₂.₅ mass concentration is carbonaceous matter (CM), which exists mainly in the form of organic carbon (OC). Ambient OC originates from a multitude of sources and plays an important role in global climate effects, visibility degradation, and human health. In this study, positive matrix factorization (PMF) was utilized to identify and quantify the long-term contribution of PM₂.₅ sources to the total OC mass concentration in central Los Angeles (CELA) and Riverside (a receptor site), using the chemical speciation network (CSN) database between 2005 and 2015, a period during which several state and local regulations on tailpipe emissions were implemented in the area. Our PMF analysis resolved five factors, including tailpipe emissions, non-tailpipe emissions, biomass burning, secondary organic aerosol (SOA), and local industrial activities, for both sampling sites. The contribution of vehicular exhaust emissions to the OC mass concentration decreased significantly from 3.5 µg/m³ in 2005 to 1.5 µg/m³ in 2015 (by about 58%) at CELA, and from 3.3 µg/m³ in 2005 to 1.2 µg/m³ in 2015 (by nearly 62%) at Riverside. Additionally, the SOA contribution to the total OC mass, showing higher levels at the receptor site, increased from 23% in 2005 to 33% and 29% in 2010 and 2015, respectively, in Riverside, whereas the corresponding contribution at the CELA site was 16%, 21%, and 19% during the same period. Biomass burning maintained an almost constant relative contribution over the whole period.
Moreover, while the adopted regulations and policies were very effective at reducing the contribution of tailpipe emissions, they have led to an overall increase in the fractional contributions of non-tailpipe emissions to total OC in CELA (about 14%, 28%, and 28% in 2005, 2010 and 2015, respectively) and Riverside (22%, 27% and 26% in 2005, 2010 and 2015), underscoring the necessity to develop equally effective mitigation policies targeting non-tailpipe PM emissions.
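PMF decomposes a species-by-sample concentration matrix into non-negative source contributions and source profiles, weighted by measurement uncertainty. As a simplified, uncertainty-free analogue (plain NMF rather than true PMF, on synthetic data), the factorization step can be sketched as:

```python
# Simplified stand-in for PMF: non-negative matrix factorization of a
# synthetic samples x species matrix built from 5 known sources.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
true_profiles = rng.random((5, 12))               # 5 sources x 12 species
contributions = rng.random((200, 5))              # 200 samples x 5 sources
X = contributions @ true_profiles + 0.01 * rng.random((200, 12))

model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)   # estimated source contributions per sample
H = model.components_        # estimated source profiles (species signatures)
```

Real PMF (e.g., EPA PMF) additionally minimizes an uncertainty-weighted objective and requires interpreting each resolved factor against known tracer species, as the study does for tailpipe, non-tailpipe, biomass burning, SOA, and industrial factors.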

Keywords: PM₂.₅, organic carbon, Los Angeles megacity, PMF, source apportionment, non-tailpipe emissions

Procedia PDF Downloads 195
897 Study of Silent Myocardial Ischemia in Type 2 Diabetic Males: Egyptian Experience

Authors: Ali Kassem, Yhea Kishik, Ali Hassan, Mohamed Abdelwahab

Abstract:

Introduction: Accelerated coronary and peripheral vascular atherosclerosis is one of the most common chronic complications of diabetes mellitus. A recent aspect of coronary artery disease in this condition is its silent nature. The aim of the work: To detect the prevalence of silent myocardial ischemia (SMI) in Upper Egypt type 2 diabetic males and to select the male diabetic population who should be screened for SMI. Patients and methods: 100 type 2 diabetic male patients with a negative history of angina or anginal-equivalent symptoms and 30 healthy controls were included. Full medical history and thorough clinical examination were done for all participants. Fasting and postprandial blood glucose levels, lipid profile, HbA1c, microalbuminuria, and C-reactive protein were measured for all participants. Resting ECG, trans-thoracic echocardiography, treadmill exercise ECG, and myocardial perfusion imaging (the non-invasive tests, NITs) were done for all participants, and patients positive for one or more NITs underwent coronary angiography. Results: Twenty-nine patients (29%) were positive for one or more NITs in the patient group, compared to only one case (3.3%) in the controls. After coronary angiography, 20 patients in the patient group were positive for significant coronary artery stenosis, while the one positive control refused angiography. There were statistically significant differences between the two groups regarding hypertension, dyslipidemia, obesity, and family history of DM and IHD, with higher levels of microalbuminuria, C-reactive protein, and total lipids in the patient group versus controls. According to coronary angiography, patients were subdivided into two subgroups: 20 positive for SMI (positive coronary angiography) and 80 negative for SMI (negative coronary angiography). No statistical difference regarding family history of DM or type of diabetic therapy was found between the two subgroups.
Yet smoking, hypertension, obesity, dyslipidemia, and family history of IHD were significantly higher in diabetics positive versus those negative for SMI. 90% of patients in the subgroup positive for SMI had two or more cardiac risk factors, while only two patients had one cardiac risk factor (10%). Uncontrolled DM was detected more often in patients positive for SMI. Diabetic complications were more prevalent in patients positive for SMI versus those negative for SMI. Most of the patients positive for SMI had DM of more than 5 years' duration. Resting ECG and resting echocardiography detected only 6 and 11, respectively, of the 20 positive cases in the group positive for SMI, compared to treadmill exercise ECG and myocardial perfusion imaging, which detected 16 and 18 cases, respectively. Conclusion: Type 2 diabetic male patients should be screened for SMI when aged above 50 years, when diabetes duration is more than 5 years, in the presence of two or more cardiac risk factors, and/or when suffering from one or more chronic diabetic complications. CRP is an important parameter for selecting type 2 diabetic male patients who should be screened for SMI. Non-invasive cardiac tests are reliable for screening for SMI in these patients in our locality.

Keywords: C-reactive protein, silent myocardial ischemia, stress tests, type 2 DM

Procedia PDF Downloads 379
896 Hybrid versus Cemented Fixation in Total Knee Arthroplasty: Mid-Term Follow-Up

Authors: Pedro Gomes, Luís Sá Castelo, António Lopes, Marta Maio, Pedro Mota, Adélia Avelar, António Marques Dias

Abstract:

Introduction: Total Knee Arthroplasty (TKA) has contributed to the improvement of patients' quality of life, although it has been associated with some complications, including component loosening and polyethylene wear. To prevent these complications, various fixation techniques have been employed. Hybrid TKA with a cemented tibial and cementless femoral component has shown favourable outcomes, although consensus is still lacking in the literature. Objectives: To evaluate the clinical and radiographic results of hybrid versus cemented TKA with an average 5-year follow-up and to analyse the survival rates. Methods: A retrospective study of 125 TKAs performed in 92 patients at our institution between 2006 and 2008, with a minimum follow-up of 2 years. The same prosthesis was used in all knees. Hybrid TKA fixation was performed in 96 knees, with a mean follow-up of 4.8±1.7 years (range, 2-8.3 years), and 29 TKAs received fully cemented fixation, with a mean follow-up of 4.9±1.9 years (range, 2-8.3 years). Selection for hybrid fixation was nonrandomized and based on femoral component fit. The Oxford Knee Score (OKS, 0-48) was used for clinical assessment, and the Knee Society Roentgenographic Evaluation Scoring System for radiographic outcome. The survival rate was calculated using the Kaplan-Meier method, with failures defined as revision of either the tibial or femoral component for aseptic failures and for all causes (aseptic and infection). Analysis of survivorship data was performed using the log-rank test. SPSS (v22) was used for statistical analysis. Results: The hybrid group consisted of 72 females (75%) and 24 males (25%), with a mean age of 64±7 years (range, 50-78 years). The preoperative diagnosis was osteoarthritis (OA) in 94 knees (98%), rheumatoid arthritis (RA) in 1 knee (1%), and posttraumatic arthritis (PTA) in 1 knee (1%). The fully cemented group consisted of 23 females (79%) and 6 males (21%), with a mean age of 65±7 years (range, 47-78 years).
The preoperative diagnosis was OA in 27 knees (93%) and PTA in 2 knees (7%). The Oxford Knee Scores were similar between the two groups (hybrid 40.3±2.8 versus cemented 40.2±3). The percentage of radiolucencies seen on the femoral side was slightly higher in the cemented group (20.7%) than in the hybrid group (11.5%, p=0.223). In the cemented group, there were significantly more Zone 4 radiolucencies than in the hybrid group (13.8% versus 2.1%, p=0.026). Revisions for all causes were performed in 4 of the 96 hybrid TKAs (4.2%) and 1 of the 29 cemented TKAs (3.5%). The reason for revision was aseptic loosening in 3 hybrid TKAs and 1 cemented TKA. Revision was performed for infection in 1 hybrid TKA. The hybrid group demonstrated a 7-year survival rate of 93% for all-cause failures and 94% for aseptic loosening. No significant difference in survivorship was seen between the groups for all-cause or aseptic failures. Conclusions: Hybrid TKA yields similar intermediate-term results and survival rates to fully cemented total knee arthroplasty and remains a viable option in knee joint replacement surgery.
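The Kaplan-Meier method used above can be sketched as follows; the event times and censoring flags here are illustrative, not the study's patient data:

```python
# Minimal Kaplan-Meier estimator sketch (illustrative data, not the study's).
# times: follow-up in years; events: 1 = revision (failure), 0 = censored.

def kaplan_meier(times, events):
    """Return (time, survival) pairs at each failure time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths > 0:
            survival *= (1 - deaths / n_at_risk)
            curve.append((t, survival))
        # remove all subjects (failed and censored) leaving at this time
        leaving = sum(1 for tt, _ in data if tt == t)
        n_at_risk -= leaving
        i += leaving
    return curve

times  = [1.0, 2.5, 3.0, 3.0, 4.0, 5.5, 6.0, 7.0]
events = [1,   0,   1,   0,   0,   1,   0,   0]
for t, s in kaplan_meier(times, events):
    print(f"S({t}) = {s:.3f}")
```

Censored observations (patients still unrevised at last follow-up) leave the risk set without counting as failures, which is why survival steps down only at revision times.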

Keywords: hybrid, survival rate, total knee arthroplasty, orthopaedic surgery

Procedia PDF Downloads 586
895 Applications of Digital Tools, Satellite Images and Geographic Information Systems in Data Collection of Greenhouses in Guatemala

Authors: Maria A. Castillo H., Andres R. Leandro, Jose F. Bienvenido B.

Abstract:

During the last 20 years, the globalization of economies, population growth, and the increase in the consumption of fresh agricultural products have generated greater demand for ornamentals, flowers, fresh fruits, and vegetables, mainly from tropical areas. This market situation has demanded greater competitiveness and control over production, with more efficient protected agriculture technologies, which provide greater productivity and make it possible to guarantee the required quality and quantity in a constant and sustainable way. Guatemala, located in the north of Central America, is one of the largest exporters of agricultural products in the region and exports fresh vegetables, flowers, fruits, ornamental plants, and foliage, most of which are grown in greenhouses. Although there are no official agricultural statistics on greenhouse production, several thesis works and congress reports have presented consistent estimates. A wide range of protection structures and roofing materials are used, from the most basic and simple ones for rain control to highly technical and automated structures connected with remote sensors for crop monitoring and control. With this breadth of technological models, it is necessary to analyze georeferenced data related to the cultivated area, the different existing models, and the covering materials, integrated with altitude, climate, and soil data. The georeferenced registration of production units, data collection with digital tools, the use of satellite images, and geographic information systems (GIS) provide reliable tools for elaborating more complete, agile, and dynamic information maps. This study details a methodology proposed for gathering georeferenced data on high protection structures (greenhouses) in Guatemala, structured in four phases: diagnosis of available information, definition of the geographic frame, selection of satellite images, and integration with a geographic information system (GIS).
It takes particular account of the current lack of complete data needed to build a reliable decision-making system; this gap is addressed through the proposed methodology. A summary of the results is presented for each phase, and finally, an evaluation with some improvements and tentative recommendations for further research is added. The main contribution of this study is to propose a methodology that makes it possible to reduce the gap in georeferenced data on protected agriculture in this specific area, where data is not generally available, and to provide data of better quality, traceability, accuracy, and certainty for strategic agricultural decision-making, applicable to other crops, production models, and similar or neighboring geographic areas.

Keywords: greenhouses, protected agriculture, GIS, Guatemala, satellite image, digital tools, precision agriculture

Procedia PDF Downloads 188
894 Analyzing Concrete Structures by Using Laser-Induced Breakdown Spectroscopy

Authors: Nina Sankat, Gerd Wilsch, Cassian Gottlieb, Steven Millar, Tobias Guenther

Abstract:

Laser-Induced Breakdown Spectroscopy (LIBS) is a combination of laser ablation and optical emission spectroscopy, which in principle can simultaneously analyze all elements of the periodic table. Materials can be analyzed in terms of chemical composition in a two-dimensional, time-efficient, and minimally destructive manner. These advantages predestine LIBS as a monitoring technique in the field of civil engineering. The decreasing service life of concrete infrastructure is a growing problem. A variety of intruding, harmful substances can damage the reinforcement or the concrete itself. To ensure a sufficient service life, regular monitoring of the structure is necessary. LIBS offers many applications for a successful examination of the condition of concrete structures, including the 2D evaluation of chlorine, sodium, and sulfur concentrations, the identification of carbonation depths, and the representation of the heterogeneity of concrete. LIBS obtains this information using a pulsed laser with pulse energies of a few mJ, which is focused on the surface of the analyzed specimen; only optical access is needed. Because of the high power density (some GW/cm²), a minimal amount of material is vaporized and transformed into a plasma. This plasma emits light depending on the chemical composition of the vaporized material. By analyzing the emitted light, information is gained for every measurement point. The chemical composition of the scanned area is visualized in a 2D map with spatial resolutions down to 0.1 mm x 0.1 mm. These 2D maps can be converted into classic depth profiles, as typically seen in the chloride concentration results provided by chemical analysis such as potentiometric titration.
However, the 2D visualization offers many advantages, such as illustrating chloride-carrying cracks, directly imaging the carbonation depth, and, in general, allowing the separation of the aggregates from the cement paste. By calibrating the LIBS system, not only qualitative but quantitative results can be obtained. These quantitative results can also be referenced to the cement paste while excluding the aggregates. An additional advantage of LIBS is its mobility. With the mobile system located at BAM, on-site measurements are feasible. The mobile LIBS system has already been used to obtain chloride, sodium, and sulfur concentrations on-site at parking decks, bridges, and sewage treatment plants, even under harsh conditions such as ongoing construction work or rough weather. All these prospects make LIBS a promising method for securing the integrity of infrastructure in a sustainable manner.
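The conversion from a 2D element map to a classic depth profile amounts to averaging each row of constant depth; a minimal sketch with a hypothetical chloride map (not BAM measurement data):

```python
import numpy as np

# Hypothetical 2D chloride intensity map: rows = depth steps (0.1 mm each),
# columns = lateral positions along the scanned surface.
chloride_map = np.array([
    [0.9, 0.8, 1.0, 0.9],   # near-surface: high chloride
    [0.6, 0.5, 0.7, 0.6],
    [0.3, 0.2, 0.4, 0.3],
    [0.1, 0.1, 0.1, 0.1],   # deeper: low chloride
])

step_mm = 0.1                           # spatial resolution of the scan
depths = np.arange(chloride_map.shape[0]) * step_mm
profile = chloride_map.mean(axis=1)     # lateral average at each depth

for d, c in zip(depths, profile):
    print(f"{d:.1f} mm: {c:.2f}")
```

In practice, one would first mask out aggregate pixels so the profile is referenced to the cement paste, as the abstract describes.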

Keywords: concrete, damage assessment, harmful substances, LIBS

Procedia PDF Downloads 173
893 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption

Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko

Abstract:

Mafate is a natural circus in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being tested in this area, combining photovoltaic production with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy flows leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumption of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with optimal use of the energy in the batteries, while satisfying the users' planning requirements as far as possible. In the discrete model, the parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, whose distributions can be imposed or estimated from samples of real observations or from samples of optimal discrete equilibrium solutions.
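The core balance the optimization enforces (solar plus battery discharge covering consumption over 24 hourly steps) can be illustrated with a greedy toy dispatch; this is a sketch of the constraint structure only, not the authors' linear programming model, and the capacity, efficiency, and profiles are made-up values:

```python
# Toy battery self-consumption dispatch over 24 hourly steps.

CAPACITY = 10.0   # kWh, hypothetical battery size
EFF = 0.9         # one-way charging efficiency (assumed)

def dispatch(solar, load, soc=5.0):
    """Return (soc_trace, unmet) after greedily using solar, then battery."""
    soc_trace, unmet = [], 0.0
    for s, l in zip(solar, load):
        surplus = s - l
        if surplus >= 0:                       # charge with excess solar
            soc = min(CAPACITY, soc + surplus * EFF)
        else:                                  # discharge to cover deficit
            need = -surplus
            used = min(soc, need)
            soc -= used
            unmet += need - used               # demand not covered
        soc_trace.append(soc)
    return soc_trace, unmet

solar = [0]*6 + [1, 2, 3, 4, 4, 4, 4, 3, 2, 1] + [0]*8   # kWh per hour
load  = [0.5]*24
trace, unmet = dispatch(solar, load)
print(f"final SOC = {trace[-1]:.2f} kWh, unmet demand = {unmet:.2f} kWh")
```

An LP or MILP formulation would replace this greedy rule with decision variables for charge/discharge at each hour, optimized jointly against the inhabitants' planning wishes.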

Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimations of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity.

Procedia PDF Downloads 103
892 Functionalized Nanoporous Ceramic Membranes for Electrodialysis Treatment of Harsh Wastewater

Authors: Emily Rabe, Stephanie Candelaria, Rachel Malone, Olivia Lenz, Greg Newbloom

Abstract:

Electrodialysis (ED) is a well-developed technology for ion removal in a variety of applications. However, many industries generate harsh wastewater streams that are incompatible with traditional ion exchange membranes. Membrion® has developed novel ceramic-based ion exchange membranes (IEMs) offering several advantages over traditional polymer membranes: high performance in low pH, chemical resistance to oxidizers, and a rigid structure that minimizes swelling. These membranes are synthesized with our patented silane-based sol-gel techniques. The pore size, shape, and network structure are engineered through a molecular self-assembly process where thermodynamic driving forces are used to direct where and how pores form. Either cationic or anionic groups can be added within the membrane nanopore structure to create cation- and anion-exchange membranes. The ceramic IEMs are produced on a roll-to-roll manufacturing line with low-temperature processing. Membrane performance testing is conducted using in-house permselectivity, area-specific resistance, and ED stack testing setups. Ceramic-based IEMs show comparable performance to traditional IEMs and offer some unique advantages. Long exposure to highly acidic solutions has a negligible impact on ED performance. Additionally, we have observed stable performance in the presence of strong oxidizing agents such as hydrogen peroxide. This stability is expected, as the ceramic backbone of these materials is already in a fully oxidized state. This data suggests ceramic membranes, made using sol-gel chemistry, could be an ideal solution for acidic and/or oxidizing wastewater streams from processes such as semiconductor manufacturing and mining.
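The permselectivity testing mentioned above is commonly reported as the ratio of a measured membrane potential to the ideal Nernst potential for the concentration pair used; a simplified sketch with illustrative numbers (not Membrion test data, and ignoring activity-coefficient corrections):

```python
import math

# Apparent permselectivity from a static membrane-potential measurement.
R, F, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, K

def nernst_mv(c_high, c_low):
    """Ideal potential (mV) for a perfectly selective monovalent IEM."""
    return (R * T / F) * math.log(c_high / c_low) * 1000

def permselectivity(measured_mv, c_high=0.5, c_low=0.1):
    return measured_mv / nernst_mv(c_high, c_low)

# e.g. 38 mV measured across a 0.5 M / 0.1 M KCl cell
print(f"{permselectivity(38.0):.2f}")
```

A value near 1 indicates near-ideal counter-ion selectivity; swelling or degradation in harsh streams typically shows up as a drop in this ratio.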

Keywords: ion exchange, membrane, silane chemistry, nanostructure, wastewater

Procedia PDF Downloads 81
891 Detecting Critical Thinking Skills in Written Text Analysis: The Use of Artificial Intelligence in Text Analysis vs. ChatGPT

Authors: Lucilla Crosta, Anthony Edwards

Abstract:

Companies and the marketplace nowadays struggle to find employees with skills adequate to the anticipated growth of their businesses. At least half of workers will need to undertake some form of up-skilling in the next five years in order to remain aligned with the demands of the market. To meet these challenges, there is a clear need to explore the potential uses of AI (artificial intelligence) based tools in assessing the transversal skills (critical thinking, communication, and soft skills of different types in general) of workers and adult students, while empowering them to develop those same skills in a reliable, trustworthy way. Companies seek workers with key transversal skills that can make a difference between workers now and in the future. Critical thinking, however, seems to be one of the most important of these skills, bringing unexplored ideas and company growth in business contexts. What employers have been reporting for years now is that this skill is lacking in the majority of workers and adult students, and this is particularly visible through their writing. This paper investigates how critical thinking and communication skills are currently developed in higher education environments through the use of AI tools at postgraduate level. It analyses the use of two branches of AI, namely machine learning on big data and neural network analysis. It also examines the potential effects of acquiring these skills through AI tools on employability. The paper draws on researchers and studies at both national (Italy and UK) and international level in higher education. The issues associated with the development and use of one specific AI tool, Edulai, are examined in detail. Finally, comparisons are made between these tools and the more recent phenomenon of ChatGPT, and their respective strengths and drawbacks are analysed.

Keywords: critical thinking, artificial intelligence, higher education, soft skills, ChatGPT

Procedia PDF Downloads 101
890 Molecular Dynamics Simulation Study of the Influence of Potassium Salts on the Adsorption and Surface Hydration Inhibition Performance of Hexane-1,6-Diamine Clay Mineral Inhibitor onto Sodium Montmorillonite

Authors: Justine Kiiza, Xu Jiafang

Abstract:

The world’s demand for energy is increasing rapidly due to population growth and a reduction in shallow conventional oil and gas reservoirs, leading to deeper and mostly unconventional reserves like shale oil and gas. Most shale formations contain a large amount of expansive sodium montmorillonite (Na-Mnt). Because of its high water adsorption and hydration, when the drilling fluid filtrate enters a formation with high Mnt content, the wellbore wall can become unstable through hydration and swelling, resulting in shrinkage, sticking, balling, lost time, etc., and, in extreme cases, well collapse, causing complex downhole accidents and high well costs. Recently, polyamines like hexane-1,6-diamine (HEDA) have been used as typical drilling fluid shale inhibitors to minimize and/or curb clay mineral swelling and maintain wellbore stability. However, their application is limited to shallow drilling due to their sensitivity to elevated temperature and pressure. Inorganic potassium salts, i.e., KCl, have long been applied to restrict the hydration expansion of shale formations in deep wells, but their use is limited by toxicity. Understanding the adsorption behaviour of HEDA on Na-Mnt surfaces in the presence of organic potassium salts, e.g., HCO₂K, the main component of organo-salt drilling fluids, is of great significance in explaining the inhibitory performance of polyamine inhibitors. Molecular dynamics (MD) simulations were applied to investigate the influence of HCO₂K and KCl on the adsorption mechanism of HEDA on the Na-Mnt surface. Simulation results showed that HEDA adsorbs mainly through its terminal amine groups, with a flat-lying hydrophobic alkyl chain. Its interaction with the clay surface decreased the number of H-bonds between H₂O and the clay and neutralized the negative charge of the Mnt surface, thus weakening the surface hydration ability of Na-Mnt.
The introduction of HCO₂K greatly improved the inhibition ability: the coordination of interlayer ions with H₂O decreased as they were replaced by K⁺, H₂O-HCOO⁻ coordination reduced H₂O-Mnt interactions, and the mobility and transport capability of H₂O molecules were further decreased. KCl, by contrast, showed little inhibition ability and even promoted hydration over time. HCO₂K can therefore be used as an alternative to toxic KCl for offshore drilling, with the maximum concentration noted in this study being 1.65 wt%. This study provides a theoretical elucidation of the inhibition mechanism and adsorption characteristics of the HEDA inhibitor on Na-Mnt surfaces in the presence of K⁺ salts and may provide more insight into the evaluation, selection, and molecular design of new high-performance clay-swelling-inhibiting WBDF systems used in complex offshore oil and gas drilling well sections.

Keywords: shale, hydration, inhibition, polyamines, organo-salts, simulation

Procedia PDF Downloads 37
889 Application of a Confirmatory Composite Model for Assessing the Extent of Agricultural Digitalization: A Case of Proactive Land Acquisition Strategy (PLAS) Farmers in South Africa

Authors: Mazwane S., Makhura M. N., Ginege A.

Abstract:

Digitalization in South Africa has received considerable attention from policymakers. The South African government's support for the development of the digital economy has been demonstrated through the enactment of various national policies and strategies. This study sought to develop an index of agricultural digitalization by applying confirmatory composite analysis (CCA). Another aim was to determine the factors that affect the development of digitalization on PLAS farms. Data on the indicators of the three dimensions of digitalization were collected from 300 Proactive Land Acquisition Strategy (PLAS) farms in South Africa using semi-structured questionnaires. CCA was employed to reduce the items into three digitalization dimensions and, ultimately, a digitalization index. Standardized digitalization index scores were extracted and fitted to a linear regression model to determine the factors affecting digitalization development. The results revealed that the model shows practical validity and can be used to measure digitalization development, as the measures of fit (geodesic distance, standardized root mean square residual, and squared Euclidean distance) were all below their respective 95% quantiles of bootstrap discrepancies (HI95 values). Therefore, digitalization is an emergent variable that can be measured using CCA. The average level of digitalization on PLAS farms was 0.2 and varied significantly across provinces. The factors that significantly influence digitalization development on PLAS land reform farms were age, gender, farm type, network type, and cellular data type. This should enable researchers and policymakers to understand the level of digitalization and patterns of development, as well as correctly attribute digitalization development to the contributing factors.
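The two-step pipeline described (composite index from dimension scores, then a linear regression on the standardized index) can be sketched as follows; the weights and covariates are illustrative assumptions, since the paper estimates composite weights via CCA, which is not reproduced here:

```python
import numpy as np

# Sketch: build a composite digitalization index from three dimension
# scores, then regress the standardized index on farm attributes.
rng = np.random.default_rng(0)
n = 300
dims = rng.uniform(0, 1, size=(n, 3))   # three digitalization dimensions
weights = np.array([0.5, 0.3, 0.2])     # assumed composite weights
index = dims @ weights                  # digitalization index per farm

# Standardize the index before regression, as in the paper.
z = (index - index.mean()) / index.std()

# OLS on hypothetical covariates: intercept, farmer age, gender dummy.
X = np.column_stack([np.ones(n),
                     rng.uniform(25, 70, n),
                     rng.integers(0, 2, n)])
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
print("OLS coefficients:", beta)
```

With random covariates the coefficients are near zero by construction; on real data, they would quantify how age, gender, farm type, etc. shift the index.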

Keywords: agriculture, digitalization, confirmatory composite model, land reform, proactive land acquisition strategy, South Africa

Procedia PDF Downloads 52
888 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria

Authors: Rinat Arbel-Goren, Joel Stavans

Abstract:

Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. As a model system, we study the iron homeostasis network in Escherichia coli, in which the RyhB small RNA regulates the expression of various targets. Using fluorescent reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise, through a feed-forward loop configuration, buffers the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions through the decoupling of target fluctuations in the same cell rather than through an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation, in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.

Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript

Procedia PDF Downloads 159
887 Human-Machine Cooperation in Facial Comparison Based on Likelihood Scores

Authors: Lanchi Xie, Zhihui Li, Zhigang Li, Guiqiang Wang, Lei Xu, Yuwen Yan

Abstract:

Image-based facial features can be classified into category recognition features and individual recognition features. Current automated face recognition systems extract a feature vector of a specific dimension from a facial image according to their pre-trained neural network. However, to improve the efficiency of parameter calculation, such algorithms generally reduce image detail by pooling. This operation overlooks the details of greatest concern to forensic experts. In our experiment, we adopted a variety of deep-learning-based face recognition algorithms and compared a large number of naturally collected face images with known frontal ID photos of the same persons. Downscaling and manual handling were performed on the test images. The results confirmed that facial recognition algorithms based on deep learning detect structural and morphological information and rarely focus on specific markers such as stains and moles. Overall performance, the distributions of genuine and impostor scores, and likelihood ratios were tested to evaluate the accuracy of the biometric systems and the forensic experts. Experiments showed that the biometric systems were skilled at distinguishing category features, while forensic experts were better at discovering the individual features of human faces. In the proposed approach, fusion was performed at the score level. At the specified false accept rate, the framework achieved a lower false reject rate. This paper contributes to improving the interpretability of objective methods of facial comparison and provides a novel method for human-machine collaboration in this field.
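Score-level fusion of a machine score with a human rating is often done by normalizing each score range and taking a weighted sum; a minimal sketch, where the scores, the 0.6/0.4 weights, and the 0.5 threshold are all illustrative assumptions rather than the paper's fitted values:

```python
import numpy as np

# Sketch of score-level fusion between an automated system and a human
# examiner: min-max normalize each score set, then combine linearly.

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(machine, human, w_machine=0.6):
    return w_machine * minmax(machine) + (1 - w_machine) * minmax(human)

machine = [0.2, 0.9, 0.4, 0.7]   # algorithm similarity scores
human   = [3, 9, 2, 8]           # examiner ratings on a 1-9 scale

fused = fuse(machine, human)
# Decide at a threshold chosen to hit a specified false accept rate.
decisions = fused >= 0.5
print(fused, decisions)
```

In an operational setting, the threshold would be calibrated on the impostor score distribution so that the false accept rate is fixed before the false reject rate is measured.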

Keywords: likelihood ratio, automated facial recognition, facial comparison, biometrics

Procedia PDF Downloads 123
886 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that are occurring or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some of the consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. In the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, a corpus of more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the Petri net formalism has been used. We deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules allowed us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
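The token-firing semantics underlying the evacuation models can be illustrated with a toy Petri net: places hold tokens (persons in rooms), and transitions move one token at a time. This is a hand-rolled sketch of the formalism only, not the PIPE GSPN models; the room layout is made up:

```python
# Minimal Petri net firing sketch: two rooms and an exit.
places = {"room_a": 3, "room_b": 2, "exit": 0}

# Each transition moves one token from its input place to its output place.
transitions = [("room_a", "room_b"), ("room_b", "exit")]

def enabled(t):
    src, _ = t
    return places[src] > 0

def fire(t):
    src, dst = t
    places[src] -= 1
    places[dst] += 1

steps = 0
while any(enabled(t) for t in transitions):
    for t in transitions:     # fire every enabled transition once per step
        if enabled(t):
            fire(t)
    steps += 1

print(places, steps)
```

A GSPN additionally attaches stochastic firing delays to transitions, which is what allows PIPE to compute average room occupancies over time rather than a single deterministic trace.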

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 194
885 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D) or for further examination, exploration, research, and inspection. Surveying and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies, which are also time-consuming and labor-intensive and deliver less precision with limited data. In comparison, the advanced technique saves manpower and provides more precise output with a wide variety of data sets. In this experiment, aerial photogrammetry is used: a UAV flies over an area, captures geocoded images, and a three-dimensional model (3D model) is built. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, from which we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto; the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the surveyed area. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not exceed the survey accuracy limits set by the concerned institutions.
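The ground sampling distance mentioned among the flight parameters follows the standard photogrammetric relation between sensor, lens, and altitude; a quick sketch with hypothetical camera parameters (not the survey's actual UAV):

```python
# GSD (cm/px) = (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)
# Standard photogrammetric relation; the camera values below are hypothetical.

def gsd_cm(sensor_width_mm, focal_length_mm, altitude_m, image_width_px):
    return (sensor_width_mm * altitude_m * 100) / (focal_length_mm * image_width_px)

# e.g. a 13.2 mm wide sensor, 8.8 mm lens, 5472 px image width, 100 m altitude
print(f"{gsd_cm(13.2, 8.8, 100, 5472):.2f} cm/px")
```

Flying lower or using a longer focal length shrinks the GSD, i.e., each pixel covers less ground and the map resolves finer detail.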

Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry

Procedia PDF Downloads 13
884 Investigation of Different Machine Learning Algorithms in Large-Scale Land Cover Mapping within the Google Earth Engine

Authors: Amin Naboureh, Ainong Li, Jinhu Bian, Guangbin Lei, Hamid Ebrahimy

Abstract:

Large-scale land cover mapping has become a new challenge in the land change and remote sensing fields because it involves a large volume of data. Moreover, selecting the right classification method is quite difficult, especially when there are different types of landscapes in the study area. This paper compares the performance of different machine learning (ML) algorithms for generating a land cover map of the China-Central Asia-West Asia Corridor, which is considered one of the main parts of the Belt and Road Initiative (BRI). The cloud-based Google Earth Engine (GEE) platform was used to generate a land cover map of the study area from Landsat-8 images (2017) by applying three frequently used ML algorithms: random forest (RF), support vector machine (SVM), and artificial neural network (ANN). The selected ML algorithms were trained and tested using reference data obtained from the MODIS yearly land cover product and very high-resolution satellite images. The findings illustrate that, among the three algorithms, RF, with 91% overall accuracy, gave the best result in producing a land cover map for the corridor, whereas ANN showed the worst result, with 85% overall accuracy. The strong performance of GEE in applying different ML algorithms and handling a huge volume of remotely sensed data in the present study shows that it could also help researchers generate reliable long-term land cover change maps. The findings of this research are of great importance for decision-makers and BRI authorities in strategic land use planning.
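Overall accuracy, the metric used above to rank RF against ANN, is simply the fraction of validation pixels whose predicted class matches the reference class; a minimal sketch with illustrative labels (not the study's MODIS-derived validation set):

```python
# Overall accuracy from reference vs. predicted land cover labels.

def overall_accuracy(reference, predicted):
    correct = sum(r == p for r, p in zip(reference, predicted))
    return correct / len(reference)

reference = ["forest", "water", "urban", "crop", "forest", "water"]
rf_pred   = ["forest", "water", "urban", "crop", "crop",   "water"]
ann_pred  = ["forest", "urban", "urban", "crop", "crop",   "water"]

print(f"RF:  {overall_accuracy(reference, rf_pred):.2%}")
print(f"ANN: {overall_accuracy(reference, ann_pred):.2%}")
```

For class-imbalanced maps, per-class producer's and user's accuracies (from the confusion matrix) are usually reported alongside this single number.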

Keywords: land cover, google earth engine, machine learning, remote sensing

Procedia PDF Downloads 110
883 Impact of Charging PHEV at Different Penetration Levels on Power System Network

Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat

Abstract:

Plug-in hybrid electric vehicles (PHEVs) have gained immense popularity in recent years. A PHEV offers numerous advantages compared to a conventional internal-combustion engine (ICE) vehicle. Millions of PHEVs are estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., feeder, line, and transformer overloads and voltage drops. Nevertheless, an improper PHEV data model used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street due to its requirements for physical infrastructure readiness and technological advancement. As a first step, it is best to study the impact of charging PHEVs based on real vehicle travel data from the National Household Travel Survey (NHTS) and at present charging rates. Due to the current lack of street charging stations, charging PHEVs at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of charging PHEVs on power system networks, considering a huge number of PHEV samples with their travel data patterns. A Vehicle Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system, which represents a portion of the American Electric Power system (Midwestern US). A normalization technique is used to correspond to real-time loads at all buses. Results from the study indicate that charging PHEVs using opportunity charging will have significant impacts on power system networks, especially when larger battery capacities (kWh) are used and at higher penetration levels.
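Building a charging load profile from per-vehicle travel data amounts to superimposing each vehicle's charging window onto an hourly grid; a minimal sketch, where the arrival times, energy needs, and 3.3 kW home charging rate are illustrative assumptions rather than NHTS records:

```python
# Sketch of building a Vehicle Charging Load Profile (VCLP): each vehicle
# starts charging on arrival home and draws a fixed rate until full.

CHARGE_KW = 3.3  # assumed home charging rate, kW (1 kWh per hour-step at 1 h)

def vclp(vehicles, hours=24):
    """vehicles: list of (arrival_hour, energy_needed_kwh). Returns kW per hour."""
    profile = [0.0] * hours
    for arrival, energy in vehicles:
        h, remaining = arrival, energy
        while remaining > 0 and h < hours:
            draw = min(CHARGE_KW, remaining)
            profile[h] += draw
            remaining -= draw
            h += 1
    return profile

fleet = [(18, 8.0), (19, 6.6), (17, 10.0)]   # evening arrivals, kWh to refill
profile = vclp(fleet)
peak = max(profile)
print(f"peak load {peak:.1f} kW at hour {profile.index(peak)}")
```

Even this toy fleet shows the uncoordinated-charging effect the paper studies: arrivals cluster in the evening, so the aggregate profile peaks on top of the existing residential evening load.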

Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile

Procedia PDF Downloads 279
882 Toxic Masculinity as Dictatorship: Gender and Power Struggles in Tomás Eloy Martínez's Novels

Authors: Mariya Dzhyoyeva

Abstract:

In the present paper, I examine manifestations of toxic masculinity in the novels of Tomás Eloy Martínez, a post-Boom author, journalist, literary critic, and one of the representatives of the Argentine writing diaspora. I focus on the analysis of Martínez's characters that display hypermasculine traits in order to define the relationship between toxic masculinity and power, including the power of authorship, and violence as they are represented in his novels. The analysis reveals a complex network in which gender, power, and violence are intertwined and influence and modify each other. As the author exposes toxic masculine behaviors that generate violence, he seeks to undermine them. Departing from M. Kimmel's idea of masculinity as homophobia, I examine how Martínez "outs" his characters by incorporating into the narrative secret, privileged sources that provide alternative accounts of their otherwise hypermasculine lives. These background stories expose their "weaknesses," both physical and mental, and thereby feminize them in their own eyes. In a similar way, the toxic masculinity of the fictional male author, who wields his power by abusing the written word as he abuses the female character in the story, is exposed as a complex of insecurities accumulated by the character as a result of his childhood trauma. The artistic technique that Martínez uses to condemn authoritarian male behavior is accessing the character's subjectivity and subverting it through a multiplicity of identities. Martínez takes over the character's "I" and turns it into a host of pronouns with a constantly shifting point of reference, which distorts not only notions of gender but also the very notion of identity. In doing so, he takes the character's affirmation of masculinity to the limit, where the very idea of it becomes unsustainable. Viewed in the context of Martínez's own exilic story, the condemnation of toxic masculine power turns into a condemnation of dictatorship and authoritarianism.

Keywords: gender, masculinity, toxic masculinity, authoritarianism, Argentine literature, Martínez

Procedia PDF Downloads 61
881 Comparison of Two Neural Networks to Model Margarine Age and Predict Shelf-Life Using MATLAB

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

The present study aimed at developing and comparing two neural-network-based predictive models to predict the shelf-life/product age of South African margarine using free fatty acid (FFA), water droplet size (D3.3), water droplet distribution (e-sigma), moisture content, peroxide value (PV), anisidine value (AnV), and total oxidation (totox) value as input variables. Brick margarine products with ages ranging from fresh (week 0) to week 47 were sourced; the products had been stored at 10 and 25 °C and were characterized. JMP and MATLAB models to predict shelf-life/margarine age were developed, and their performances were compared. The key performance indicators used to evaluate the models were the correlation coefficient (CC), root mean square error (RMSE), and mean absolute percentage error (MAPE) relative to the actual data. The MATLAB-developed model performed better on all three indicators: its correlation coefficient was 99.86% versus 99.74% for the JMP model, its RMSE was 0.720 compared to 1.005, and its MAPE was 7.4% compared to 8.571%. The MATLAB model was therefore selected as the most accurate, and the number of hidden neurons/nodes was then optimized to develop a single predictive model. The optimized MATLAB model with 10 hidden neurons outperformed the models with 1 and 5 hidden neurons. The developed models can be used by margarine manufacturers, food research institutions, and researchers to predict shelf-life/margarine product age, optimize the addition of antioxidants, extend the shelf-life of products, and proactively troubleshoot problems related to changes that affect margarine shelf-life, without conducting expensive trials.
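The modelling setup described above can be sketched in a few lines. This is an illustrative Python analogue, not the authors' MATLAB or JMP code: a single-hidden-layer regression network with 10 neurons (matching the optimized configuration reported), trained here on synthetic stand-in data for the seven named inputs, with RMSE and MAPE computed as in the abstract's performance comparison.

```python
# Minimal sketch (assumed analogue of the paper's MATLAB model): predict
# margarine age in weeks from FFA, D3.3, e-sigma, moisture, PV, AnV, totox.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 300
# Columns stand in for: FFA, D3.3, e-sigma, moisture, PV, AnV, totox
X = rng.uniform(0.0, 1.0, size=(n, 7))
# Synthetic "age" that grows with the oxidation markers, plus noise
age_weeks = 47 * (0.4 * X[:, 0] + 0.3 * X[:, 4] + 0.3 * X[:, 6]) + rng.normal(0, 1, n)

X_s = StandardScaler().fit_transform(X)

# One hidden layer of 10 neurons, as in the optimized model in the abstract
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_s, age_weeks)

pred = model.predict(X_s)
rmse = float(np.sqrt(np.mean((pred - age_weeks) ** 2)))
# Guard small denominators for fresh (near week-0) samples
mape = float(np.mean(np.abs(pred - age_weeks) / np.maximum(np.abs(age_weeks), 1.0))) * 100
print(f"RMSE = {rmse:.3f}, MAPE = {mape:.1f}%")
```

The same key performance indicators (RMSE, MAPE, and a correlation coefficient) can then be compared across candidate hidden-layer sizes to reproduce the 1/5/10-neuron comparison described in the study.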

Keywords: margarine shelf-life, predictive modelling, neural networks, oil oxidation

Procedia PDF Downloads 188
880 Machine Learning-Assisted Selective Emitter Design for Solar Thermophotovoltaic Systems

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
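The random-forest-plus-genetic-algorithm workflow described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the true objective would come from electromagnetic simulation of the SiC/W/SiO2/W stack, so a hypothetical analytic `toy_selectivity` function stands in here, and the layer-thickness bounds and GA parameters are assumptions chosen only to make the example self-contained.

```python
# Sketch: random-forest surrogate for emitter "selectivity" as a function of
# four layer thicknesses (SiC/W/SiO2/W), then a simple genetic algorithm that
# searches thickness space using the surrogate as its fitness function.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def toy_selectivity(t):
    # Hypothetical stand-in objective (the real one is an EM simulation):
    # a smooth bump peaking at an assumed optimal thickness combination.
    target = np.array([100.0, 20.0, 150.0, 60.0])  # nm, illustrative only
    return np.exp(-np.sum(((t - target) / 80.0) ** 2, axis=-1))

# 1) Random-forest design step: learn selectivity from sampled designs
X = rng.uniform(10, 300, size=(500, 4))            # thicknesses in nm
y = toy_selectivity(X)
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# 2) Genetic-algorithm optimization step, scored by the surrogate
pop = rng.uniform(10, 300, size=(60, 4))
for _ in range(40):
    fitness = surrogate.predict(pop)
    parents = pop[np.argsort(fitness)[::-1][:20]]  # selection: keep top 20
    kids = []
    for _ in range(40):
        a, b = parents[rng.integers(0, 20, 2)]
        child = np.where(rng.random(4) < 0.5, a, b)  # uniform crossover
        child = child + rng.normal(0, 5, 4)          # Gaussian mutation
        kids.append(np.clip(child, 10, 300))
    pop = np.vstack([parents, kids])

best = pop[np.argmax(surrogate.predict(pop))]
print("best layer thicknesses (nm):", best.round(1))
```

The design choice mirrors the abstract's division of labour: the forest captures the complex thickness-to-selectivity mapping from a modest number of evaluated designs, while the genetic algorithm exploits that cheap surrogate to explore many candidate stacks without rerunning the expensive objective.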

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 56