Search results for: back propagation neural network model
18999 Loop Heat Pipe Two-Phase Heat Transports: Guidelines for Technology Utilization
Authors: Triem T. Hoang
Abstract:
Loop heat pipes (LHPs) are two-phase capillary-pumped heat transports. An appropriate working fluid is selected for the intended application temperature range. A closed loop is evacuated to a high vacuum, partially back-filled with the working fluid, and then hermetically sealed under the fluid's own pressure. Heat from a heat source conducts through the evaporator casing to vaporize liquid on the outer surface of the wick structure inside the evaporator. The generated vapor is compelled to vent out of the evaporator and into the vapor line for transport to the condenser assembly. There, heat is removed and rejected to a heat sink to condense the vapor back to liquid. The liquid exits the condenser and travels in the liquid line to return to the evaporator, completing the cycle. The circulation of fluid, and thus the heat transport in the LHP, is accomplished entirely by capillary action. The LHP contains no mechanical moving parts to wear out or break down and therefore possesses reliability and a long life even without maintenance. In this paper, the author not only introduces the LHP technology in simple terms to those who are not familiar with it but also provides the technical information that potential users need for the proper design and analysis of an LHP system.
Keywords: two-phase heat transfer, loop heat pipe, capillary pumped technology, thermal-fluid modeling
Procedia PDF Downloads 140
18998 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving
Authors: Svenja Pieritz, Jakab Pilaszanovich
Abstract:
Collaborative Problem Solving (CPS) is recognized as one of the most important skills of the 21st century, with a potential impact on education, job selection, and collaborative systems design. Therefore, CPS has been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge of evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of existing tests use a questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: firstly, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people; secondly, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: an environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The corresponding statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. This Bayesian network synthesizes evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidence collected through in-game actions, the participants' chat features and the CPS self-evaluation metrics. These results give an indication of which game mechanics can best describe CPS evaluation. Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex, socio-cognitive information on actions and communication. This tool permits further investigations of the effects of group constellations and personality in collaborative problem-solving.
Keywords: bayesian network, collaborative problem solving, game-based assessment, natural language processing
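To illustrate the kind of evidence synthesis described above, the following minimal Python sketch (using the pgmpy library) builds a toy Bayesian network in which a latent CPS sub-skill explains one in-game action and one chat feature; the variable names, structure and probabilities are illustrative assumptions, not the authors' actual model.

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Toy network: a latent "coordination" sub-skill explains one game action and one chat feature.
    model = BayesianNetwork([("coordination", "shares_resource"),
                             ("coordination", "chat_plans_ahead")])
    model.add_cpds(
        TabularCPD("coordination", 2, [[0.5], [0.5]]),
        TabularCPD("shares_resource", 2, [[0.8, 0.3], [0.2, 0.7]],
                   evidence=["coordination"], evidence_card=[2]),
        TabularCPD("chat_plans_ahead", 2, [[0.9, 0.4], [0.1, 0.6]],
                   evidence=["coordination"], evidence_card=[2]),
    )

    # Observing both evidence streams updates the belief about the latent sub-skill.
    posterior = VariableElimination(model).query(
        ["coordination"], evidence={"shares_resource": 1, "chat_plans_ahead": 1})
    print(posterior)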
Procedia PDF Downloads 132
18997 A Framework for Consumer Selection on Travel Destinations
Authors: J. Rhodes, V. Cheng, P. Lok
Abstract:
The aim of this study is to develop a parsimonious model that explains the effect of different stimuli on a tourist's intention to visit a new destination. The model consists of destination trust and interest as the mediating variables. The model was tested using two different types of stimuli; both studies empirically supported the proposed model. Furthermore, the first study revealed that advertising has a stronger effect than positive online reviews. The second study found that the peripheral route of the elaboration likelihood model has a stronger influence than the central route in this context.
Keywords: advertising, electronic word-of-mouth, elaboration likelihood model, intention to visit, trust
Procedia PDF Downloads 458
18996 A Combined AHP-GP Model for Selecting Knowledge Management Tool
Authors: Ahmad Sarfaraz, Raiyad Herwies
Abstract:
In this paper, a multi-criteria decision making analysis is used to help any organization select the best KM tool that fits and serves its needs. The AHP model is used, based on a previous study, to highlight and identify the main criteria and sub-criteria that are incorporated in the selection process. Different KM tool alternatives with different criteria are compared and weighted accurately to be incorporated in the GP model. The main goal is to combine the GP model with the AHP model to ensure that selecting the KM tool considers the resource constraints. Two important issues are discussed in this paper: how different factors could be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.
Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making
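As a minimal illustration of the AHP weighting step described above, the sketch below derives priority weights from a hypothetical 3x3 pairwise comparison matrix via the principal eigenvector and checks consistency; the criteria and judgment values are invented for illustration, and in the combined approach these weights would then enter the GP model as objective coefficients.

    import numpy as np

    # Hypothetical pairwise comparisons of three KM-tool selection criteria
    # (e.g. cost, ease of use, integration); values are illustrative only.
    A = np.array([[1.0,   3.0,   5.0],
                  [1/3.0, 1.0,   2.0],
                  [1/5.0, 1/2.0, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    weights = w / w.sum()                      # AHP priority weights (principal eigenvector)

    # Consistency ratio: CI = (lambda_max - n)/(n - 1); RI = 0.58 for n = 3 (Saaty)
    lambda_max = eigvals.real[k]
    CR = ((lambda_max - 3) / 2) / 0.58         # CR < 0.10 is conventionally acceptable
    print(weights.round(3), round(CR, 3))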
Procedia PDF Downloads 385
18995 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of how the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the National Geospatial-Intelligence Agency (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The set of candidate predictors considered are Magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
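A minimal sketch of the variable-selection idea described above, using scikit-learn's cross-validated LASSO on synthetic placeholder data (the study itself used the ~600 NGA recordings); predictors whose coefficients are shrunk exactly to zero drop out, and the magnitudes of the surviving coefficients give the importance ranking.

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    # Placeholder predictors standing in for magnitude, Rrup, Vs30, ...; y stands in
    # for the (log) 5%-damped maximum direction spectral acceleration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 4))
    y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.normal(size=600)

    X_std = StandardScaler().fit_transform(X)     # LASSO penalizes coefficients, so scale first
    lasso = LassoCV(cv=5).fit(X_std, y)

    # Rank predictors by the absolute size of their (possibly zeroed) coefficients.
    ranking = sorted(enumerate(lasso.coef_), key=lambda p: abs(p[1]), reverse=True)
    print(ranking)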
Procedia PDF Downloads 330
18994 Integrated Mass Rapid Transit (MRT) and Bus System in Singapore: MRT Ridership and the Provision of Feeder Bus Services
Authors: Devansh Jain, Shu Ting Goh
Abstract:
With the aim of improving the quality of life of the people of Singapore through the provision of better transport services, the Land Transport Authority of Singapore recently published its Master Plan 2013. The major objectives mentioned in the plan were to build a comprehensive public transport network with better-quality Mass Rapid Transit (MRT) and bus services, along with cycling and walking. MRT is the backbone of the transport system in Singapore, and to promote and increase MRT ridership, good accessibility to MRT stations is a necessity. The aim of this paper is to investigate the relationship between MRT ridership and the provision of feeder bus services in Singapore planning areas, and also to understand the hub-and-spoke model adopted by Singapore for the provision of transport services. Conclusions are drawn from a regression model relating MRT ridership to the various factors affecting it, and the findings will help enhance the services provided by the system.
Keywords: quality of life, public transport, mass rapid transit, ridership
Procedia PDF Downloads 247
18993 Proposing a Boundary Coverage Algorithm for Underwater Sensor Network
Authors: Seyed Mohsen Jameii
Abstract:
Wireless underwater sensor networks are a type of sensor network located in underwater environments and linked together by acoustic waves. Applications of these networks include monitoring of pollutants (chemical, biological, and nuclear), oil field detection, prediction of the likelihood of a tsunami in coastal areas, the use of wireless sensor nodes to monitor passing submarines, and determination of appropriate locations for anchoring ships. This paper proposes a boundary coverage algorithm for intrusion detection in underwater sensor networks. In the first phase of the proposed algorithm, optimal deployment of nodes is done in the water. In the second phase, after the deployment of nodes at the proper depth, clustering is executed to reduce the exchange of messages between the sensors. In the third phase, a "divide and conquer" algorithm is used to save energy and increase network efficiency. The simulation results demonstrate the efficiency of the proposed algorithm.
Keywords: boundary coverage, clustering, divide and conquer, underwater sensor nodes
Procedia PDF Downloads 341
18992 Survival Analysis after a First Ischaemic Stroke Event: A Case-Control Study in the Adult Population of England
Authors: Padma Chutoo, Elena Kulinskaya, Ilyas Bakbergenuly, Nicholas Steel, Dmitri Pchejetski
Abstract:
Stroke is associated with a significant risk of morbidity and mortality. There is a scarcity of research on long-term survival after first-ever ischaemic stroke (IS) events in England with regard to the effects of different medical therapies and comorbidities. The objective of this study was to model all-cause mortality after an IS diagnosis in the adult population of England. Using a retrospective case-control design, we extracted the electronic medical records of patients born prior to or in the year 1960 in England with a first-ever ischaemic stroke diagnosis from January 1986 to January 2017 within The Health Improvement Network (THIN) database. Participants with a history of ischaemic stroke were matched to 3 controls by sex, age at diagnosis and general practice. The primary outcome was all-cause mortality. The hazards of all-cause mortality were estimated using a Weibull-Cox survival model which included both scale and shape effects and a shared random effect of general practice. The model included sex, birth cohort, socio-economic status, comorbidities and medical therapies. 20,250 patients with a history of IS (cases) and 55,519 controls were followed up to 30 years. From 2008 to 2015, the one-year all-cause mortality for IS patients declined, with an absolute change of -0.5%. Preventive treatments for cases increased considerably over time. These included prescriptions of statins and antihypertensives. However, prescriptions for antiplatelet drugs have decreased in routine general practice since 2010. The survival model revealed a survival benefit of antiplatelet treatment for stroke survivors, with a hazard ratio (HR) of 0.92 (0.90-0.94). IS diagnosis had significant interactions with gender, age at entry and hypertension diagnosis. IS diagnosis was associated with a high risk of all-cause mortality, with HR = 3.39 (3.05-3.72) for cases compared to controls. Hypertension was associated with poor survival, with HR = 4.79 (4.49-5.09) for hypertensive cases relative to non-hypertensive controls, though the detrimental effect of hypertension did not reach significance for hypertensive controls, HR = 1.19 (0.82-1.56). This study of English primary care data showed that between 2008 and 2015, the rates of prescriptions of stroke preventive treatments increased and short-term all-cause mortality after IS declined. However, stroke resulted in poor long-term survival. Hypertension, a modifiable risk factor, was found to be associated with poor survival outcomes in IS patients. Antiplatelet drugs were found to be protective of survival. Better efforts are required to reduce the burden of stroke through health service development and primary prevention.
Keywords: general practice, hazard ratio, health improvement network (THIN), ischaemic stroke, multiple imputation, Weibull-Cox model
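For readers unfamiliar with the modelling step, the sketch below fits a standard Cox proportional hazards model to synthetic data with the lifelines library; it is a simplified stand-in for the paper's Weibull-Cox model with scale and shape effects and a shared general-practice random effect, and all data and effect sizes are invented.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 2000
    stroke = rng.integers(0, 2, n)
    hypertension = rng.integers(0, 2, n)
    antiplatelet = rng.integers(0, 2, n)

    # Synthetic survival times: higher hazard with stroke and hypertension, slightly lower with antiplatelets.
    hazard = 0.05 * np.exp(1.2 * stroke + 1.5 * hypertension - 0.08 * antiplatelet)
    time = rng.exponential(1.0 / hazard)
    event = (time < 30).astype(int)               # administrative censoring at 30 years of follow-up
    time = np.minimum(time, 30)

    df = pd.DataFrame({"time": time, "event": event, "stroke": stroke,
                       "hypertension": hypertension, "antiplatelet": antiplatelet})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(cph.hazard_ratios_)                     # HR > 1 indicates higher all-cause mortality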
Procedia PDF Downloads 186
18991 Research and Implementation of Cross-domain Data Sharing System in Net-centric Environment
Authors: Xiaoqing Wang, Jianjian Zong, Li Li, Yanxing Zheng, Jinrong Tong, Mao Zhan
Abstract:
With the rapid development of network and communication technology, a great deal of data has been generated in different domains of a network. These data show a trend of increasing scale and more complex structure. Therefore, an effective and flexible cross-domain data-sharing system is needed. The Cross-domain Data Sharing System (CDSS) in a net-centric environment is composed of three sub-systems. The data distribution sub-system provides data exchange services through publish-subscribe technology that supports asynchronism and multi-to-multi communication, which adapts to the needs of the dynamic and large-scale distributed computing environment. The access control sub-system adopts Attribute-Based Access Control (ABAC) technology to uniformly model various data attributes such as subject, object, permission and environment, which effectively monitors the activities of users accessing resources and ensures that legitimate users get effective access control rights within a legal time. The cross-domain access security negotiation sub-system automatically determines the access rights between different security domains in the process of interactive disclosure of digital certificates and access control policies, through trust policy management and negotiation algorithms, which provides an effective means for cross-domain trust relationship establishment and access control in a distributed environment. The CDSS's asynchronous, multi-to-multi and loosely-coupled communication features can adapt well to data exchange and sharing in dynamic, distributed and large-scale network environments. Next, we will give the CDSS new features to support the mobile computing environment.
Keywords: data sharing, cross-domain, data exchange, publish-subscribe
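A minimal, library-free Python sketch of the ABAC idea used by the access control sub-system: a policy lists the subject and environment attributes required for a permission, and a request is granted only if every attribute matches. The policy contents and attribute names are illustrative assumptions, not the CDSS's actual schema.

    # Illustrative ABAC policy: required attributes per action.
    policy = {
        "read:sensor-data": {
            "subject": {"clearance": "secret", "domain": "domain-A"},
            "environment": {"within_validity_window": True},
        }
    }

    def is_permitted(action, subject_attrs, env_attrs):
        rule = policy.get(action)
        if rule is None:
            return False                              # no rule -> deny by default
        subject_ok = all(subject_attrs.get(k) == v for k, v in rule["subject"].items())
        env_ok = all(env_attrs.get(k) == v for k, v in rule["environment"].items())
        return subject_ok and env_ok

    print(is_permitted("read:sensor-data",
                       {"clearance": "secret", "domain": "domain-A"},
                       {"within_validity_window": True}))   # True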
Procedia PDF Downloads 124
18990 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India
Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah
Abstract:
A Digital Elevation Model (DEM) is elevation data on a virtual grid representing the ground surface. DEMs can be used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis and surveying. The stream network plays a very important role in morphometric analysis. DEMs lack accuracy and cannot match field data as they should for accurate results of morphometric analysis. The present study focuses on comparing the Agree method and the conventional shortest path method for deriving morphometric parameters in the flat region of the Lower Tapi Basin, which is located in western India. For the present study, open-source SRTM data (Shuttle Radar Topography Mission, 1 arc-second resolution) and toposheets issued by the Survey of India (SOI) were used to determine the morphometric linear aspects such as stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow and constant of channel maintenance; areal aspects such as drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio and shape factor; and relief aspects such as relief ratio, gradient ratio and basin relief for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets. The Agree DEM was created by using the SRTM data and the stream network from the toposheets. The results obtained were used to demonstrate a comparison between the two methods in the flat areas.
Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method
Procedia PDF Downloads 239
18989 AgriFood Model in Ankara Regional Innovation Strategy
Authors: Coskun Serefoglu
Abstract:
The study aims to analyse how a traditional sector such as agri-food could be mobilized through regional innovation strategies. A principal component analysis, as well as qualitative information such as in-depth interviews, focus groups and surveys, was employed to find the priority sectors. An agri-food model was developed which includes both a linear model and an interactive model. The model consists of two main components, one of which is technological integration and the other agricultural extension, which is based on the U.S. land-grant university approach, a practice not common in Turkey.
Keywords: regional innovation strategy, interactive model, agri-food sector, local development, planning, regional development
Procedia PDF Downloads 149
18988 Prediction of Embankment Fires at Railway Infrastructure Using Machine Learning, Geospatial Data and VIIRS Remote Sensing Imagery
Authors: Jan-Peter Mund, Christian Kind
Abstract:
In view of the ongoing climate change and global warming, fires along railways in Germany are occurring more frequently, sometimes with massive consequences for railway operations and the affected railroad infrastructure. In the absence of systematic studies within the infrastructure network of German Rail, little is known about the causes of such embankment fires. Since a further increase in these hazards is to be expected in the near future, there is a need for sound knowledge of the triggers and drivers of embankment fires as well as methodical knowledge of prediction tools. Two predictable future trends speak for the increasing relevance of the topic: through the intensification of the use of rail for passenger and freight transport (e.g., a doubling of annual passenger numbers by 2030 compared to 2019), there will be more rail traffic and also more maintenance and construction work on the railways. This research project uses satellite data to identify historical embankment fires along the rail network infrastructure. The team links data from these fires with infrastructure and weather data and trains a machine-learning model with the aim of predicting fire hazards on sections of the track. Companies reflect on the results and use them on a pilot basis in precautionary measures.
Keywords: embankment fires, railway maintenance, machine learning, remote sensing, VIIRS data
Procedia PDF Downloads 89
18987 Stability Analysis of SEIR Epidemic Model with Treatment Function
Authors: Sasiporn Rattanasupha, Settapat Chinviriyasit
Abstract:
The treatment function adopts a continuous and differentiable form which can describe the effect of delayed treatment when the number of infected individuals increases and medical resources are limited. In this paper, the SEIR epidemic model with a treatment function is studied to investigate the dynamics of the model due to the effect of treatment. It is assumed that the treatment rate is proportional to the number of infective patients. The stability of the model is analyzed. The model is simulated to illustrate the analytical results and to investigate the effects of treatment on the spread of infection.
Keywords: basic reproduction number, local stability, SEIR epidemic model, treatment function
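For illustration, the sketch below integrates an SEIR system with a saturated treatment term T(I) = rI/(1 + aI), one commonly used continuous and differentiable form in which treatment is proportional to I for small I and saturates as medical resources become limited; the functional form and all parameter values are assumptions, not necessarily those analysed in the paper.

    import numpy as np
    from scipy.integrate import odeint

    # Assumed parameters (illustrative only).
    beta, sigma, gamma, mu, r, a = 0.5, 0.2, 0.1, 0.01, 0.3, 0.05

    def seir(y, t):
        S, E, I, R = y
        treat = r * I / (1.0 + a * I)                # saturated treatment function T(I)
        dS = mu - beta * S * I - mu * S
        dE = beta * S * I - (sigma + mu) * E
        dI = sigma * E - (gamma + mu) * I - treat
        dR = gamma * I + treat - mu * R
        return [dS, dE, dI, dR]

    t = np.linspace(0, 400, 4001)
    S, E, I, R = odeint(seir, [0.9, 0.05, 0.05, 0.0], t).T   # fractions of the population
    print(I.max())                                            # peak infective fraction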
Procedia PDF Downloads 521
18986 Optimizing Pediatric Pneumonia Diagnosis with Lightweight MobileNetV2 and VAE-GAN Techniques in Chest X-Ray Analysis
Authors: Shriya Shukla, Lachin Fernando
Abstract:
Pneumonia, a leading cause of mortality in young children globally, presents significant diagnostic challenges, particularly in resource-limited settings. This study presents an approach to diagnosing pediatric pneumonia using Chest X-Ray (CXR) images, employing a lightweight MobileNetV2 model enhanced with synthetic data augmentation. Addressing the challenge of dataset scarcity and imbalance, the study used a Variational Autoencoder-Generative Adversarial Network (VAE-GAN) to generate synthetic CXR images, improving the representation of normal cases in the pediatric dataset. This approach not only addresses the issues of data imbalance and scarcity prevalent in medical imaging but also provides a more accessible and reliable diagnostic tool for early pneumonia detection. The augmented data improved the model's accuracy and generalization, achieving an overall accuracy of 95% in pneumonia detection. These findings highlight the efficacy of the MobileNetV2 model, offering a computationally efficient yet robust solution well-suited for resource-constrained environments such as mobile health applications. This study demonstrates the potential of synthetic data augmentation in enhancing medical image analysis for critical conditions like pediatric pneumonia.
Keywords: pneumonia, MobileNetV2, image classification, GAN, VAE, deep learning
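A hedged sketch of the transfer-learning setup described above, using the Keras MobileNetV2 backbone with a small binary (normal vs. pneumonia) head; the input size, layer choices and hyperparameters are assumptions, and the VAE-GAN-augmented dataset would simply be supplied as the training set.

    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                             input_shape=(224, 224, 3))
    base.trainable = False                          # freeze the lightweight ImageNet backbone

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # normal vs. pneumonia
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # train_ds would mix real and VAE-GAN CXR images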
Procedia PDF Downloads 125
18985 Economic Analysis of Domestic Combined Heat and Power System in the UK
Authors: Thamo Sutharssan, Diogo Montalvao, Wen-Chung Wang, Yong Chen, Claudia Pisac
Abstract:
A combined heat and power (CHP) system is an efficient and clean way to generate power (electricity). Heat produced by the CHP system can be used for water and space heating. A CHP system which uses hydrogen as fuel produces zero carbon emissions. Its efficiency can reach more than 80%, whereas that of a traditional power station can only reach up to 50% because much of the thermal energy is wasted. Other advantages of CHP systems are that they can decentralize energy generation, improve energy security and sustainability, and significantly reduce the energy cost to the users. This paper presents the economic benefits of using a CHP system in the domestic environment. For this analysis, natural gas is considered as the potential fuel, as hydrogen fuel cell based CHP systems are rarely used. UK government incentives for CHP systems are also considered as an added benefit. Results show that CHP requires a significant initial investment; in return, it can reduce the annual energy bill significantly. Results show that the investment may be paid back in 7 years. After the payback period, the CHP system can run for about 3 more years, as most CHP manufacturers provide a 10-year warranty.
Keywords: combined heat and power, clean energy, hydrogen fuel cell, economic analysis of CHP, zero emission
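The payback reasoning can be made concrete with a few lines of arithmetic; all monetary figures below are assumptions chosen only to reproduce a roughly 7-year simple payback, not the paper's data.

    capital_cost = 5000.0        # assumed installed cost of a domestic gas CHP unit (GBP)
    annual_saving = 600.0        # assumed reduction in the annual energy bill (GBP/year)
    annual_incentive = 120.0     # assumed UK government CHP incentive (GBP/year)

    simple_payback_years = capital_cost / (annual_saving + annual_incentive)
    print(round(simple_payback_years, 1))   # ~6.9 years, in line with the ~7-year figure quoted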
Procedia PDF Downloads 385
18984 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Opinion dynamics models use an agent-based modeling approach to model people's opinions. A model's properties are usually explored by testing the two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change the model's output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., supporting a candidate) could be turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model's dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted one into the other, in the same way as we convert meters to feet. Thus, in our work, we analyze how this scale transformation may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model's dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion, even using the same dataset, just by slightly changing the way data are pre-processed. Indeed, we quantify that this effect may alter the model's output by 100%. By using two models from the standard literature, we show that a scale transformation can transform one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of using real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered as a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact both on theoretical models and on their application to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
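The effect of a scale transformation can be sketched with a Deffuant-style bounded-confidence model: the same interaction rule is run on raw opinions and on a monotonically rescaled copy, and the resulting number of opinion clusters generally differs. The agent count, confidence bound and cubic rescaling are illustrative assumptions, not the models analysed in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    N, eps, mu, steps = 200, 0.2, 0.5, 50_000     # agents, confidence bound, convergence rate

    def simulate(x):
        x = x.copy()
        for _ in range(steps):
            i, j = rng.integers(N, size=2)
            if abs(x[i] - x[j]) < eps:            # interact only if opinions are close enough
                x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
        return x

    raw = rng.uniform(0, 1, N)
    rescaled = raw ** 3                           # a monotone (order-preserving) change of scale

    clusters = lambda x: len(np.unique(np.round(x, 1)))
    print(clusters(simulate(raw)), clusters(simulate(rescaled)))   # cluster counts generally differ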
Procedia PDF Downloads 155
18983 Technical and Economic Evaluation of Harmonic Mitigation from Offshore Wind Power Plants by Transmission Owners
Authors: A. Prajapati, K. L. Koo, F. Ghassemi, M. Mulimakwenda
Abstract:
In the UK, as the volume of non-linear loads connected to the transmission grid continues to rise steeply, harmonic distortion levels on the transmission network are becoming a serious concern for network owners and system operators. This paper outlines the findings of a study conducted to verify the proposal that harmonic mitigation could be optimized and managed economically and effectively at the transmission network level by the Transmission Owner (TO) instead of the individual polluters connected to the grid. Harmonic mitigation studies were conducted on selected regions of the transmission network in England for recently connected offshore wind power plants to strategize and optimize selected harmonic filter options. The results (filter volume and capacity) were then compared against the mitigation measures adopted by the individual connections. Estimation ratios were developed based on the actual installed and optimal proposed filters. These estimation ratios were then used to derive harmonic filter requirements for future contracted connections. The study concluded that a saving of 37% in filter volume/capacity could be achieved if the TO were to centrally manage the harmonic mitigation instead of individual polluters installing their own mitigation solutions.
Keywords: C-type filter, harmonics, optimization, offshore wind farms, interconnectors, HVDC, renewable energy, transmission owner
Procedia PDF Downloads 157
18982 A Critical Discourse Analysis of Protesters in the Debates of Al Jazeera Channel of the Yemeni Revolution
Authors: Raya Sulaiman
Abstract:
Critical discourse analysis investigates how discourse is used to abuse power relationships. Political debates constitute discourses which mirror aspects of ideologies. The Arab world has been one of the most unsettled zones in the world and has dominated global politics due to the Arab revolutions which started in 2010. This study aimed at uncovering the ideological intentions in the formulation and circulation of hegemonic political ideology in the TV political debates of the 2011 to 2012 Yemeni revolution, and how ideology was used as a tool of hegemony. The study specifically examined the ideologies associated with the representation of protesters as a social actor. Data for the study consisted of four debates (17,350 words) from four live debate programs, The Opposite Direction, In Depth, Behind the News and The Revolution Talk, which were staged on the Al Jazeera TV channel between 2011 and 2012. The data were readily transcribed by Al Jazeera online. Al Jazeera was selected for the study because it is the most popular TV network in the Arab world and has a strong presence, especially during the Arab revolutions. Al Jazeera has also been accused of inciting protests across the Arab region. Two debate sides were identified in the data: government and anti-government. The government side represented President Ali Abdullah Saleh and his regime, while the anti-government side represented the gathering squares, who demanded that the president 'step down'. The study analysed verbal discourse aspects of the debates using critical discourse analysis, drawing on the Social Actor Network model of van Leeuwen. This framework provides a step-by-step analysis model and analyses discourse from specific grammatical processes up to broader semantic issues. It also provides representative findings since it considers discourse as representative of, and reconstructed in, social practice. Study findings indicated that Al Jazeera and the anti-government side had similarities in terms of the ideological intentions related to the protesters: Al Jazeera victimized and incited the protesters, similarly to the anti-government side. Al Jazeera used assimilation, nominalization, and active role allocation as the linguistic means to reach its ideological intentions related to the protesters. Government speakers did not share the same ideological intentions with Al Jazeera. Study findings also indicated that Al Jazeera had excluded the government from its debates, causing a violation of its slogan, 'the opinion and the other opinion'. This study implies the powerful role of discourse in shaping ideological media intentions and influencing the media audience.
Keywords: Al Jazeera network, critical discourse analysis, ideology, Yemeni revolution
Procedia PDF Downloads 224
18981 A Different Approach to Smart Phone-Based Wheat Disease Detection System Using Deep Learning for Ethiopia
Authors: Nathenal Thomas Lambamo
Abstract:
More than 85% of the labor force and 90% of export earnings in Ethiopia come from agriculture, so it can be said that agriculture is the backbone of the overall socio-economic activities in the country. Among the cereal crops that the agriculture sector provides for the country, wheat is the third-ranking one, after teff and maize. At present, wheat is in higher demand owing to the expansion of industries that use it as the main ingredient for their products. The local supply of wheat for these companies covers only 35 to 40%, and the remaining 60 to 65% is imported, which exhausts the country's foreign currency reserves. The above facts show that the need for this crop in the country is very high while, conversely, its productivity is very low. Wheat disease is the most devastating factor contributing to this imbalance between the demand and supply of the crop. It reduces both the yield and quality of the crop by 27% on average and by up to 37% when it is severe. This study aims to detect the most frequent and damaging wheat diseases, Septoria and leaf rust, using deep learning, the most widely used subset of machine learning technology. As the state of the art, a deep learning classification technique called the Convolutional Neural Network (CNN) has been used to detect the diseases, and an accuracy of 99.01% is achieved.
Keywords: septoria, leaf rust, deep learning, CNN
Procedia PDF Downloads 76
18980 The Effect of a Multidisciplinary Spine Clinic on Treatment Rates and Lead Times to Care
Authors: Ishan Naidu, Jessica Ryvlin, Devin Videlefsky
Abstract:
Introduction: Back pain is a leading cause of years lived with disability and of economic burden, exceeding $20 billion in healthcare costs, not including indirect costs such as absence from work and caregiving. The multifactorial nature of back pain leads to treatment modalities administered by a variety of specialists, which are often disjointed. Multiple studies have found that patients receiving delayed physical therapy for lower back pain had higher medical-related costs from increased health service utilization as well as reduced improvement in pain severity compared to early management. Uncoordinated health care delivery can exacerbate the physical and economic toll of the chronic condition; thus, improvements in interdisciplinary, shared decision-making may improve outcomes. Objective: To assess whether a multidisciplinary spine clinic (MSC), consisting of orthopedic surgery, neurosurgery, pain medicine, and physiatry, alters interventional and non-interventional planning and treatment compared to a traditional unidisciplinary spine clinic (USC) including only orthopedic surgery. Methods: We conducted a retrospective cohort study with patients initially presenting for spine care to orthopedic surgeons between July 1, 2018 and June 30, 2019. Time to treatment recommendation, time to treatment and rates of treatment recommendations were assessed, including physical therapy, injections and surgery. Treatment rates were compared between MSC and USC using Pearson's chi-square test and logistic regression. Time to treatment recommendation and time to treatment were compared using the log-rank test and Cox proportional hazards regression. All analyses were repeated for the propensity score (PS) matched subsample. Results: This study included 1,764 patients, with 692 at the MSC and 1,072 at the USC. Patients in the MSC were more likely to be recommended injection when compared to the USC (8.5% vs. 5.4%, p=0.01). When adjusted for confounders, the likelihood of injection recommendation remained greater in the MSC than the USC (odds ratio [OR]=2.22, 95% CI: (1.39, 3.53), p=0.001). The MSC was also associated with a shorter time to receiving an injection recommendation versus the USC (median: 21 vs. 32 days, log-rank: p<0.001; hazard ratio [HR]=1.90, 95% CI: (1.25, 2.90), p=0.003). The MSC was associated with a higher likelihood of injection treatment (OR=2.27, 95% CI: (1.39, 3.73), p=0.001) and a shorter lead time (HR=1.98, 95% CI: (1.27, 3.09), p=0.003). PS-matched analyses yielded similar conclusions. Conclusions: Care delivered at a multidisciplinary spine clinic was associated with a higher likelihood of recommending injection and a shorter lead time to injection administration when compared to a traditional unidisciplinary spine surgery clinic. Multidisciplinary clinics may facilitate coordinated care among different specialties, resulting in increased utilization of less invasive treatment modalities while also improving care efficiency. The multidisciplinary clinic model is an important advancement in care delivery and communication, which can be used as a powerful method of improving patient outcomes as treatment guidelines evolve.
Keywords: coordinated care, epidural steroid injection, multi-disciplinary, non-invasive
Procedia PDF Downloads 140
18979 A Vision-Based Early Warning System to Prevent Elephant-Train Collisions
Authors: Shanaka Gunasekara, Maleen Jayasuriya, Nalin Harischandra, Lilantha Samaranayake, Gamini Dissanayake
Abstract:
One serious facet of the worsening human-elephant conflict (HEC) in nations such as Sri Lanka involves elephant-train collisions. Endangered Asian elephants are maimed or killed during such accidents, which also often result in orphaned or disabled elephants, contributing to the phenomenon of lone elephants. These lone elephants are found to be more likely to attack villages and display aggressive behaviour, which further exacerbates the overall HEC. Furthermore, railway services incur significant financial losses and disruptions to services annually due to such accidents. Most elephant-train collisions occur due to a lack of adequate reaction time. This is due to the significant stopping distance requirements of trains, as the full braking force needs to be avoided to minimise the risk of derailment. Thus, poor driver visibility at sharp turns, nighttime operation, and poor weather conditions are often contributing factors to this problem. Initial investigations also indicate that most collisions occur in localised "hotspots" where elephant pathways/corridors intersect with railway tracks that border grazing land and watering holes. Taking these factors into consideration, this work proposes leveraging recent developments in Convolutional Neural Network (CNN) technology to detect elephants using an RGB/infrared capable camera around known hotspots along the railway track. The CNN was trained using a curated dataset of elephants collected on field visits to elephant sanctuaries and wildlife parks in Sri Lanka. With this vision-based detection system at its core, a prototype unit of an early warning system was designed and tested. This weatherised and waterproofed unit consists of a Reolink security camera, which provides a wide field of view and range, an Nvidia Jetson Xavier computing unit, a rechargeable battery, and a solar panel for self-sufficient functioning. The prototype unit was designed to be a low-cost, low-power and small-footprint device that can be mounted on infrastructure such as poles or trees. If an elephant is detected, an early warning message is communicated to the train driver using the GSM network. A mobile app for this purpose was also designed to ensure that the warning is clearly communicated. A centralized control station manages and communicates all information through the train station network to ensure coordination among important stakeholders. Initial results indicate that detection accuracy is sufficient under varying lighting situations, provided comprehensive training datasets that represent a wide range of challenging conditions are available. The overall hardware prototype was shown to be robust and reliable. We envision that a network of such units may help reduce the problem of elephant-train collisions and has the potential to act as an important surveillance mechanism in dealing with the broader issue of human-elephant conflicts.
Keywords: computer vision, deep learning, human-elephant conflict, wildlife early warning technology
Procedia PDF Downloads 226
18978 Effect of Filler Size and Shape on Positive Temperature Coefficient Effect
Authors: Eric Asare, Jamie Evans, Mark Newton, Emiliano Bilotti
Abstract:
Two types of filler shapes (spheres and flakes) and three different sizes are employed to study the size effect on the PTC. The composite is prepared using a mini-extruder with high-density polyethylene (HDPE) as the matrix. Computer modelling is used to fit the experimental results. The percolation threshold decreases with decreasing filler size, and this was observed for both the spherical particles and the flakes. This was caused by the decrease in interparticle distance with decreasing filler size. The 100 µm particles showed a larger PTC intensity compared to the 5 µm particles for both the metal-coated glass spheres and flakes. The small particles have a large surface area and tend to agglomerate, which makes it difficult for the conductive network to be disturbed. Increasing the filler content decreased the PTC intensity; this is due to an increase in the conductive network within the polymer matrix, hence more energy is needed to disrupt the network.
Keywords: positive temperature coefficient (PTC) effect, conductive polymer composite (CPC), electrical conductivity
Procedia PDF Downloads 427
18977 An Assessment into the Drift in Direction of International Migration of Labor: Changing Aspirations for Religiosity and Cultural Assimilation
Authors: Syed Toqueer Akhter, Rabia Zulfiqar
Abstract:
This paper attempts to trace the determining factor, as far as individual preferences and expectations are concerned, of what causes the direction of international migration to drift in certain ways owing to factors such as religiosity and cultural assimilation. The narrative on migration has graduated from the age-old 'push/pull' debate to one of complex factors that may vary across individuals. We explore the longstanding factor of religiosity, widely acknowledged in the literature as a key variable in the assessment of migration, wherein the impact of religiosity in the form of a drift in the intent of migration has been analyzed. A more conventional factor, cultural assimilation, is used in a contemporary way to estimate how it plays a role in affecting the drift in direction. In particular, what our research aims to achieve is to isolate the effect our key variables, Cultural Assimilation and Religiosity, have on the direction of migration, to explore how they interplay as a composite unit, and to justify the change in behavior displayed by these key variables. In order to establish a true sense of what drives individual choices, we employ the method of survey research and use a questionnaire to conduct primary research. The questionnaire was divided into six sections covering subjects including household characteristics and the perceptions and inclinations of the respondents relevant to our study. Religiosity was quantified using a Migration Network proxy that utilized secondary data to estimate religious hubs in recipient countries. To estimate the relationship between the intent of migration and its variants, three competing econometric models were employed: the Ordered Probit Model, the Ordered Logit Model and the Tobit Model. For every model that included our key variables, a highly significant relationship with the intent of migration was estimated.
Keywords: international migration, drift in direction, cultural assimilation, religiosity, ordered probit model
Procedia PDF Downloads 307
18976 Efficient Residual Road Condition Segmentation Network Based on Reconstructed Images
Authors: Xiang Shijie, Zhou Dong, Tian Dan
Abstract:
This paper focuses on the application of real-time semantic segmentation technology to complex road condition recognition, aiming to address the critical issue of how to improve segmentation accuracy while ensuring real-time performance. Semantic segmentation technology has broad application prospects in fields such as autonomous vehicle navigation and remote sensing image recognition. However, current real-time semantic segmentation networks face significant technical challenges and optimization gaps in balancing speed and accuracy. To tackle this problem, this paper conducts an in-depth study and proposes an innovative Guided Image Reconstruction Module. By resampling high-resolution images into a set of low-resolution images, this module effectively reduces computational complexity, allowing the network to extract features more efficiently within limited resources, thereby improving the performance of real-time segmentation tasks. In addition, a dual-branch network structure is designed in this paper to fully leverage the advantages of different feature layers. A novel Hybrid Attention Mechanism is also introduced, which can dynamically capture multi-scale contextual information and effectively enhance the focus on important features, thus improving the segmentation accuracy of the network in complex road conditions. Compared with traditional methods, the proposed model achieves a better balance between accuracy and real-time performance and demonstrates competitive results in road condition segmentation tasks, showcasing its superiority. Experimental results show that this method not only significantly improves segmentation accuracy while maintaining real-time performance but also remains stable across diverse and complex road conditions, making it highly applicable in practical scenarios. By incorporating the Guided Image Reconstruction Module, the dual-branch structure, and the Hybrid Attention Mechanism, this paper presents a novel approach to real-time semantic segmentation tasks, which is expected to further advance the development of this field.
Keywords: hybrid attention mechanism, image reconstruction, real-time, road status recognition
Procedia PDF Downloads 23
18975 Design of Lead-Lag Based Internal Model Controller for Binary Distillation Column
Authors: Rakesh Kumar Mishra, Tarun Kumar Dan
Abstract:
A lead-lag based Internal Model Control method is proposed based on the Internal Model Control (IMC) strategy. In this paper, we have designed a lead-lag based Internal Model Controller for a binary distillation column treated as a SISO process (considering only the bottom product). The transfer function has been taken from the Wood and Berry model. We examine composition control and disturbance rejection using the lead-lag based IMC and compare it with the response of a simple Internal Model Controller.
Keywords: SISO, lead-lag, internal model control, wood and berry, distillation column
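A hedged sketch of the design route, using the python-control package and the Wood and Berry bottoms-composition transfer function G22(s) = -19.4 e^(-3s)/(14.4s + 1): the minimum-phase part is inverted and combined with a first-order filter, giving the lead-lag IMC block Q(s) = (14.4s + 1)/(-19.4(lambda*s + 1)). The filter constant, the first-order Pade approximation of the dead time, and the closed-loop check are illustrative choices, not the paper's exact design.

    import control as ct

    # Wood & Berry bottoms composition model: G22(s) = -19.4 exp(-3s) / (14.4 s + 1)
    num_d, den_d = ct.pade(3, 1)                       # 1st-order Pade approximation of the 3 s delay
    Gp = ct.tf([-19.4], [14.4, 1]) * ct.tf(num_d, den_d)

    lam = 4.0                                          # assumed IMC filter time constant (tuning knob)
    Q = ct.tf([14.4, 1], [-19.4 * lam, -19.4])         # lead-lag IMC controller: inverse of the
                                                       # minimum-phase part times 1/(lam*s + 1)

    C = Q / (1 - Q * Gp)                               # classical-feedback equivalent controller
    T_cl = ct.feedback(C * Gp, 1)                      # closed-loop servo transfer function
    t, y = ct.step_response(T_cl)                      # set-point step response for composition control
    print(float(y[-1]))                                # should settle near 1.0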
Procedia PDF Downloads 646
18974 Neural Networks Based Prediction of Long Term Rainfall: Nine Pilot Study Zones over the Mediterranean Basin
Authors: Racha El Kadiri, Mohamed Sultan, Henrique Momm, Zachary Blair, Rachel Schultz, Tamer Al-Bayoumi
Abstract:
The Mediterranean Basin is a very diverse region of nationalities and climate zones, with a strong dependence on agricultural activities. Predicting long-term rainfall (with a lead of 1 to 12 months) and future droughts could contribute to the sustainable management of water resources and economic activities. In this study, an integrated approach was adopted to construct predictive tools with lead times of 0 to 12 months to forecast rainfall amounts over nine subzones of the Mediterranean Basin region. The following steps were conducted: (1) acquire, assess and intercorrelate temporal remote sensing-based rainfall products (e.g., the CPC Merged Analysis of Precipitation [CMAP]) throughout the investigation period (1979 to 2016); (2) acquire and assess monthly values for all of the climatic indices influencing the regional and global climatic patterns (e.g., Northern Atlantic Oscillation [NOI], Southern Oscillation Index [SOI], and Tropical North Atlantic Index [TNA]); (3) delineate homogenous climatic regions and select nine pilot study zones; (4) apply data mining methods (e.g., neural networks, principal component analyses) to extract relationships between the observed rainfall and the controlling factors (i.e., climatic indices with multiple lead-time periods); and (5) use the constructed predictive tools to forecast monthly rainfall and dry and wet periods. Preliminary results indicate that rainfall and dry/wet periods were successfully predicted with lead times of 0 to 12 months using the adopted methodology, and that the approach is more accurately applicable in the southern Mediterranean region.
Keywords: rainfall, neural networks, climatic indices, Mediterranean
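As a minimal sketch of step (4), the code below trains a small multi-layer-perceptron regressor on lagged climatic-index values (1 to 12 month leads) to predict monthly rainfall; the index series and rainfall targets are synthetic placeholders standing in for the CMAP rainfall and SOI/TNA-type indices used in the study.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_months, n_indices, max_lead = 444, 3, 12              # roughly 1979-2016, three indices
    indices = rng.normal(size=(n_months, n_indices))         # placeholder monthly climatic indices
    rainfall = rng.gamma(2.0, 30.0, size=n_months)           # placeholder monthly rainfall (mm)

    X, y = [], []
    for t in range(max_lead, n_months):
        X.append(indices[t - max_lead:t].ravel())            # index values at 1..12 month leads
        y.append(rainfall[t])
    X, y = np.array(X), np.array(y)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
    mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)
    print(mlp.score(X_te, y_te))                              # R^2 on the held-out months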
Procedia PDF Downloads 312
18973 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer's, Parkinson's, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has recently been proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping the magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a probability distribution function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was then used to train data-driven deep learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of great value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
Procedia PDF Downloads 136
18972 Theoretical Approach to Kinetics of Transient Plasticity of Metals under Irradiation
Authors: Pavlo Selyshchev, Tetiana Didenko
Abstract:
Within the framework of obstacle radiation hardening and the dislocation climb-glide model, a theoretical approach is developed to describe the peculiarities of the transient plasticity of metals under irradiation. The nonlinear dynamics of the accumulation of point defects (vacancies and interstitial atoms) are considered. We consider a metal under such stress and irradiation conditions that creep is determined by dislocation motion: dislocations climb over obstacles and glide between obstacles. It is shown that the rivalry between the vacancy and interstitial fluxes to dislocations leads to fractures in the time dependence of plasticity. Simulation and analysis of this phenomenon are performed. Qualitatively different regimes of transient plasticity under irradiation are found. The fracture time is obtained. The theoretical results are compared with the experimental ones.
Keywords: climb and glide of dislocations, fractures of transient plasticity, irradiation, non-linear feed-back, point defects
Procedia PDF Downloads 202
18971 A Framework for Security Risk Level Measures Using CVSS for Vulnerability Categories
Authors: Umesh Kumar Singh, Chanchala Joshi
Abstract:
With increasing dependency on IT infrastructure, the main objective of a system administrator is to maintain a stable and secure network, ensuring that the network is robust enough against malicious network users such as attackers and intruders. Security risk management provides a way to manage the growing threats to infrastructures or systems. This paper proposes a framework for risk level estimation which uses the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD) and the Common Vulnerability Scoring System (CVSS). The proposed framework measures the frequency of vulnerability exploitation, combines this measured frequency with the standard CVSS score, and estimates the security risk level, which helps in automated and reasonable security management. In this paper, an equation for the temporal score calculation with respect to the availability of a remediation plan is derived and, further, the frequency of exploitation is calculated with the determined temporal score. The frequency of exploitation along with the CVSS score is used to calculate the security risk level of the system. The proposed framework uses the CVSS vectors for risk level estimation and measures the security level of a specific network environment, which assists the system administrator in the assessment of security risks and in making decisions related to the mitigation of security risks.
Keywords: CVSS score, risk level, security measurement, vulnerability category
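The sketch below shows one way such a combination could look in code: a CVSS v2-style temporal adjustment of the base score followed by a weighted blend with a normalised exploitation frequency and a threshold-based risk level. The weighting, thresholds and factor values are illustrative assumptions; the paper derives its own equation.

    def temporal_score(base, exploitability=0.97, remediation=0.95, confidence=1.0):
        # CVSS v2-style temporal metrics act as multiplicative factors on the base score.
        return round(base * exploitability * remediation * confidence, 1)

    def risk_level(base_score, exploit_freq, w=0.5):
        # exploit_freq: observed exploitation frequency for the vulnerability category, normalised to 0..1
        score = (1 - w) * temporal_score(base_score) + w * 10 * exploit_freq
        if score >= 7.0:
            return "High"
        return "Medium" if score >= 4.0 else "Low"

    print(risk_level(base_score=9.3, exploit_freq=0.6))   # -> "High" with these assumed inputs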
Procedia PDF Downloads 321
18970 High Fidelity Interactive Video Segmentation Using Tensor Decomposition, Boundary Loss, Convolutional Tessellations, and Context-Aware Skip Connections
Authors: Anthony D. Rhodes, Manan Goel
Abstract:
We provide a high-fidelity deep learning algorithm (HyperSeg) for interactive video segmentation tasks using a dense convolutional network with context-aware skip connections and compressed, 'hypercolumn' image features combined with a convolutional tessellation procedure. In order to maintain high output fidelity, our model crucially processes and renders all image features in high resolution, without utilizing downsampling or pooling procedures. We maintain this consistent, high-grade fidelity efficiently in our model chiefly through two means: (1) we use a statistically principled tensor decomposition procedure to modulate the number of hypercolumn features and (2) we render these features in their native resolution using a convolutional tessellation technique. For improved pixel-level segmentation results, we introduce a boundary loss function; for improved temporal coherence in video data, we include temporal image information in our model. Through experiments, we demonstrate the improved accuracy of our model against baseline models for interactive segmentation tasks using high-resolution video data. We also introduce a benchmark video segmentation dataset, the VFX Segmentation Dataset, which contains over 27,046 high-resolution video frames, including green screen and various composited scenes with corresponding, hand-crafted, pixel-level segmentations. Our work improves state-of-the-art segmentation fidelity with high-resolution data and can be used across a broad range of application domains, including VFX pipelines and medical imaging disciplines.
Keywords: computer vision, object segmentation, interactive segmentation, model compression
Procedia PDF Downloads 120