Search results for: monitoring network optimization
1364 Linking Adaptation to Climate Change and Sustainable Development: The Case of ClimAdaPT.Local in Portugal
Authors: A. F. Alves, L. Schmidt, J. Ferrao
Abstract:
Portugal is one of the most vulnerable European countries to the impacts of climate change. These include: temperature increase; coastal sea level rise; desertification and drought in the countryside; and frequent and intense extreme weather events. Hence, adaptation strategies to climate change are of great importance. This is what was addressed by ClimAdaPT.Local. This policy-oriented project had the main goal of developing 26 Municipal Adaptation Strategies for Climate Change, through the identification of specific present and future local vulnerabilities, the training of municipal officials, and the engagement of local communities. It is intended to be replicated throughout the whole territory and to stimulate the creation of a national network of local adaptation in Portugal. Supported by methodologies and tools specifically developed for this project, our paper is based on the surveys, training and stakeholder engagement workshops implemented at municipal level. In an 'adaptation-as-learning' process, these tools functioned as a social-learning platform and an exercise in knowledge and policy co-production. The results allowed us to explore the nature of local vulnerabilities and the exposure of gaps in the context of a reappraisal of both future climate change adaptation opportunities and possible dysfunctionalities in the governance arrangements of municipal Portugal. Development issues are highlighted when we address the sectors and social groups that are both more sensitive and more vulnerable to the impacts of climate change. We argue that a pluralistic dialogue and a common framing can be established between them, with great potential for transformational adaptation. Observed climate change, present-day climate variability and future expectations of change are great societal challenges which should be understood in the context of the sustainable development agenda.
Keywords: adaptation, ClimAdaPT.Local, climate change, Portugal, sustainable development
Procedia PDF Downloads 197
1363 Technological Affordances of a Mobile Fitness Application: A Role of Escapism and Social Outcome Expectation
Authors: Inje Cho
Abstract:
The leading health risks threatening the world today are associated with a modern lifestyle characterized by sedentary behavior, stress, anxiety, and an obesogenic food environment. To counter this alarming trend, the Centers for Disease Control and Prevention have proffered Physical Activity guidelines to bolster physical engagement. Concurrently, the proliferation of smartphones and mobile applications has brought a wave of fitness applications aimed at invigorating exercise adherence and real-time activity monitoring. Grounded in uses and gratifications theory, this study delves into the technological affordances of mobile fitness applications, discerning the mediating influences of escapism and social outcome expectations on attitudes and exercise intention. The theory explains how individuals employ distinct communication mediums to satiate their exigencies and desires. Technological affordances manifest as attributes of emerging technologies that galvanize personal engagement in physical activities. Features of mobile fitness applications include affordances for goal setting, virtual rewards, peer support, and exercise information. Escapism, denoting the inclination to disengage from normal routines, has emerged as a salient motivator for the consumption of new media. This study postulates that individuals' perceptions of technological affordances within mobile fitness applications can affect escapism and social outcome expectations, potentially influencing attitude and behavior formation. Thus, an integrated model has been developed to empirically examine the interrelationships between technological affordances, escapism, social outcome expectations, and exercise intention. Structural Equation Modelling serves as the methodological tool, and a cohort of 400 Fitbit users shall be enlisted from Prolific, a data collection platform. A sequence of multivariate data analyses will scrutinize both the measurement and hypothesized structural models. By delving into the effects of mobile fitness applications, this study contributes to the growing body of new media studies in sport management. Moreover, the novel integration of uses and gratifications theory and technological affordances, via the prism of escapism, illustrates the dynamics that underlie mobile fitness users' attitudes and behavioral intentions. Therefore, the findings from this study contribute to theoretical understanding and provide pragmatic insights to developers and practitioners in optimizing the impact of mobile fitness applications.
Keywords: technological affordances, uses and gratification, mobile fitness apps, escapism, physical activity
Procedia PDF Downloads 80
1362 Dry Modifications of PCL/Chitosan/PCL Tissue Scaffolds
Authors: Ozan Ozkan, Hilal Turkoglu Sasmazel
Abstract:
Natural polymers are widely used in tissue engineering applications because of their biocompatibility, biodegradability and solubility in the physiological medium. Synthetic polymers are also widely utilized in tissue engineering applications, because they carry no risk of infectious diseases and do not cause immune system reactions. However, the disadvantages of both polymer types prevent their efficient individual use as tissue scaffolds. Therefore, the idea of using natural and synthetic polymers together as a single 3D hybrid scaffold, which has the advantages of both and the disadvantages of neither, has entered the literature. On the other hand, even though these hybrid structures support cell adhesion and/or proliferation, various surface modification techniques are applied to their surfaces to create topographical changes and to obtain the reactive functional groups required for the immobilization of biomolecules, especially on the surfaces of synthetic polymers, in order to improve cell adhesion and proliferation. The study presented here aimed to improve the surface functionality and topography of layer-by-layer electrospun 3D poly-epsilon-caprolactone/chitosan/poly-epsilon-caprolactone hybrid tissue scaffolds by using atmospheric pressure plasma methods, and thus to improve cell adhesion and proliferation on these tissue scaffolds. The creation of functional hydroxyl and amine groups and of topographical changes on the scaffold surfaces was realized by using two different atmospheric pressure plasma systems (nozzle type and dielectric barrier discharge (DBD) type) operated under different gas media (air, Ar+O2, Ar+N2). The plasma modification time and distance for the nozzle type plasma system, as well as the plasma modification time and the gas flow rate for the DBD type plasma system, were optimized by monitoring the changes in surface hydrophilicity with contact angle measurements. The topographical and chemical characterizations of the modified biomaterials' surfaces were carried out with SEM and ESCA, respectively. The results showed that the atmospheric pressure plasma modifications carried out with both the nozzle type plasma and the DBD plasma caused topographical and functionality changes on the surfaces of the layer-by-layer electrospun tissue scaffolds. However, the shelf life studies indicated that the hydrophilicity introduced to the surfaces was mainly due to the functionality changes. According to the optimized results, samples treated with nozzle type air plasma modification applied for 9 minutes from a distance of 17 cm and with Ar+O2 DBD plasma modification applied for 1 minute under a 70 cm3/min O2 flow rate were found to have the highest hydrophilicity compared to pristine samples.
Keywords: biomaterial, chitosan, hybrid, plasma
Procedia PDF Downloads 276
1361 Optimization of Chitosan Membrane Production Parameters for Zinc Ion Adsorption
Authors: Peter O. Osifo, Hein W. J. P. Neomagus, Hein V. D. Merwe
Abstract:
Chitosan materials from different sources of raw material were characterized in order to determine the optimal preparation conditions and parameters for membrane production. The membrane parameters such as molecular weight, viscosity, and degree of deacetylation were used to evaluate the membrane performance for zinc ion adsorption. The molecular weight of the chitosan was found to influence the viscosity of the chitosan/acetic acid solution. An increase in molecular weight (60000-400000 kg.kmol-1) of the chitosan resulted in a higher viscosity (0.05-0.65 Pa.s) of the chitosan/acetic acid solution. The effect of the degree of deacetylation on the viscosity is not significant. The effect of the membrane production parameters (chitosan and acetic acid concentration) on the viscosity is mainly determined by the chitosan concentration. For higher chitosan concentrations, a membrane with a better adsorption capacity was obtained. The membrane adsorption capacity increases from 20-130 mg Zn per gram of wet membrane for an increase in chitosan concentration from 2-7 mass %. Chitosan concentrations below 2 and above 7.5 mass % produced membranes that lack good mechanical properties. The optimum manufacturing conditions, including chitosan concentration, acetic acid concentration, sodium hydroxide concentration and crosslinking, for chitosan membranes within the workable range were defined by the criteria of adsorption capacity and flux. The adsorption increases (50-120 mg.g-1) as the acetic acid concentration increases (1-7 mass %). The sodium hydroxide concentration seems not to have a large effect on the adsorption characteristics of the membrane; however, a maximum was reached at a concentration of 5 mass %. The adsorption capacity per gram of wet membrane strongly increases with the chitosan concentration in the acetic acid solution but remains constant per gram of dry chitosan. The optimum solution for membrane production consists of 7 mass % chitosan and 4 mass % acetic acid in de-ionised water. The sodium hydroxide concentration for phase inversion is at its optimum at 5 mass %. The optimum cross-linking time was determined to be 6 hours (crosslinking percentage of 18%). As the cross-linking time increases, the adsorption of zinc decreases (150-50 mg.g-1) in the time range of 0 to 12 hours. After a crosslinking time of 12 hours, the adsorption capacity remains constant. This trend is comparable to the effect on flux through the membrane. The flux decreases (10-3 L.m-2.hr-1) with an increase in crosslinking time in the range of 0 to 12 hours and reaches a constant minimum after 12 hours.
Keywords: chitosan, membrane, waste water, heavy metal ions, adsorption
Procedia PDF Downloads 387
1360 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network
Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka
Abstract:
Wireless Sensor Networks are used in many applications to collect sensed data from different sources. Sensed data has to be delivered through the sensors' wireless interface using multi-hop communication towards the sink. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs. Reducing the energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which mobile sinks use Prim's algorithm to collect data. The authors have implemented concepts common to both protocols, such as deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members. The authors have compared the performance of both protocols by taking statistics based on performance parameters such as delay, packet drop, packet delivery ratio, energy available, and control overhead. The authors conclude the paper by showing that EEDG is more efficient than the IAR protocol, but with a few limitations, which include unaddressed issues such as redundancy removal, idle listening, and the mobile sink's pause/wait state at the node. In future work, we plan to concentrate on these limitations to develop a new energy-efficient protocol which will help in improving the lifetime of the WSN.
Keywords: aggregation, consumption, data gathering, efficiency
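The IAR scheme above builds the mobile sinks' routes with Prim's algorithm. The following is a minimal Python sketch of that idea only, not the authors' implementation: the node coordinates, the single mobile sink, and the depth-first visiting order are illustrative assumptions.

```python
import math

# Hypothetical 2-D coordinates of cluster heads / rendezvous points (illustrative only).
nodes = {"sink": (0, 0), "A": (10, 4), "B": (6, 12), "C": (15, 9), "D": (3, 7)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def prim_mst(nodes, root="sink"):
    """Prim's algorithm: grow a minimum spanning tree outward from the mobile sink."""
    in_tree = {root}
    edges = []                      # (parent, child) pairs of the MST
    while len(in_tree) < len(nodes):
        best = None
        for u in in_tree:
            for v in nodes:
                if v in in_tree:
                    continue
                d = dist(nodes[u], nodes[v])
                if best is None or d < best[0]:
                    best = (d, u, v)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

def visiting_schedule(edges, root="sink"):
    """A depth-first walk of the MST gives one possible visiting order for the sink."""
    children = {}
    for u, v in edges:
        children.setdefault(u, []).append(v)
    order, stack = [], [root]
    while stack:
        u = stack.pop()
        order.append(u)
        stack.extend(reversed(children.get(u, [])))
    return order

mst = prim_mst(nodes)
print("MST edges:", mst)
print("Visiting order:", visiting_schedule(mst))
```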
Procedia PDF Downloads 497
1359 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of dyslexia and dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for dyslexia and dysgraphia frequently rely on subjective assessments, which makes it difficult to cover a wide range of risk indicators and is time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilized various deep learning techniques for detecting the risk of dyslexia and dysgraphia. Specifically, ResNet50, VGG16 and YOLOv8 were integrated to detect the handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of dyslexia and dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of dyslexia and dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
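The hyperparameter tuning step described above (an MLP fed by CNN outputs, tuned with Grid Search CV) can be sketched as follows. This is a hedged illustration, not the authors' code: the fused feature vectors, labels, and parameter grid are placeholders, and scikit-learn's MLPClassifier stands in for the study's MLP.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier

# Assume the CNN stages (ResNet50 / VGG16 / YOLOv8) have already been run and their
# handwriting-related outputs flattened into one feature vector per child, together
# with the other input data mentioned above. Random data stands in for those features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))        # hypothetical fused feature vectors
y = rng.integers(0, 2, size=300)      # 1 = at risk, 0 = not at risk (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Grid Search CV over a few MLP hyperparameters, mirroring the tuning step described above.
param_grid = {
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
    "alpha": [1e-4, 1e-3, 1e-2],
    "learning_rate_init": [1e-3, 1e-2],
}
search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)

print("best hyperparameters:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```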
Procedia PDF Downloads 115
1358 The Effects of Geographical and Functional Diversity of Collaborators on Quality of Knowledge Generated
Authors: Ajay Das, Sandip Basu
Abstract:
Introduction: There is increasing recognition that diverse streams of knowledge can often be recombined in novel ways to generate new knowledge. However, knowledge recombination theory has not been applied to examine the effects of collaborator diversity on the quality of knowledge such collaborators produce. This is surprising because one would expect that a collaborative team with certain aspects of diversity should be able to recombine process elements related to knowledge development, which are relatively tacit but also complementary because of the collaborators' varying backgrounds. Theory and Hypotheses: We propose to examine two aspects of diversity in the environments of collaborative teams to try to capture such potential recombinations of relatively tacit process knowledge. The first aspect of diversity in team members' environments is geographical. Collaborators with more geographical distance between them (perhaps working in different countries) often have more autonomy in the processes they adopt for knowledge development. In the absence of overt monitoring, such collaborators are likely to adopt differing approaches to knowledge development. The sharing of such varying approaches among collaborators is likely to result in greater quality of the common collaborative pursuit. The second aspect is diversity in the work backgrounds of team members. Such diversity can also increase the potential for knowledge recombination. For example, if one or more members are from a manufacturing center (versus all of them being from a purely R&D center), such members will provide unique perspectives on the implementation of innovative ideas. Again, knowledge that has been evaluated from these diverse perspectives is likely to be of a higher quality. In addition to the above aspects of environmental diversity among team members, we also plan to examine the extent to which individual collaborators are in different environments from the primary innovation center of their employing firms. Proposed Methods: We will test our model on a sample of firms in the semiconductor industry. Our level of analysis will be individual patents generated by these firms and the teams involved in their generation. Information on the manufacturing activities of our sample firms will be obtained from SEMI, a proprietary database of the semiconductor industry, as well as company 10-K reports. Conclusion: We believe that our results will represent a preliminary attempt to understand how various forms of diversity in collaborative teams impact the knowledge development process. Our dependent variable of knowledge quality is important to study since higher values of this variable can drive not only firm performance but also the broader development of regions and societies through spillover impacts on future innovation. The results of this study will, therefore, inform future research and practice in innovation, geographical location, and vertical integration.
Keywords: innovation, manufacturing strategy, knowledge, diversity
Procedia PDF Downloads 352
1357 Building the Professional Readiness of Graduates from Day One: An Empirical Approach to Curriculum Continuous Improvement
Authors: Fiona Wahr, Sitalakshmi Venkatraman
Abstract:
Industry employers require new graduates to bring with them a range of knowledge, skills and abilities which mean these new employees can immediately make valuable work contributions. These will be a combination of discipline and professional knowledge, skills and abilities which give graduates the technical capabilities to solve practical problems whilst interacting with a range of stakeholders. Underpinning the development of these discipline and professional knowledge, skills and abilities are “enabling” knowledge, skills and abilities which assist students to engage in learning. These are academic and learning skills which provide essential common starting points for students entering the course and form the foundation for the fully developed graduate knowledge, skills and abilities. This paper reports on a project created to introduce and strengthen these enabling skills in the first semester of a Bachelor of Information Technology degree in an Australian polytechnic. The project uses an action research approach in the context of ongoing continuous improvement for the course to enhance the overall learning experience, learning sequencing, graduate outcomes, and, most importantly in the first semester, student engagement and retention. The focus is on implementing the new curriculum in first semester subjects of the course with the aim of developing the “enabling” learning skills, such as literacy, research and numeracy based knowledge, skills and abilities (KSAs). The approach used for the introduction and embedding of these KSAs, as both enablers of learning and as underpinnings of graduate attribute development, is presented. Building on previous publications which reported different aspects of this longitudinal study, this paper recaps the rationale for the curriculum redevelopment and then presents the quantitative findings on entering students' reading literacy and numeracy knowledge and skills, as well as their perceived research ability. The paper presents the methodology and findings for this stage of the research. Overall, the cohort exhibits mixed KSA levels in these areas, with a relatively low aggregated score. In addition, the paper describes the considerations for adjusting the design and delivery of the new subjects with a targeted learning experience, in response to the feedback gained through continuous monitoring. Such a strategy is aimed at accommodating the changing learning needs of the students and serves to support them towards achieving the enabling learning goals starting from day one of their higher education studies.
Keywords: enabling skills, student retention, embedded learning support, continuous improvement
Procedia PDF Downloads 248
1356 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebral spinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infections and neurological deficits. Bionaut Labs is developing an alternative way to treat Dandy-Walker malformation associated with non-communicating hydrocephalus. We utilize our discrete microsurgical Bionaut™ particles, which are controlled externally and remotely, to perform safe, accurate, effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, to directly normalize intracranial pressure. Bionaut™ allows for complex non-linear trajectories not feasible by any conventional surgical techniques. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to highly involved posterior craniotomy or shunts via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenge(s) as a manifestation of DWM and, therefore, is intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, and real-time imaging, as well as in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebral spinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot
Procedia PDF Downloads 221
1355 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) is considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation. However, the published results do not show satisfactory classification accuracy. This work was aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, MIT-BIH AF, Normal Sinus Rhythm RR Interval Database, and MIT-BIH Normal Sinus Rhythm Databases) were used for assessment. All time series were segmented into 1-min RR-interval windows and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find important features to discriminate between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detection method holds several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications.
Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
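A rough Python sketch of the PCA-plus-LVQ pipeline described above is given below. It is illustrative only: the four RR-interval features, the synthetic windows, and the single-prototype LVQ1 rule are assumptions, not the study's actual features, data, or network configuration.

```python
import numpy as np
from sklearn.decomposition import PCA

def rr_features(rr):
    """Four illustrative features per 1-min RR window (the paper's exact features are
    not listed here): mean RR, SDNN, RMSSD and pNN50."""
    diff = np.diff(rr)
    return np.array([rr.mean(), rr.std(), np.sqrt(np.mean(diff ** 2)),
                     np.mean(np.abs(diff) > 0.05)])

def lvq1_train(X, y, n_epochs=30, lr=0.05, seed=0):
    """Minimal LVQ1: one prototype per class, nudged toward same-class samples and
    away from other-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c][rng.integers((y == c).sum())] for c in classes])
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.linalg.norm(protos - X[i], axis=1))
            sign = 1.0 if classes[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return classes, protos

def lvq1_predict(classes, protos, X):
    return classes[np.argmin(np.linalg.norm(X[:, None, :] - protos[None], axis=2), axis=1)]

# Toy windows: 0 = normal sinus rhythm (regular RR), 1 = AF (irregular RR).
rng = np.random.default_rng(1)
windows = [rng.normal(0.8, 0.02, 75) for _ in range(50)] + \
          [rng.normal(0.7, 0.15, 85) for _ in range(50)]
labels = np.array([0] * 50 + [1] * 50)
X = np.vstack([rr_features(w) for w in windows])

X_pca = PCA(n_components=2).fit_transform(X)      # feature reduction step
classes, protos = lvq1_train(X_pca, labels)
pred = lvq1_predict(classes, protos, X_pca)
print("training accuracy:", (pred == labels).mean())
```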
Procedia PDF Downloads 268
1354 Cognitive Science Based Scheduling in Grid Environment
Authors: N. D. Iswarya, M. A. Maluk Mohamed, N. Vijaya
Abstract:
A grid is an infrastructure that allows the deployment of large distributed data sets from multiple locations to reach a common goal. Scheduling data-intensive applications becomes challenging as the data sets are very large. Only two solutions exist to tackle this challenging issue. First, the computation which requires huge data sets to be processed can be transferred to the data site. Second, the required data sets can be transferred to the computation site. In the former scenario, the computation cannot be transferred since the servers are storage/data servers with little or no computational capability. Hence, the second scenario can be considered for further exploration. During scheduling, transferring huge data sets from one site to another requires more network bandwidth. In order to mitigate this issue, this work focuses on incorporating cognitive science in scheduling. Cognitive science is the study of the human brain and its related activities. Current research is mainly focused on incorporating cognitive science in various computational modeling techniques. In this work, the problem-solving approach of the human brain is studied and incorporated into data-intensive scheduling in grid environments. Here, a cognitive engine (CE) is designed and deployed at various grid sites. The intelligent agents present in the CE help in analyzing the request and creating the knowledge base. Depending upon the link capacity, a decision is taken on whether to transfer the data sets or to partition them. Prediction of the next request is made by the agents to serve the requesting site with data sets in advance. This reduces the data availability time and data transfer time. The replica catalog and metadata catalog created by the agents assist in the decision-making process.
Keywords: data grid, grid workflow scheduling, cognitive artificial intelligence
Procedia PDF Downloads 394
1353 Security Report Profiling for Mobile Banking Applications in Indonesia Based on OWASP Mobile Top 10-2016
Authors: Bambang Novianto, Rizal Aditya Herdianto, Raphael Bianco Huwae, Afifah, Alfonso Brolin Sihite, Rudi Lumanto
Abstract:
The mobile banking application is a type of mobile application that is growing rapidly. This is driven by the ease of service and the time savings in making transactions. On the other hand, this certainly presents a challenge in terms of security. The use of mobile banking cannot be separated from cyberattacks that may occur, which can result in the theft of sensitive information or financial loss. Financial loss and the theft of sensitive information are the outcomes most to be avoided because, besides harming the user, they can also cause a loss of customer trust in a bank. Cyberattacks that are often carried out against mobile applications include phishing, hacking, theft, misuse of data, etc. A cyberattack can occur when a vulnerability is successfully exploited. The OWASP Mobile Top 10 has recorded the 10 vulnerabilities that are most commonly found in mobile applications. In addition, Android permissions also have the potential to cause vulnerabilities. Therefore, an overview of the profile of mobile banking applications becomes an urgent need, so that it can be taken into consideration by the parties involved for improving security. In this study, an experiment was conducted to capture the profile of mobile banking applications in Indonesia based on Android permissions and the OWASP Mobile Top 10 2016. The results show that there are six basic vulnerabilities based on the OWASP Mobile Top 10 that are most commonly found in mobile banking applications in Indonesia, i.e. M1: Improper Platform Usage, M2: Insecure Data Storage, M3: Insecure Communication, M5: Insufficient Cryptography, M7: Client Code Quality, and M9: Reverse Engineering. The most commonly requested Android permissions are internet access, network state access, and read phone status.
Keywords: mobile banking application, OWASP mobile top 10 2016, android permission, sensitive information, financial loss
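One way to reproduce the Android-permission part of such a profile is sketched below in Python, assuming the APKs have already been decoded (e.g., with apktool) so that a readable AndroidManifest.xml is available. The directory names are hypothetical, and this is not the authors' tooling.

```python
import xml.etree.ElementTree as ET
from collections import Counter
from pathlib import Path

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def manifest_permissions(manifest_path):
    """Read the <uses-permission> entries from a decoded AndroidManifest.xml and
    return the permission names."""
    root = ET.parse(manifest_path).getroot()
    return [elem.attrib.get(ANDROID_NS + "name", "")
            for elem in root.iter("uses-permission")]

def profile_apps(decoded_dirs):
    """Aggregate permission frequencies over a set of decoded banking apps."""
    counts = Counter()
    for d in decoded_dirs:
        counts.update(manifest_permissions(Path(d) / "AndroidManifest.xml"))
    return counts

# Hypothetical layout: one decoded APK directory per mobile banking app under test.
apps = ["decoded/bank_app_a", "decoded/bank_app_b", "decoded/bank_app_c"]
for permission, n_apps in profile_apps(apps).most_common(10):
    print(f"{permission:55s} requested by {n_apps} app(s)")
```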
Procedia PDF Downloads 141
1352 Linking Soil Spectral Behavior and Moisture Content for Soil Moisture Content Retrieval at Field Scale
Authors: Yonwaba Atyosi, Moses Cho, Abel Ramoelo, Nobuhle Majozi, Cecilia Masemola, Yoliswa Mkhize
Abstract:
Spectroscopy has been widely used to understand the hyperspectral remote sensing of soils. Accurate and efficient measurement of soil moisture is essential for precision agriculture. The aim of this study was to understand the spectral behavior of soil at different soil water content levels and to identify the significant spectral bands for soil moisture content retrieval at field scale. The study consisted of 60 soil samples from a maize farm, divided into four different treatments representing different moisture levels. Spectral signatures were measured for each sample in the laboratory under artificial light using an Analytical Spectral Device (ASD) spectrometer, covering a wavelength range from 350 nm to 2500 nm with a spectral resolution of 1 nm. The results showed that the absorption features at 1450 nm, 1900 nm, and 2200 nm were particularly sensitive to soil moisture content and exhibited strong correlations with the water content levels. Continuum removal was implemented in the R programming language to enhance the absorption features of soil moisture and to precisely understand its spectral behavior at different water content levels. Statistical analysis using partial least squares regression (PLSR) models was performed to quantify the correlation between the spectral bands and soil moisture content. This study provides insights into the spectral behavior of soil at different water content levels and identifies the significant spectral bands for soil moisture content retrieval. The findings highlight the potential of spectroscopy for non-destructive and rapid soil moisture measurement, which can be applied in various fields such as precision agriculture, hydrology, and environmental monitoring. However, it is important to note that the spectral behavior of soil can be influenced by various factors such as soil type, texture, and organic matter content, and caution should be taken when applying the results to other soil systems. The results of this study showed good agreement between measured and predicted values of soil moisture content, with high R2 and low root mean square error (RMSE) values. Model validation using independent data was satisfactory for all the studied soil samples. The results have significant implications for developing high-resolution and precise field-scale soil moisture retrieval models. These models can be used to understand the spatial and temporal variation of soil moisture content in agricultural fields, which is essential for managing irrigation and optimizing crop yield.
Keywords: soil moisture content retrieval, precision agriculture, continuum removal, remote sensing, machine learning, spectroscopy
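The study implemented continuum removal in R; the sketch below shows the same two steps (continuum removal followed by PLSR) in Python under assumed data. The spectra and moisture values are random stand-ins, and the number of PLSR components is arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def continuum_removal(wl, refl):
    """Divide a reflectance spectrum by its upper convex hull so that absorption
    features (e.g., near 1450, 1900 and 2200 nm) stand out."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []                                  # upper hull, monotone-chain style
    for p in zip(wl, refl):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)
    return refl / continuum

# Hypothetical data: 60 spectra (350-2500 nm at 1 nm) and their moisture values.
rng = np.random.default_rng(0)
wl = np.arange(350, 2501)
spectra = rng.uniform(0.2, 0.6, size=(60, wl.size))     # stand-in ASD reflectances
moisture = rng.uniform(5, 35, size=60)                  # stand-in moisture content (%)

X = np.vstack([continuum_removal(wl, s) for s in spectra])
pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, X, moisture, cv=5).ravel()
rmse = np.sqrt(np.mean((pred - moisture) ** 2))
print(f"cross-validated RMSE: {rmse:.2f} % moisture")
```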
Procedia PDF Downloads 100
1351 Urine Neutrophil Gelatinase-Associated Lipocalin as an Early Marker of Acute Kidney Injury in Hematopoietic Stem Cell Transplantation Patients
Authors: Sara Ataei, Maryam Taghizadeh-Ghehi, Amir Sarayani, Asieh Ashouri, Amirhossein Moslehi, Molouk Hadjibabaie, Kheirollah Gholami
Abstract:
Background: Acute kidney injury (AKI) is common in hematopoietic stem cell transplantation (HSCT) patients, with an incidence of 21–73%. Prevention and early diagnosis reduce the frequency and severity of this complication. Predictive biomarkers are of major importance for timely diagnosis. Neutrophil gelatinase-associated lipocalin (NGAL) is a widely investigated novel biomarker for early diagnosis of AKI. However, no study has assessed NGAL for AKI diagnosis in HSCT patients. Methods: We performed further analyses on data gathered from our recent trial to evaluate the performance of urine NGAL (uNGAL) as an indicator of AKI in 72 allogeneic HSCT patients. AKI diagnosis and severity were assessed using the Risk–Injury–Failure–Loss–End-stage renal disease and AKI Network criteria. We assessed uNGAL on days -6, -3, +3, +9 and +15. Results: Time-dependent Cox regression analysis revealed a statistically significant relationship between uNGAL and AKI occurrence (HR = 1.04 (1.008–1.07), P = 0.01). There was a relation between the uNGAL day +9 to baseline ratio and the incidence of AKI (unadjusted HR = 1.047 (1.012–1.083), P < 0.01). The area under the receiver-operating characteristic curve for the day +9 to baseline ratio was 0.86 (0.74–0.99, P < 0.01), and a cut-off value of 2.62 was 85% sensitive and 83% specific in predicting AKI. Conclusions: Our results indicated that an increase in uNGAL augmented the risk of AKI and that the change of day +9 uNGAL concentrations from baseline could be of value for predicting AKI in HSCT patients. Additionally, uNGAL changes preceded serum creatinine rises by nearly 2 days.
Keywords: acute kidney injury, hematopoietic stem cell transplantation, neutrophil gelatinase-associated lipocalin, receiver-operating characteristic curve
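The ROC analysis reported above (AUC 0.86, cut-off 2.62) can be illustrated with the following Python sketch. The ratios and outcomes below are invented for illustration; only the procedure (ROC curve, AUC, and one common way to pick a cut-off, the Youden index) reflects the kind of analysis described, not necessarily the study's exact method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical day +9 / baseline uNGAL ratios and AKI outcomes (1 = AKI); the figures
# quoted above come from the study's own cohort, not from these toy values.
ratio = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 2.7, 2.9, 3.4, 3.8, 4.5,
                  0.7, 0.9, 1.2, 1.5, 1.8, 2.0, 2.2, 2.8, 3.1, 5.0])
aki   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
                  0, 0, 0, 0, 0, 0, 1, 1, 1, 1])

auc = roc_auc_score(aki, ratio)
fpr, tpr, thresholds = roc_curve(aki, ratio)

# Choose the cut-off that maximises Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
best = np.argmax(j)
print(f"AUC = {auc:.2f}")
print(f"optimal cut-off = {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```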
Procedia PDF Downloads 409
1350 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out as an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the predictive accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between PT and MTS; 3) analyzing the scope of application of CoM-IV; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes its forecast using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of survival in BC and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. In summary: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease, with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival
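Under the exponential growth assumption used by CoM-IV, the number of doublings and the length of a 'non-visible' period follow from simple volume arithmetic. The Python sketch below illustrates that arithmetic only; the cell volume, detection diameter, and doubling time are generic modelling assumptions, not the study's fitted values.

```python
import math

CELL_VOLUME_CM3 = 1e-9     # common modelling assumption: ~10^9 cells per cm^3 of tumor

def sphere_volume(diameter_cm):
    return math.pi / 6.0 * diameter_cm ** 3

def doublings_between(d_start_cm, d_end_cm):
    """Number of volume doublings to grow from one diameter to another, under the
    exponential growth assumption V(t) = V0 * 2**(t / DT)."""
    return math.log2(sphere_volume(d_end_cm) / sphere_volume(d_start_cm))

def doublings_from_single_cell(diameter_cm):
    return math.log2(sphere_volume(diameter_cm) / CELL_VOLUME_CM3)

# Illustrative numbers (not taken from the study): a primary tumor first detected at
# 1 cm with a volume doubling time of 100 days.
dt_days = 100.0
n_nonvisible = doublings_from_single_cell(1.0)
print(f"'non-visible' period: {n_nonvisible:.1f} doublings "
      f"= approx. {n_nonvisible * dt_days / 365:.1f} years")
print(f"1 cm -> 3 cm: {doublings_between(1.0, 3.0):.1f} further doublings")
```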
Procedia PDF Downloads 335
1349 Discourse Analysis: Where Cognition Meets Communication
Authors: Iryna Biskub
Abstract:
The interdisciplinary approach to modern linguistic studies is exemplified by the merging of various research methods, which sometimes causes complications related to the verification of research results. This methodological confusion can be resolved by creating new techniques of linguistic analysis that combine several scientific paradigms. Modern linguistics has developed truly productive and efficient methods for the investigation of cognitive and communicative phenomena, of which language is the central issue. In the field of discourse studies, one of the best examples of such research methods is Critical Discourse Analysis (CDA). CDA can be viewed both as a method of investigation and as a critical multidisciplinary perspective. In CDA the position of the scholar is crucial, since it exemplifies his or her social and political convictions. The generally accepted approach to obtaining scientifically reliable results is to use a special well-defined scientific method for researching special types of language phenomena: cognitive methods applied to the exploration of cognitive aspects of language, whereas communicative methods are thought to be relevant only for the investigation of the communicative nature of language. In recent decades, discourse as a sociocultural phenomenon has been the focus of careful linguistic research. The very concept of discourse represents an integral unity of cognitive and communicative aspects of human verbal activity. Since a human being is never able to discriminate between the cognitive and communicative planes of discourse communication, it does not make much sense to apply cognitive and communicative methods of research in isolation. It is possible to modify the classical CDA procedure by mapping human cognitive procedures onto the strategic communicative planning of discourse communication. The analysis of the electronic petition 'Block Donald J Trump from UK entry. The signatories believe Donald J Trump should be banned from UK entry' (584,459 signatures) and the parliamentary debates on it has demonstrated the ability to map cognitive and communicative levels in the following way: the strategy of discourse modeling (communicative level) overlaps with the extraction of semantic macrostructures (cognitive level); the strategy of discourse management overlaps with the analysis of local meanings in discourse communication; the strategy of cognitive monitoring of the discourse overlaps with the formation of attitudes and ideologies at the cognitive level. Thus, the experimental data have shown that it is possible to develop a new complex methodology of discourse analysis, where cognition would meet communication, both metaphorically and literally. The same approach may prove productive for the creation of computational models of human-computer interaction, where the automatic generation of a particular type of discourse could be based on the rules of strategic planning involving the cognitive models of CDA.
Keywords: cognition, communication, discourse, strategy
Procedia PDF Downloads 254
1348 Investigate the Rural Mobility and Accessibility Challenges of Seniors
Authors: Tom Ryan
Abstract:
This paper investigates the rural mobility and accessibility challenges of a specific target group: seniors. The target group is those over 66 years of age who are entitled to use the Public Transport (PT) Free Travel Scheme in rural Ireland. The paper explores at a high level some of the projected rural PT challenges and requirements over the next 10-15 years, noting that statistical predictions show that there will be a significant demographic shift within the seniors' age profile. Using the PESTEL framework, the literature review explored existing research concerning the mobility and accessibility challenges, and the opportunities, seniors face. Twenty-seven qualitative in-depth interviews with stakeholders within the ecosystem were undertaken. The stakeholders included rural PT customers, Local Link managers, NTA senior management, a Minister of State, and a European Parliament policymaker. Tier 1 interviewee feedback highlights that no PT network exists for rural patients to access hospital facilities. There was no evidence from the Tier 2 research findings to show that health policymakers and transport planners are working to deliver a national solution to support patients in getting access to hospital appointments. Several research interviewees discussed the theme of isolation and the perceived stigma of senior males utilising PT. The findings indicated that Mobility as a Service (MaaS) is potentially revolutionary in the PT arena. Finally, this paper suggests several short-, medium- and long-term recommendations based on the research findings. These recommendations are a potential springboard to ensure that rural PT is suitable for future Irish generations.
Keywords: accessibility, active ageing, car dependence, isolation, seniors health issues, behavioural changes, environmental challenges, internet of things, demand responsive, mobility as a service
Procedia PDF Downloads 109
1347 Spatial and Temporal Analysis of Forest Cover Change with Special Reference to Anthropogenic Activities in Kullu Valley, North-Western Indian Himalayan Region
Authors: Krisala Joshi, Sayanta Ghosh, Renu Lata, Jagdish C. Kuniyal
Abstract:
Throughout the world, monitoring and estimating the changing pattern of forests across diverse landscapes through remote sensing is instrumental in understanding the interactions of human activities and the ecological environment with the changing climate. Forest change detection using satellite imagery has emerged as an important means of gathering information on a regional scale. Kullu valley in Himachal Pradesh, India is situated in a transitional zone between the lesser and the greater Himalayas. Thus, it presents a typical rugged mountainous terrain with moderate to high altitude, which varies from 1200 meters to over 6000 meters. Due to changes in agricultural cropping patterns, urbanization, industrialization, hydropower generation, climate change, tourism, and anthropogenic forest fire, it has undergone a tremendous transformation in forest cover in the past three decades. The loss and degradation of forest cover result in soil erosion, loss of biodiversity including damage to wildlife habitats, degradation of watershed areas, and deterioration of the overall quality of nature and life. A supervised classification of LANDSAT satellite data was performed to assess the changes in forest cover in Kullu valley over the years 2000 to 2020. The Normalized Burn Ratio (NBR) was calculated to discriminate between burned and unburned areas of the forest. Our study reveals that in Kullu valley the number of forest fire incidents, specifically those due to anthropogenic activities, has been on the rise each subsequent year. The main objective of the present study is, therefore, to estimate the change in the forest cover of Kullu valley, to address the various social aspects responsible for the anthropogenic forest fires, and to assess their impact on significant changes in regional climatic factors, specifically temperature, humidity, and precipitation, over three decades with the help of satellite imagery and ground data. The main outcome of the paper, we believe, will be helpful for the administration in making a quantitative assessment of forest cover changes due to anthropogenic activities and in devising long-term measures for creating awareness among the local people of the area.
Keywords: Anthropogenic Activities, Forest Change Detection, Normalized Burn Ratio (NBR), Supervised Classification
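The NBR step mentioned above has a standard band-ratio form. The following Python sketch shows that computation and a pre/post-fire difference (dNBR) on stand-in arrays; the band choice (Landsat 8 bands 5 and 7) and the 0.27 threshold are common conventions assumed here, not values taken from the study.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    nir = nir.astype("float64")
    swir2 = swir2.astype("float64")
    return (nir - swir2) / np.clip(nir + swir2, 1e-6, None)

def burn_severity(nbr_pre, nbr_post):
    """dNBR = pre-fire NBR - post-fire NBR; higher values indicate burned area."""
    return nbr_pre - nbr_post

# Stand-in reflectance rasters; in practice these would be the LANDSAT NIR and SWIR-2
# bands (e.g., bands 5 and 7 of Landsat 8) read from the 2000 and 2020 scenes.
rng = np.random.default_rng(0)
nir_pre, swir2_pre = rng.uniform(0.2, 0.5, (100, 100)), rng.uniform(0.05, 0.2, (100, 100))
nir_post, swir2_post = rng.uniform(0.1, 0.4, (100, 100)), rng.uniform(0.1, 0.3, (100, 100))

dnbr = burn_severity(nbr(nir_pre, swir2_pre), nbr(nir_post, swir2_post))
burned_mask = dnbr > 0.27          # a commonly used moderate-severity threshold
print(f"burned fraction of the scene: {burned_mask.mean():.1%}")
```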
Procedia PDF Downloads 173
1346 Study of Structural Behavior and Proton Conductivity of Inorganic Gel Paste Electrolyte at Various Phosphorous to Silicon Ratio by Multiscale Modelling
Authors: P. Haldar, P. Ghosh, S. Ghoshdastidar, K. Kargupta
Abstract:
In polymer electrolyte membrane fuel cells (PEMFC), the membrane electrode assembly (MEA) consists of two platinum-coated carbon electrodes sandwiching one proton-conducting, phosphoric acid-doped polymeric membrane. Due to low mechanical stability, flooding and fuel crossover, the application of phosphoric acid in a polymeric membrane is very critical. Phosphorus- and silica-based 3D inorganic gels have gained attention in the field of supercapacitors, fuel cells and metal hydride batteries due to their thermally stable, highly proton-conductive behavior. Also, as a large amount of water molecules and phosphoric acid can easily get trapped in the Si-O-Si network cavities, leaching out is prevented. In this study, we have performed molecular dynamics (MD) simulations and first-principles calculations to understand the structural, electronic, electrochemical and morphological behavior of this inorganic gel at various P to Si ratios. We have used dipole-dipole interactions, H bonding, and van der Waals forces to study the main interactions between the molecules. A 'structure-property-performance' mapping is initiated to determine the optimum P to Si ratio for the best proton conductivity. We have performed the MD simulations at various temperatures to understand the temperature dependence of proton conductivity. The observed results lead to a proposed model which fits well with experimental data and other literature values. We have also studied the mechanism behind the proton conductivity. Finally, we have proposed a structure for the gel paste with the optimum P to Si ratio.
Keywords: first principle calculation, molecular dynamics simulation, phosphorous and silica based 3D inorganic gel, polymer electrolyte membrane fuel cells, proton conductivity
Procedia PDF Downloads 129
1345 Pill-Box Dispenser as a Strategy for Therapeutic Management: A Qualitative Evaluation
Authors: Bruno R. Mendes, Francisco J. Caldeira, Rita S. Luís
Abstract:
Population ageing is directly correlated with an increase in medicine consumption. Beyond the latter and the polymedicated profile of the elderly, there is a need for pharmacotherapeutic monitoring due to cognitive and physical impairment. In this sense, the tracking, organization and administration of medicines become a daily challenge, and the pill-box dispenser system a solution. The pill-box dispenser (system) consists of a small compartmentalized container for unit dose organization, which means a container able to correlate the patient's prescribed dose regimen and the time schedule of intake. In many European countries, this system is part of the pharmacist's role in clinical pharmacy. Despite this simple solution, therapy compliance is only possible if the patient adheres to the system, so it is important to establish a qualitative and quantitative analysis of the patient's perception of the benefits and risks of the pill-box dispenser, as well as the identification of the ideal system. The analysis was conducted through an observational study, based on the application of a standardized questionnaire structured with the numerical Likert scale (5 levels) and previously validated on the population. The study was performed during a limited period of time on a randomized sample of 188 participants. The questionnaire consisted of 22 questions: 6 background measures and 16 specific measures. The standards for the final comparative analysis were obtained from the state of the art on the subject. The study carried out using the Likert scale afforded a degree of agreement and discordance between measures (sample vs. standard) of 56.25% and 43.75%, respectively. It was concluded that the pill-box dispenser has greater acceptance among a younger population, which was not the initial target of the system; however, this allows us to expect high adherence in the future. Additionally, it was noted that the cost associated with this service is not a limiting factor for its use. The pill-box dispenser system, as currently implemented, demonstrates an important weakness regarding the quality and effectiveness of the medicines, which is not understood by the patient, revealing a significant lack of literacy where medicines are concerned. The characteristics of an ideal system remain unchanged, which means that the size, appearance and availability of information on the pill-box continue to be indispensable elements for compliance with the system. The pill-box dispenser remains unsuitable regarding container size and the type of treatment to which it applies. Despite that, it might be a future standard for clinical pharmacy, allowing a differentiation of the pharmacist's role, as well as a wider range of applications to other age groups and treatments.
Keywords: clinical pharmacy, medicines, patient safety, pill-box dispenser
Procedia PDF Downloads 197
1344 A Comprehensive Study on Freshwater Aquatic Life Health Quality Assessment Using Physicochemical Parameters and Planktons as Bio Indicator in a Selected Region of Mahaweli River in Kandy District, Sri Lanka
Authors: S. M. D. Y. S. A. Wijayarathna, A. C. A. Jayasundera
Abstract:
The Mahaweli River is the longest and largest river in Sri Lanka, and it is the major drinking water source for a large portion of the 2.5 million inhabitants of the Central Province. The aim of this study was the determination of water quality and aquatic life health quality in a selected region of the Mahaweli River. Six sampling locations (Site 1: 7° 16' 50" N, 80° 40' 00" E; Site 2: 7° 16' 34" N, 80° 40' 27" E; Site 3: 7° 16' 15" N, 80° 41' 28" E; Site 4: 7° 14' 06" N, 80° 44' 36" E; Site 5: 7° 14' 18" N, 80° 44' 39" E; Site 6: 7° 13' 32" N, 80° 46' 11" E) with various anthropogenic activities on the bank of the river were selected for a period of three months, from Tennekumbura Bridge to Victoria Reservoir. Temperature, pH, Electrical Conductivity (EC), Total Dissolved Solids (TDS), Dissolved Oxygen (DO), 5-day Biological Oxygen Demand (BOD5), Total Suspended Solids (TSS), hardness, the concentration of anions, and metal concentrations were measured according to standard methods as physicochemical parameters. Planktons were considered as biological parameters. Using a plankton net (20 µm mesh size), surface water samples were collected into acid-washed, dried vials and were stored in an ice box during transportation. The diversity and abundance of planktons were identified within 4 days of sample collection using standard manuals of plankton identification under the light microscope. Almost all the measured physicochemical parameters were within the CEA standard limits for aquatic life, the Sri Lanka Standards (SLS) or the World Health Organization's guidelines for drinking water. The concentration of orthophosphate ranged from 0.232 to 0.708 mg L-1, and it exceeded the standard limit for aquatic life according to the CEA guidelines (0.400 mg L-1) at Site 1 and Site 2, where there is high disturbance by cultivation and nearby households. According to the Pearson correlation (significant at p < 0.05), some physicochemical parameters (temperature, DO, TDS, TSS, phosphate, sulphate, chloride, fluoride, and sodium) were significantly correlated with the distribution of some plankton species, such as Aulocoseira, Navicula, Synedra, Pediastrum, Fragilaria, Selenastrum, Oscillataria, Tribonema and Microcystis. Furthermore, species that appear in blooms (Aulocoseira), in organically polluted water (Navicula), and in phosphate-rich eutrophic water (Microcystis) were found, indicating deteriorated water quality in the Mahaweli River due to agricultural activities, solid waste disposal, and the release of domestic effluents. Therefore, it is necessary to improve environmental monitoring and management to control the further deterioration of the water quality of the river.
Keywords: bio indicator, environmental variables, planktons, physicochemical parameters, water quality
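The Pearson-correlation screening between physicochemical parameters and plankton counts can be sketched as below. The observations are synthetic placeholders; only the procedure (pairwise Pearson r with a p < 0.05 significance flag) mirrors the analysis described above.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical monthly observations at the six sites; real values would come from the
# field measurements and plankton counts described above.
rng = np.random.default_rng(0)
n_obs = 18                                 # 6 sites x 3 months
data = {
    "temperature": rng.uniform(24, 30, n_obs),
    "DO": rng.uniform(5, 9, n_obs),
    "phosphate": rng.uniform(0.2, 0.8, n_obs),
}
plankton = {
    "Microcystis": rng.poisson(30, n_obs).astype(float),
    "Navicula": rng.poisson(12, n_obs).astype(float),
}

print(f"{'parameter':12s} {'taxon':12s}  r      p")
for param, x in data.items():
    for taxon, y in plankton.items():
        r, p = pearsonr(x, y)
        flag = "*" if p < 0.05 else ""      # significance at p < 0.05, as in the study
        print(f"{param:12s} {taxon:12s} {r:+.2f}  {p:.3f}{flag}")
```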
Procedia PDF Downloads 106
1343 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given another predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction
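A minimal numerical sketch of the AP-CDE idea, assuming a single untrained affine coupling layer and a crude logistic-regression stand-in for the posterior of x, is shown below in Python. The real method trains the flow end to end with a richer posterior model; this only illustrates the latent split [zₚ, zₙ] and the resulting log-density decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 8, 2            # y is D-dimensional; z_p keeps the first K latent dimensions

# One (untrained) affine coupling layer: z1 = y1, z2 = y2 * exp(s(y1)) + t(y1).
W_s, W_t = rng.normal(0, 0.1, (D // 2, D // 2)), rng.normal(0, 0.1, (D // 2, D // 2))

def coupling_forward(y):
    y1, y2 = y[: D // 2], y[D // 2 :]
    s, t = np.tanh(W_s @ y1), W_t @ y1
    z = np.concatenate([y1, y2 * np.exp(s) + t])
    log_det = s.sum()                       # log|det J| of the coupling transform
    return z, log_det

def log_gaussian(v, mean):
    return -0.5 * np.sum((v - mean) ** 2) - 0.5 * v.size * np.log(2 * np.pi)

def ap_cde_log_density(y, x, beta):
    """log p(y | x) under the augmented prior: z_p is anchored to the posterior of an
    elementary predictive model for x (a logistic-regression score here), z_n ~ N(0, I)."""
    z, log_det = coupling_forward(y)
    z_p, z_n = z[:K], z[K:]
    mu_p = np.full(K, 1.0 / (1.0 + np.exp(-x @ beta)))   # crude stand-in for the posterior
    return log_gaussian(z_p, mu_p) + log_gaussian(z_n, 0.0) + log_det

y = rng.normal(size=D)                 # e.g., a flattened image patch
x = rng.normal(size=3)                 # low-dimensional predictor
beta = rng.normal(size=3)              # logistic-regression coefficients (assumed given)
print("log p(y | x) =", ap_cde_log_density(y, x, beta))
```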
Procedia PDF Downloads 96
1342 Safety Tolerance Zone for Driver-Vehicle-Environment Interactions under Challenging Conditions
Authors: Matjaž Šraml, Marko Renčelj, Tomaž Tollazzi, Chiara Gruden
Abstract:
Road safety is a worldwide issue with numerous and heterogeneous factors influencing it. On one side, the driver state – comprising distraction/inattention, fatigue, drowsiness, extreme emotions, and socio-cultural factors – highly affects road safety. On the other side, the vehicle state has an important role in mitigating (or not) the road risk. Finally, the road environment is still one of the main determinants of road safety, defining driving task complexity. At the same time, thanks to technological development, a lot of detailed data is easily available, creating opportunities for the detection of driver state, vehicle characteristics and road conditions and, consequently, for the design of ad hoc interventions aimed at improving driver performance, increasing awareness and mitigating road risks. This is the challenge faced by the i-DREAMS project. i-DREAMS, which stands for smart Driver and Road Environment Assessment and Monitoring System, is a 3-year project funded by the European Union's Horizon 2020 research and innovation program. It aims to set up a platform to define, develop, test and validate a 'Safety Tolerance Zone' to prevent drivers from getting too close to the boundaries of unsafe operation by mitigating risks in real time and after the trip. After the definition and development of the Safety Tolerance Zone concept and its concretization in an advanced driver-assistance system (ADAS) platform, the system was first tested for 2 months in a driving simulator environment in 5 different countries. After that, naturalistic driving studies started for a 10-month period (comprising a 1-month pilot study, a 3-month baseline study and a 6-month study implementing interventions). Currently, the project team has approved a common evaluation approach and is developing the assessment of the usage and outcomes of the i-DREAMS system, which is yielding positive insights. The i-DREAMS consortium consists of 13 partners: 7 engineering universities and research groups, 4 industry partners and 2 partners (the European Transport Safety Council - ETSC - and POLIS, cities and regions for transport innovation) closely linked to transport safety stakeholders, covering 8 different countries altogether.
Keywords: advanced driver assistant systems, driving simulator, safety tolerance zone, traffic safety
Procedia PDF Downloads 681341 Remote Sensing-Based Prediction of Asymptomatic Rice Blast Disease Using Hyperspectral Spectroradiometry and Spectral Sensitivity Analysis
Authors: Selvaprakash Ramalingam, Rabi N. Sahoo, Dharmendra Saraswat, A. Kumar, Rajeev Ranjan, Joydeep Mukerjee, Viswanathan Chinnasamy, K. K. Chaturvedi, Sanjeev Kumar
Abstract:
Rice is one of the most important staple food crops in the world. Among the various diseases that affect rice crops, rice blast is particularly significant, causing yield and economic losses. While the plant has defense mechanisms in place, such as chemical indicators (proteins, salicylic acid, jasmonic acid, ethylene, and azelaic acid) and resistance genes in certain varieties, susceptible varieties remain vulnerable to this fungal disease. Early prediction of rice blast (RB) disease is crucial, but conventional techniques for early prediction are time-consuming and labor-intensive. Hyperspectral remote sensing techniques hold the potential to predict RB disease at its asymptomatic stage. In this study, we aimed to demonstrate the prediction of RB disease at the asymptomatic stage using a non-imaging hyperspectral ASD spectroradiometer under controlled laboratory conditions. We applied statistical spectral discrimination theory to identify unknown spectra of M. oryzae, the fungus responsible for rice blast disease. The infrared (IR) region was found to be significantly affected by RB disease, as infection alters the absorption, reflection, or emission of infrared radiation by the affected plant tissues. Our research revealed that the protein spectrum in the IR region is impacted by RB disease. We identified strong correlations in the amide group I region around 1064 nm (X) and 1300 nm (Y) using lambda-by-lambda derived-spectra methods for protein detection. During the stages when the disease is developing, typically from day 3 to day 5, the plant's defense mechanisms are not as effective; this is especially true for the PB-1 rice variety, which is highly susceptible to rice blast disease. Consequently, the proteins in the plant are adversely affected during this critical time. The spectral contour plot reveals the highly correlated spectral regions around 1064 nm and 1300 nm associated with RB infection. Based on these spectral sensitivities, we developed new spectral disease indices for predicting different stages of disease emergence. The goal of this research is to lay the foundation for future UAV- and satellite-based studies aimed at long-term monitoring of RB disease.Keywords: rice blast, asymptomatic stage, spectral sensitivity, IR
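As an illustration of how a spectral disease index can be built from the two sensitive bands reported above, the sketch below computes a normalized-difference-type index from reflectance values at 1064 nm and 1300 nm. The normalized-difference form, the function name, and the synthetic spectrum are assumptions for demonstration; the exact index formulation developed in the study is not reproduced here.

```python
# Illustrative band-based spectral disease index using the two RB-sensitive
# wavelengths reported above (1064 nm and 1300 nm). A normalized-difference
# form is assumed purely for illustration.
import numpy as np

def spectral_disease_index(wavelengths, reflectance, band_a=1064.0, band_b=1300.0):
    """Normalized-difference index from the reflectance at two sensitive bands."""
    r_a = np.interp(band_a, wavelengths, reflectance)  # reflectance near 1064 nm
    r_b = np.interp(band_b, wavelengths, reflectance)  # reflectance near 1300 nm
    return (r_a - r_b) / (r_a + r_b)

# Synthetic spectrum spanning the typical ASD spectroradiometer range (350-2500 nm).
wl = np.arange(350, 2501, 1.0)
refl = 0.3 + 0.1 * np.sin(wl / 300.0)  # placeholder reflectance, for demonstration only
print("SDI:", round(spectral_disease_index(wl, refl), 4))
```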
Procedia PDF Downloads 861340 Comparison of Blockchain Ecosystem for Identity Management
Authors: K. S. Suganya, R. Nedunchezhian
Abstract:
In recent years, blockchain technology has been regarded as the most significant discovery of the digital era after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that records users' transaction details in blocks. The global copy of a block is shared among all users of the peer-to-peer network after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making transactions trust-free. Blockchain also resolves the problem of double-spending by using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone for all its transactions. In recent years, it has also found uses in many fields, such as smart contracts, smart-city management, healthcare, etc. Identity management against digital identity theft has become a major concern among financial and other organizations. To counter digital identity theft, blockchain technology can be combined with existing identity management systems to maintain a distributed public ledger of an individual's identity information, such as digital birth certificates, citizenship numbers, bank details, voter details, and driving licenses; once verified on the blockchain, these records become time-stamped, unforgeable and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of the users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems together with their research scope. The paper also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management.Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity
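The hash-chained block structure described above can be sketched in a few lines of Python: each block stores its records, a timestamp, and the previous block's hash, so tampering with any past record invalidates the chain. This is a didactic sketch only; it omits consensus, mining, and the peer-to-peer layer a real identity-management ledger would need, and the record fields shown are hypothetical.

```python
# Minimal hash-chained ledger sketch: altering any stored record breaks the chain.
import hashlib
import json
import time

class Block:
    def __init__(self, records, prev_hash):
        self.records = records          # e.g. identity attributes to be anchored
        self.timestamp = time.time()
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"records": self.records, "ts": self.timestamp, "prev": self.prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

def verify(chain):
    """Check that every block matches its stored hash and links to its parent."""
    for i, block in enumerate(chain):
        if block.hash != block.compute_hash():
            return False
        if i > 0 and block.prev_hash != chain[i - 1].hash:
            return False
    return True

genesis = Block({"doc": "genesis"}, prev_hash="0" * 64)
chain = [genesis, Block({"citizen_id": "XYZ-123", "doc": "birth_certificate"}, genesis.hash)]
print(verify(chain))                        # True
chain[1].records["citizen_id"] = "TAMPERED"
print(verify(chain))                        # False: tampering is detected
```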
Procedia PDF Downloads 1301339 Research on Intercity Travel Mode Choice Behavior Considering Traveler’s Heterogeneity and Psychological Latent Variables
Authors: Yue Huang, Hongcheng Gan
Abstract:
The new urbanization pattern has led to rapid growth in demand for short-distance intercity travel, and the emergence of new travel modes has also increased the variety of intercity travel options. In previous studies on intercity travel mode choice behavior, the impact of the functional amenities of a travel mode and of travelers' long-term personality characteristics has rarely been considered, and empirical results have typically been calibrated using revealed preference (RP) or stated preference (SP) data alone. This study designed a questionnaire combining RP and SP experiments from the perspective of a trip chain that links inner-city and intercity mobility, taking into account the actual conditions of the Huainan-Hefei traffic corridor. On the basis of the RP/SP fusion data, a hybrid choice model accounting for both random taste heterogeneity and psychological characteristics was established to investigate travelers' mode choice behavior for traditional train, high-speed rail, intercity bus, private car, and intercity online car-hailing. The findings show that intercity time and cost exert the greatest influence on mode choice, with significant heterogeneity across the population. Although inner-city cost does not demonstrate a significant influence, inner-city time plays an important role. Service attributes of a travel mode, such as catering and hygiene services as well as free wireless network provision, play only a minor role in mode selection. Finally, our study demonstrates that safety-seeking tendency, hedonism, and introversion all have differential and significant effects on intercity travel mode choice.Keywords: intercity travel mode choice, stated preference survey, hybrid choice model, RP/SP fusion data, psychological latent variable, heterogeneity
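For readers unfamiliar with the underlying discrete-choice machinery, the sketch below estimates a plain multinomial logit on synthetic time/cost attributes by maximum likelihood. It deliberately omits the latent psychological variables, RP/SP fusion, and random taste heterogeneity of the actual hybrid choice model; the alternatives, attribute ranges, and parameter values are illustrative assumptions.

```python
# Much-simplified multinomial logit sketch of the discrete-choice core of a
# hybrid choice model (latent variables and mixing omitted).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_obs, n_alt = 300, 3                                 # e.g. train, bus, car (illustrative)
time_ = rng.uniform(0.5, 3.0, size=(n_obs, n_alt))    # travel time (h)
cost = rng.uniform(10, 120, size=(n_obs, n_alt))      # travel cost (currency units)

true_beta = np.array([-1.2, -0.02])                   # taste parameters used to simulate choices
util = true_beta[0] * time_ + true_beta[1] * cost
prob = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_alt, p=p) for p in prob])

def neg_loglik(beta):
    v = beta[0] * time_ + beta[1] * cost
    v -= v.max(axis=1, keepdims=True)                 # numerical stability
    p = np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(n_obs), choice]).sum()

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("estimated taste parameters:", fit.x)           # should be close to true_beta
```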
Procedia PDF Downloads 1111338 New Platform of Biobased Aromatic Building Blocks for Polymers
Authors: Sylvain Caillol, Maxence Fache, Bernard Boutevin
Abstract:
Recent years have witnessed an increasing demand for renewable resource-derived polymers, owing to growing environmental concern and the restricted availability of petrochemical resources. A great deal of attention has therefore been paid to renewable-resource-derived polymers and to thermosetting materials in particular, since the latter are crosslinked polymers and thus cannot be recycled. Moreover, most thermosetting materials contain aromatic monomers, which confer high mechanical and thermal properties on the network. Therefore, access to biobased, non-harmful, and readily available aromatic monomers is one of the main challenges of the years to come. Starting from phenols available in large volumes from renewable resources, our team designed platforms of chemicals usable for the synthesis of various polymers. One of these phenols, vanillin, which is readily available from lignin, was studied more specifically. Various aromatic building blocks bearing polymerizable functions were synthesized: epoxy, amine, acid, carbonate, alcohol, etc. These vanillin-based monomers can potentially lead to numerous polymers. Epoxy thermosets were taken as an example, since bisphenol A substitution is also an open issue for these polymers. Materials were prepared from the biobased epoxy monomers obtained from vanillin. Their thermo-mechanical properties were investigated, and the effect of the monomer structure was discussed. The properties of the prepared materials were found to be comparable to the current industrial reference, indicating a potential replacement of petrosourced, bisphenol A-based epoxy thermosets by biosourced, vanillin-based ones. The tunability of the final properties was achieved through the choice of monomer and through a well-controlled oligomerization reaction of these monomers. This follows the same strategy as the one currently used in industry, which supports the potential of these vanillin-derived epoxy thermosets as substitutes for their petro-based counterparts.Keywords: lignin, vanillin, epoxy, amine, carbonate
Procedia PDF Downloads 2321337 Effect of a GABA/5-HTP Mixture on Behavioral Changes and Biomodulation in an Invertebrate Model
Authors: Kyungae Jo, Eun Young Kim, Byungsoo Shin, Kwang Soon Shin, Hyung Joo Suh
Abstract:
Gamma-aminobutyric acid (GABA) and 5-hydroxytryptophan (5-HTP) are amino acids obtained from digested nutrients or food ingredients and can potentially be used as a non-pharmacologic treatment for sleep disorders. We previously reported that a GABA/5-HTP mixture is the principal means of sleep-promoting and activity-repressing regulation in the nervous system of D. melanogaster. The two experiments in this study were designed to evaluate the sleep-promoting effect of the GABA/5-HTP mixture and to clarify the ratio underlying its sleep-promoting action in the Drosophila invertebrate model system. Behavioral assays were applied to measure distance traveled, velocity, movement, mobility, turn angle, angular velocity and meander in caffeine-treated flies administered the two amino acids individually or the GABA/5-HTP mixture. In addition, differentially expressed gene (DEG) analyses based on next-generation sequencing (NGS) were applied to investigate the signaling pathways and functional interaction network associated with GABA/5-HTP mixture administration. The GABA/5-HTP mixture resulted in significant differences between groups related to behavior (p < 0.01) and significantly influenced locomotor activity in the awake model (p < 0.05). The sequencing results showed that the molecular functions of various genes are related to motor activity and biological regulation. These results indicate that GABA/5-HTP mixture administration significantly inhibited motor behavior. In this regard, we successfully demonstrated that a GABA/5-HTP mixture modulates locomotor activity to a greater extent than single administration of each amino acid, and that this modulation occurs via the neuronal system, the neurotransmitter release cycle and transmission across chemical synapses.Keywords: sleep, γ-aminobutyric acid, 5-hydroxytryptophan, Drosophila melanogaster
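The locomotion endpoints listed above (distance traveled, velocity, turn angle, angular velocity, meander) can be derived from x/y tracking coordinates; the sketch below shows one way to compute them. The definitions used, such as meander as turn angle per unit distance moved, follow common video-tracking conventions and are assumptions here rather than the authors' exact formulas.

```python
# Sketch of locomotion endpoints computed from tracked x/y positions.
import numpy as np

def locomotion_metrics(xy, dt=1.0):
    """xy: (n, 2) array of tracked positions; dt: sampling interval in seconds."""
    steps = np.diff(xy, axis=0)
    step_dist = np.linalg.norm(steps, axis=1)
    distance = step_dist.sum()                      # total distance traveled
    velocity = step_dist / dt                       # per-frame velocity
    headings = np.arctan2(steps[:, 1], steps[:, 0])
    turn = np.diff(np.unwrap(headings))             # turn angle between steps (rad)
    angular_velocity = turn / dt
    with np.errstate(divide="ignore", invalid="ignore"):
        meander = np.degrees(turn) / step_dist[1:]  # assumed: degrees per unit distance
    return {
        "distance": distance,
        "mean_velocity": velocity.mean(),
        "mean_abs_turn_deg": np.degrees(np.abs(turn)).mean(),
        "mean_abs_angular_velocity": np.abs(angular_velocity).mean(),
        "mean_abs_meander": np.nanmean(np.abs(meander)),
    }

# Toy circular trajectory as a stand-in for tracked fly coordinates.
t = np.linspace(0, 2 * np.pi, 200)
track = np.column_stack([np.cos(t), np.sin(t)])
print(locomotion_metrics(track, dt=0.1))
```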
Procedia PDF Downloads 3091336 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of a treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the technique most commonly used in practice. Logistic regression is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model remains an open question, since the effectiveness of PSM depends on how accurately the PS is estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be more appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with the usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. Various simple and complex models for the treatment were considered under several situations (small/large sample, low/high number of treatment units), and we examined which method leads to better covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust to the inclusion of quadratic and interaction terms and reduces the mean difference between the treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching
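As a rough illustration of the comparison described above, the sketch below estimates propensity scores with a plain linear-logit logistic regression and with an additive spline-basis logistic model used as a GAM-like stand-in, then compares standardized mean differences after 1:1 nearest-neighbour matching. The simulated assignment mechanism, spline settings, and matching scheme are assumptions for demonstration, not the study's design.

```python
# Linear-logit vs GAM-like (additive spline basis) propensity-score estimation,
# compared by covariate balance after 1:1 nearest-neighbour matching on the PS.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 2))
# Non-linear true treatment-assignment mechanism (illustrative).
logit = 0.8 * X[:, 0] ** 2 - 1.0 * np.sin(2 * X[:, 1]) - 0.5
treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))

linear_ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
gam_like = make_pipeline(SplineTransformer(n_knots=8, degree=3),
                         LogisticRegression(max_iter=1000))
spline_ps = gam_like.fit(X, treat).predict_proba(X)[:, 1]

def smd_after_matching(ps, X, treat):
    """1:1 nearest-neighbour matching on the PS, then standardized mean differences."""
    t_idx, c_idx = np.where(treat == 1)[0], np.where(treat == 0)[0]
    matched_c = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]
    diff = X[t_idx].mean(axis=0) - X[matched_c].mean(axis=0)
    pooled_sd = np.sqrt((X[t_idx].var(axis=0) + X[matched_c].var(axis=0)) / 2)
    return np.abs(diff / pooled_sd)

print("SMD, linear-logit PS  :", smd_after_matching(linear_ps, X, treat))
print("SMD, spline (GAM-like):", smd_after_matching(spline_ps, X, treat))
```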
Procedia PDF Downloads 3681335 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast
Authors: Fernando M. Soto, Gaetano Di Mino
Abstract:
The design of an unmodified bituminous mixture and three rubber-aggregate mixtures produced by a dry process (RUMAC) was evaluated using an empirical-analytical approach based on experimental findings obtained in the laboratory with volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and 4% bitumen by total weight of the mix) and three rubberized mixtures produced by the dry process (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave mix design for level 3 (high-traffic) rail lines. The analyzed railway trackbed section had a compacted granular layer of 19 cm, while a thickness of 12 cm was used for the sub-ballast. To evaluate the effect of increasing the specimen density (as a percentage of its theoretical maximum specific gravity), this article illustrates the results obtained from comparative analyses of the influence of varying the binder-rubber percentages within the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. Using the same volumetric compaction methodology, the densification curves resulting from each mixture were studied. The purpose is to obtain an optimal empirical multiplier of the number of gyrations necessary to reach the same compaction energy as in conventional mixtures. Experimental parameters were derived by an empirical-analytical method, evaluating the results obtained from the gyratory compaction of the HMA and the rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in the railway underlayment trackbed. Design optimization was conducted for each mixture, and the volumetric properties were analyzed. An improved and complete procedure for manufacturing, compacting and curing these blends is also provided. By adopting this compaction-increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in conventional mixtures. It is found that, considering the usual bearing-capacity requirements of rail track, the optimal rubber content is 2% (by weight), or 3.95% (by volumetric substitution), with a binder content of 6%.Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design
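The 'beta' factor described above can be illustrated with a small calculation: fit the densification curve %Gmm = a + b·log10(N) to gyratory-compaction data for the reference and rubberized mixtures, then find how many more gyrations the rubberized mix needs to reach the reference density at the design gyration level. The data values below are synthetic placeholders, and the logarithmic curve form is a standard Superpave-style assumption rather than a result taken from this study.

```python
# Densification-curve sketch behind the 'beta' factor: %Gmm = a + b*log10(N),
# fitted separately to a reference and a rubberized mixture.
import numpy as np

def fit_densification(gyrations, pct_gmm):
    """Least-squares fit of %Gmm = a + b*log10(N); returns (a, b)."""
    b, a = np.polyfit(np.log10(gyrations), pct_gmm, 1)
    return a, b

N = np.array([8, 25, 50, 75, 100, 125])                     # gyration counts (example)
ref_gmm = np.array([86.5, 90.2, 92.4, 93.6, 94.5, 95.1])    # reference HMA (synthetic)
rub_gmm = np.array([84.0, 87.4, 89.5, 90.6, 91.4, 92.0])    # rubberized mix (synthetic)

a_ref, b_ref = fit_densification(N, ref_gmm)
a_rub, b_rub = fit_densification(N, rub_gmm)

N_design = 100
target = a_ref + b_ref * np.log10(N_design)      # reference density at N_design
N_equiv = 10 ** ((target - a_rub) / b_rub)       # gyrations needed by the rubber mix
beta = N_equiv / N_design                        # empirical 'beta' multiplier
print(f"beta factor = {beta:.2f} (rubber mix needs ~{N_equiv:.0f} gyrations)")
```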
Procedia PDF Downloads 368