Search results for: information matrix
8924 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With the smart tag and the sensing network, a large quantity of dynamic information is immediately presented in the IoT. Through the internal communication and interaction, meaningful objects provide real-time services for users. Therefore, the service with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed the environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for the inference. The statistical analysis was conducted to achieve the following objectives: First, define user behaviors and predict the user behavior routes with the environment model to analyze user purposes. Second, construct the hierarchical Hidden Markov Model according to the logic framework, and establish the sequential intensity among behaviors to get acquainted with the use and activity fabric of the intelligent environment. Third, establish the intensity of the relation between the probability of objects’ being used and the objects. The indicator can describe the possible limitations of the mechanism. As the process is recorded in the information of the system created in this study, these data can be reused to adjust the procedure of intelligent design services.
Keywords: behavior, big data, hierarchical hidden Markov model, intelligent object
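The inference step described above rests on standard hidden-Markov machinery. As a hedged illustration (a flat two-state HMM rather than the paper's hierarchical model, with invented states, locations, and probabilities), the forward algorithm computes the likelihood of an observed sequence of sensed locations:

```python
# Minimal forward-algorithm sketch for behavior inference in a smart
# environment. A flat HMM stands in for the paper's hierarchical HMM;
# the states ("cooking", "resting"), observations ("kitchen", "sofa"),
# and all probabilities below are illustrative assumptions.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return P(observation sequence) under the HMM via the forward algorithm."""
    # alpha[t][s] = P(obs[0..t], state at t == s)
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        alpha.append({
            s: emit_p[s][o] * sum(alpha[-1][r] * trans_p[r][s] for r in states)
            for s in states
        })
    return sum(alpha[-1].values())

states = ("cooking", "resting")
start_p = {"cooking": 0.5, "resting": 0.5}
trans_p = {"cooking": {"cooking": 0.7, "resting": 0.3},
           "resting": {"cooking": 0.4, "resting": 0.6}}
emit_p = {"cooking": {"kitchen": 0.9, "sofa": 0.1},
          "resting": {"kitchen": 0.2, "sofa": 0.8}}

# Likelihood of the sensed location trace kitchen -> kitchen -> sofa
likelihood = forward(("kitchen", "kitchen", "sofa"), states, start_p, trans_p, emit_p)
```

Comparing such likelihoods across candidate behavior models is one way to select the most probable activity, which is the role the hierarchical model plays in the study.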
Procedia PDF Downloads 233
8923 Game Structure and Spatio-Temporal Action Detection in Soccer Using Graphs and 3D Convolutional Networks
Authors: Jérémie Ochin
Abstract:
Soccer analytics are built on two data sources: the frame-by-frame position of each player on the terrain and the sequences of events, such as ball drive, pass, cross, shot, throw-in... With more than 2000 ball-events per soccer game, their precise and exhaustive annotation, based on a monocular video stream such as a TV broadcast, remains a tedious and costly manual task. State-of-the-art methods for spatio-temporal action detection from a monocular video stream, often based on 3D convolutional neural networks, are close to reaching levels of performance in mean Average Precision (mAP) compatible with the automation of such a task. Nevertheless, to meet their expectation of exhaustiveness in the context of data analytics, such methods must be applied in a regime of high recall and low precision, using low confidence score thresholds. This setting unavoidably leads to the detection of false positives that are the product of the well-documented overconfidence behaviour of neural networks and, in this case, of their limited access to contextual information and understanding of the game: their predictions are highly unstructured. Based on the assumption that professional soccer players’ behaviour, pose, positions and velocity are highly interrelated and locally driven by the player performing a ball-action, it is hypothesized that adding information about surrounding players’ appearance, positions and velocity to the prediction methods can improve their metrics. Several methods are compared to build a proper representation of the game surrounding a player, from handcrafted features of the local graph, based on domain knowledge, to the use of Graph Neural Networks trained in an end-to-end fashion with existing state-of-the-art 3D convolutional neural networks.
It is shown that the inclusion of information regarding surrounding players helps reach higher metrics.
Keywords: fine-grained action recognition, human action recognition, convolutional neural networks, graph neural networks, spatio-temporal action recognition
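The "handcrafted features of the local graph" idea above can be sketched very simply: describe the context of the player performing a ball-action by the relative positions and velocities of the k nearest surrounding players. The coordinates and the choice of k below are assumptions for illustration, not values from the paper:

```python
import math

# Sketch of handcrafted local-graph features around the acting player.
# Each player is (x, y, vx, vy); the feature vector concatenates the
# relative position and relative velocity of the k nearest neighbours.
# All numbers are invented toy data.

def local_graph_features(actor, others, k=2):
    """Return flattened (dx, dy, dvx, dvy) features of the k nearest players."""
    ax, ay, avx, avy = actor
    ranked = sorted(others, key=lambda p: math.hypot(p[0] - ax, p[1] - ay))
    feats = []
    for x, y, vx, vy in ranked[:k]:
        feats.extend([x - ax, y - ay, vx - avx, vy - avy])
    return feats

actor = (50.0, 30.0, 1.0, 0.0)           # x, y, vx, vy of the ball carrier
others = [(52.0, 30.0, 0.5, 0.0),        # nearby opponent
          (70.0, 40.0, 0.0, 1.0),        # distant player
          (49.0, 31.0, 1.0, 0.5)]        # nearby teammate
features = local_graph_features(actor, others, k=2)
```

Such a fixed-length vector can be concatenated with the 3D-CNN features; the GNN variant discussed in the abstract would instead learn this aggregation end-to-end.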
Procedia PDF Downloads 24
8922 A Hybrid Multi-Criteria Hotel Recommender System Using Explicit and Implicit Feedbacks
Authors: Ashkan Ebadi, Adam Krzyzak
Abstract:
Recommender systems, also known as recommender engines, have become an important research area and are now being applied in various fields. In addition, the techniques behind recommender systems have improved over time. In general, such systems help users to find their required products or services (e.g. books, music) by analyzing and aggregating other users’ activities and behavior, mainly in the form of reviews, and making the best recommendations. The recommendations can facilitate users’ decision-making processes. Despite the wide literature on the topic, using multiple data sources of different types as the input has not been widely studied. Recommender systems can benefit from the high availability of digital data to collect input data of different types, which implicitly or explicitly help the system to improve its accuracy. Moreover, most of the existing research in this area is based on single rating measures, in which a single rating is used to link users to items. This paper proposes a highly accurate hotel recommender system, implemented in various layers. Using a multi-aspect rating system and benefitting from large-scale data of different types, the recommender system suggests hotels that are personalized and tailored for the given user. The system employs natural language processing and topic modelling techniques to assess the sentiment of the users’ reviews and extract implicit features. The entire recommender engine contains multiple sub-systems, namely users clustering, a matrix factorization module, and a hybrid recommender system. Each sub-system contributes to the final composite set of recommendations by covering a specific aspect of the problem. The accuracy of the proposed recommender system has been tested intensively, and the results confirm the high performance of the system.
Keywords: tourism, hotel recommender system, hybrid, implicit features
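The matrix factorization sub-system mentioned above can be sketched with plain stochastic gradient descent on user and hotel latent factors. This is a generic, dependency-free sketch: the ratings, latent dimension, and hyperparameters are illustrative, and the paper's actual module may differ.

```python
import random

# Toy matrix factorization for a (user, hotel, rating) table.
# Each rating is approximated by the dot product of a user factor
# vector P[u] and an item factor vector Q[i], learned by SGD with
# L2 regularization. All data and hyperparameters are invented.

def factorize(ratings, n_users, n_items, k=2, steps=2000, lr=0.01, reg=0.02):
    rng = random.Random(0)
    P = [[rng.uniform(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(steps):
        for u, i, r in ratings:
            pred = sum(P[u][f] * Q[i][f] for f in range(k))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

# (user, hotel, rating) triples -- toy data
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 4.0)]
P, Q = factorize(ratings, n_users=3, n_items=3)
pred = sum(P[0][f] * Q[0][f] for f in range(2))  # reconstruct rating (0, 0)
```

Unobserved (user, hotel) cells of the reconstructed matrix are the candidate recommendations; the hybrid layer in the paper then combines them with clustering and review-based signals.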
Procedia PDF Downloads 272
8921 The Impact of PM-Based Regulations on the Concentration and Sources of Fine Organic Carbon in the Los Angeles Basin from 2005 to 2015
Authors: Abdulmalik Altuwayjiri, Milad Pirhadi, Sina Taghvaee, Constantinos Sioutas
Abstract:
A significant portion of PM₂.₅ mass concentration is carbonaceous matter (CM), which exists mainly in the form of organic carbon (OC). Ambient OC originates from a multitude of sources and plays an important role in global climate effects, visibility degradation, and human health. In this study, positive matrix factorization (PMF) was utilized to identify and quantify the long-term contribution of PM₂.₅ sources to the total OC mass concentration in central Los Angeles (CELA) and Riverside (i.e., the receptor site), using the chemical speciation network (CSN) database between 2005 and 2015, a period during which several state and local regulations on tailpipe emissions were implemented in the area. Our PMF resolved five different factors, including tailpipe emissions, non-tailpipe emissions, biomass burning, secondary organic aerosol (SOA), and local industrial activities for both sampling sites. The contribution of vehicular exhaust emissions to the OC mass concentrations significantly decreased from 3.5 µg/m³ in 2005 to 1.5 µg/m³ in 2015 (by about 58%) at CELA, and from 3.3 µg/m³ in 2005 to 1.2 µg/m³ in 2015 (by nearly 62%) at Riverside. Additionally, the SOA contribution to the total OC mass, showing higher levels at the receptor site, increased from 23% in 2005 to 33% and 29% in 2010 and 2015, respectively, in Riverside, whereas the corresponding contribution at the CELA site was 16%, 21% and 19% during the same period. Biomass burning maintained an almost constant relative contribution over the whole period.
Moreover, while the adopted regulations and policies were very effective at reducing the contribution of tailpipe emissions, they have led to an overall increase in the fractional contributions of non-tailpipe emissions to total OC in CELA (about 14%, 28%, and 28% in 2005, 2010 and 2015, respectively) and Riverside (22%, 27% and 26% in 2005, 2010 and 2015), underscoring the necessity to develop equally effective mitigation policies targeting non-tailpipe PM emissions.
Keywords: PM₂.₅, organic carbon, Los Angeles megacity, PMF, source apportionment, non-tailpipe emissions
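PMF, the apportionment method used above, is closely related to non-negative matrix factorization: a samples-by-species matrix X is approximated as G·F with non-negative source contributions G and source profiles F. As a dependency-free stand-in (Lee–Seung multiplicative updates, not EPA PMF itself, and with invented toy data), the decomposition looks like this:

```python
# Toy non-negative factorization X ≈ G · F as a conceptual stand-in
# for PMF source apportionment. G holds per-sample source
# contributions, F holds source chemical profiles. The matrix X is
# invented (rows built from two known source profiles), and the
# deterministic positive initialization is an arbitrary choice.

def nmf(X, k=2, iters=1000):
    n, m = len(X), len(X[0])
    G = [[0.5 + 0.1 * ((i + f) % 3) for f in range(k)] for i in range(n)]
    F = [[0.5 + 0.1 * ((f + j) % 3) for j in range(m)] for f in range(k)]
    for _ in range(iters):
        GF = [[sum(G[i][f] * F[f][j] for f in range(k)) for j in range(m)] for i in range(n)]
        for f in range(k):          # multiplicative update of profiles F
            for j in range(m):
                num = sum(G[i][f] * X[i][j] for i in range(n))
                den = sum(G[i][f] * GF[i][j] for i in range(n)) or 1e-12
                F[f][j] *= num / den
        GF = [[sum(G[i][f] * F[f][j] for f in range(k)) for j in range(m)] for i in range(n)]
        for i in range(n):          # multiplicative update of contributions G
            for f in range(k):
                num = sum(F[f][j] * X[i][j] for j in range(m))
                den = sum(F[f][j] * GF[i][j] for j in range(m)) or 1e-12
                G[i][f] *= num / den
    return G, F

# Toy "samples x species" matrix: rows are mixtures of the profiles
# [2, 1, 0] (a traffic-like source) and [0, 1, 2] (a secondary source).
X = [[2.0, 1.0, 0.0], [4.0, 2.0, 0.0], [0.0, 1.0, 2.0], [2.0, 2.0, 2.0]]
G, F = nmf(X, k=2)
recon_error = sum(abs(X[i][j] - sum(G[i][f] * F[f][j] for f in range(2)))
                  for i in range(4) for j in range(3))
```

The non-negativity constraint is what makes the resolved factors physically interpretable as emission sources, which is why PMF (rather than, say, PCA) is standard in this literature.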
Procedia PDF Downloads 198
8920 Technology for Enhancing the Learning and Teaching Experience in Higher Education
Authors: Sara M. Ismael, Ali H. Al-Badi
Abstract:
The rapid development and growth of technology have changed the way educators and learners obtain information. Technology has created a new world of collaboration and communication among people. Incorporating new technology into the teaching process can enhance learning outcomes. Billions of individuals across the world are now connected, cooperating and contributing their knowledge and intelligence. Time is no longer wasted waiting until the teacher is ready to share information, as learners can go online and get it immediately. The objectives of this paper are to understand the reasons why changes in teaching and learning methods are necessary, to find ways of improving them, and to investigate the challenges that present themselves in the adoption of new ICT tools in higher education institutes. To achieve these objectives, two primary research methods were used: questionnaires distributed among students at higher educational institutes, and multiple interviews with faculty members (teachers) from different colleges and universities, conducted to find out why teaching and learning methodology should change. The findings show that both learners and educators agree that educational technology plays a significant role in enhancing instructors’ teaching style and students’ overall learning experience; however, time constraints, privacy issues, and not being provided with enough up-to-date technology do create some challenges.
Keywords: e-books, educational technology, educators, e-learning, learners, social media, Web 2.0, LMS
Procedia PDF Downloads 276
8919 Examining the Relationship between Concussion and Neurodegenerative Disorders: A Review on Amyotrophic Lateral Sclerosis and Alzheimer’s Disease
Authors: Edward Poluyi, Eghosa Morgan, Charles Poluyi, Chibuikem Ikwuegbuenyi, Grace Imaguezegie
Abstract:
Background: Current epidemiological studies have examined the associations between moderate and severe traumatic brain injury (TBI) and the risk of developing neurodegenerative diseases. Concussion, also known as mild TBI (mTBI), is however quite distinct from moderate or severe TBI. Only a few studies in this burgeoning area have examined concussion—especially repetitive episodes—and neurodegenerative diseases. Thus, no definite relationship has been established between them. Objectives: This review will discuss the available literature linking concussion with amyotrophic lateral sclerosis (ALS) and Alzheimer’s disease (AD). Materials and Methods: Given the complexity of this subject, a realist review methodology was selected, which includes clarifying the scope and developing a theoretical framework, developing a search strategy, selection and appraisal, data extraction, and synthesis. A detailed literature matrix was set out in order to obtain relevant and recent findings on this topic. Results: Presently, there is no objective clinical test for the diagnosis of concussion because the features are less obvious on physical examination. The absence of an objective test for diagnosing concussion sometimes leads to skepticism when confirming the presence or absence of concussion. Intriguingly, several possible explanations have been proposed for the pathological mechanisms linking concussion to the development of some neurodegenerative disorders (such as ALS and AD), but the two major events are deposition of tau proteins (abnormal microtubule proteins) and neuroinflammation, which ranges from glutamate excitotoxicity pathways and inflammatory pathways (which lead to a rise in the metabolic demands of microglial cells and neurons) to mitochondrial dysfunction via the oxidative pathways.
Keywords: amyotrophic lateral sclerosis, Alzheimer's disease, mild traumatic brain injury, neurodegeneration
Procedia PDF Downloads 89
8918 Development of a Shape Based Estimation Technology Using Terrestrial Laser Scanning
Authors: Gichun Cha, Byoungjoon Yu, Jihwan Park, Minsoo Park, Junghyun Im, Sehwan Park, Sujung Sin, Seunghee Park
Abstract:
The goal of this research is to estimate structural shape change using terrestrial laser scanning. This study develops a data reduction and shape change estimation algorithm for large-capacity scan data. The point cloud of the scan data was converted to voxels and sampled. A shape estimation technique is studied to detect changes in structural patterns, such as skyscrapers, bridges, and tunnels, based on large point cloud data. The point cloud analysis applies the octree data structure to speed up post-processing for change detection. The point cloud data is the relative representative value of shape information, and it is used as a model for detecting point cloud changes in a data structure. The aim of the shape estimation model is to develop a technology that can detect not only normal but also immediate structural changes in the event of disasters such as earthquakes, typhoons, and fires, thereby preventing major accidents caused by aging and disasters. The study is expected to improve the efficiency of structural health monitoring and maintenance.
Keywords: terrestrial laser scanning, point cloud, shape information model, displacement measurement
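The voxel-conversion-and-sampling step described above can be sketched as voxel downsampling: snap each point to a coarse 3D grid and keep one representative (the centroid) per occupied cell. The voxel size and the toy cloud below are assumptions for illustration:

```python
# Minimal voxel downsampling of a 3D point cloud. Points falling in
# the same voxel are replaced by their centroid, which is the basic
# data-reduction idea before octree-based change detection.
# The voxel size (1.0 m) and the points are invented.

def voxel_downsample(points, voxel=1.0):
    cells = {}
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells.setdefault(key, []).append((x, y, z))
    # one centroid per occupied voxel
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in cells.values()]

cloud = [(0.1, 0.2, 0.0), (0.4, 0.1, 0.2),   # two points in the same voxel
         (2.5, 0.0, 0.0),                    # a second voxel
         (0.2, 2.2, 0.1)]                    # a third voxel
reduced = voxel_downsample(cloud, voxel=1.0)
```

A production pipeline would build an octree over the reduced cloud so that only voxels whose occupancy changed between scans need detailed comparison.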
Procedia PDF Downloads 235
8917 The Impact of Digital Transformation on the Construction Industry in Kuwait
Authors: M. Aladwani, Y. Alarfaj
Abstract:
The construction industry is currently experiencing a shift towards digitisation. This transformation is driven by the adoption of technologies like Building Information Modelling (BIM), drones, and augmented reality (AR). These advancements are revolutionizing the processes of designing, constructing, and operating projects. BIM, for instance, is a new way of communicating and exploiting technology such as software and machinery. It enables the creation of a replica or virtual model of buildings or infrastructure projects. It facilitates simulating construction procedures, identifying issues beforehand, and optimizing designs accordingly. Drones are another tool in this revolution, as they can be utilized for site surveys, inspections, and even deliveries. Moreover, AR technology provides real-time information to workers involved in the project. Implementing these technologies in the construction industry has brought about improvements in efficiency, safety measures, and sustainable practices. BIM helps minimize rework and waste materials, while drones contribute to safety by reducing workers' exposure to hazardous areas. Additionally, AR plays a role in worker safety by delivering instructions and guidance during operations. Although the digital transformation within the construction industry is still in its early stages, it holds the potential to reshape project delivery methods entirely. By embracing these technologies, construction companies can boost their profitability while simultaneously reducing their environmental impact and ensuring safer practices.
Keywords: BIM, digital construction, construction technologies, digital transformation
Procedia PDF Downloads 86
8916 Performance of On-site Earthquake Early Warning Systems for Different Sensor Locations
Authors: Ting-Yu Hsu, Shyu-Yu Wu, Shieh-Kung Huang, Hung-Wei Chiang, Kung-Chun Lu, Pei-Yang Lin, Kuo-Liang Wen
Abstract:
Regional earthquake early warning (EEW) systems are not suitable for Taiwan, as most destructive seismic hazards arise from inland earthquakes. These can reduce to practically nothing the lead-time a regional EEW system provides before a destructive earthquake wave arrives. On the other hand, an on-site EEW system can provide more lead-time in a region closer to an epicenter, since only seismic information from the target site is required. Instead of leveraging the information of several stations, the on-site system extracts some P-wave features from the first few seconds of vertical ground acceleration at a single station and predicts the oncoming earthquake intensity at the same station according to these features. Since seismometers can be triggered by non-earthquake events, such as the passing of a truck or other human activities, a seismometer was installed at three different locations on the same site to reduce the likelihood of false alarms, and the performance of the EEW system for these three sensor locations was discussed. The results show that the location on the ground of the first floor of a school building may be a good choice, since false alarms could be reduced and the cost of installation and maintenance is the lowest.
Keywords: earthquake early warning, on-site, seismometer location, support vector machine
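The keywords indicate a support vector machine maps early P-wave features to an intensity prediction. As a dependency-free stand-in for the SVM, the same idea can be sketched with a tiny linear classifier on two commonly used P-wave features; the feature names (peak acceleration Pa, predominant period Tc) and every training value below are invented for illustration:

```python
# Perceptron stand-in for the paper's SVM: classify the first seconds
# of vertical ground acceleration into "strong shaking expected" (+1)
# or "not" (-1) from two illustrative P-wave features. All data,
# feature choices, and hyperparameters are assumptions.

def train_perceptron(data, epochs=50, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (pa, tc), label in data:
            pred = 1 if w[0] * pa + w[1] * tc + b > 0 else -1
            if pred != label:                 # mistake-driven update
                w[0] += lr * label * pa
                w[1] += lr * label * tc
                b += lr * label
    return w, b

train = [((0.8, 1.2), 1), ((1.1, 1.5), 1), ((0.9, 1.4), 1),    # damaging events
         ((0.1, 0.3), -1), ((0.2, 0.4), -1), ((0.15, 0.2), -1)]  # weak / noise
w, b = train_perceptron(train)
alarm = 1 if w[0] * 1.0 + w[1] * 1.3 + b > 0 else -1  # new P-wave onset
```

An SVM replaces the mistake-driven rule with a maximum-margin objective, which generally handles borderline events (the high-recall regime discussed above) more gracefully.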
Procedia PDF Downloads 244
8915 A Situational Awareness Map for Allocating Relief Resources after Earthquake Occurrence
Authors: Hamid Reza Ranjbar, Ali Reza Azmoude Ardalan, Hamid Dehghani, Mohammad Reza Sarajian
Abstract:
Natural disasters are unexpected events that are difficult to predict. Earthquake is one of the most devastating disasters among natural hazards, with a high mortality rate and a wide extent of damage. After an earthquake occurs, managing the critical situation and allocating limited relief resources require complete awareness of the damaged area. The information for allocating relief teams should be as precise and reliable as possible, and be presented at the appropriate time after the earthquake. This type of information was previously presented in the form of a damage map; directing relief teams by using a damage map mostly leads to time wasted in finding living occupants under the rubble. In this research, a proposed standard for prioritizing damaged buildings in terms of the need for rescue and relief was presented. This standard prioritizes damaged buildings into four levels of priority, including very high, high, moderate and low, by considering key parameters such as the type of land use, activity time and inactivity time of each land use, time of earthquake occurrence, and distinct index. A priority map based on the proposed standard could be a basis for guiding relief teams towards the areas with high relief priority.
Keywords: damage map, GIS, priority map, USAR
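The prioritization idea above — combine a building's land use with whether the earthquake struck during that land use's activity hours — can be sketched as a small scoring function. All weights, hours, and thresholds below are hypothetical placeholders; the paper's standard defines its own parameters:

```python
# Hypothetical four-level priority scoring for damaged buildings.
# A land-use weight approximates expected occupancy, and a bonus is
# added if the quake hit during the land use's activity hours.
# Every weight, hour range, and threshold here is invented.

LAND_USE_WEIGHT = {"residential": 3, "school": 4, "office": 2, "warehouse": 1}
ACTIVE_HOURS = {"residential": range(0, 24), "school": range(8, 16),
                "office": range(9, 18), "warehouse": range(6, 22)}

def priority(land_use, quake_hour):
    score = LAND_USE_WEIGHT[land_use]
    if quake_hour in ACTIVE_HOURS[land_use]:   # building likely occupied
        score += 2
    if score >= 6:
        return "very high"
    if score >= 4:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

p_school_day = priority("school", 10)     # quake during class hours
p_school_night = priority("school", 2)    # quake while school is empty
```

In a GIS workflow, this score would be computed per building footprint and rendered as the priority map that guides USAR teams.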
Procedia PDF Downloads 404
8914 Power of Intuition: An Inner Faculty of Mind
Authors: Rohan Shinde, Shreya Chugh
Abstract:
Imagine a world where innovation is natural and not unusual. Imagine a world that works on inner wisdom rather than just information. Children live in such a world, which is full of possibilities. If they learn to listen to their own intuition, genius would be common. We are all born with a natural intuitive ability to perceive beyond our senses. This is especially visible in children, whose minds are still fresh, less obsessive and more in tune with nature. As we grow older, our modern lifestyle overloads us with information and stresses our minds, which obscures this innate intuitive capacity. The Art of Living Prajñā Yoga (Intuition Process), a 2-day program introduced for kids and teenagers between 5 and 18 years of age, helps to kindle this intuitive ability and build confidence to act on their gut feeling. This program helps them to tap into the intuitive abilities of the mind, which is demonstrated by them seeing colors, reading text and identifying pictures with eyes closed. To make these faculties blossom and become more established, the mind needs proper nurturing and nourishment, which is done in the Intuition Process. A research study has been conducted to measure the abilities manifested in students who have taken this program, on different parameters such as confidence level, clarity of mind, problem-solving skills, focus, and increase in overall performance. The results have been plotted on a graph, and conclusions are made on the effectiveness of the Intuition Process. Experiences of a few students with special abilities have also been documented.
Keywords: abilities, Art of Living, intuition, mind
Procedia PDF Downloads 221
8913 Relationship between Matrilin-3 (MATN-3) Gene Single Nucleotide Six Polymorphism, Transforming Growth Factor Beta 2 and Radiographic Grading in Primary Osteoarthritis
Authors: Heba Esaily, Rawhia Eledl, Daila Aboelela, Rasha Noreldin
Abstract:
Objective: To assess the serum level of transforming growth factor beta 2 (TGF-β2) and the MATN3 SNP6 polymorphism in osteoarthritic patients. Background: Osteoarthritis (OA) is a musculoskeletal disease characterized by pain and joint stiffness. TGF-β2 is involved in chondrogenesis and osteogenesis, and it has been found that MATN3 gene and protein expression correlates with the extent of tissue damage in OA. Findings suggest that regulation of MATN3 expression is essential for maintenance of the cartilage extracellular matrix microenvironment. Subjects and Methods: 72 cases of primary OA (56 with knee OA and 16 with generalized OA) were compared with 18 healthy controls. Radiographs were scored with the Kellgren-Lawrence scale. Serum TGF-β2 was measured by ELISA, levels of the marker were correlated with the radiographic grading of disease, and the MATN3 SNP6 polymorphism was determined by PCR-RFLP. Results: The MATN3 SNP6 polymorphism and the serum level of TGF-β2 were higher in OA compared with controls. The NN genotype and N allele frequencies were higher in patients with OA compared with controls, and higher in knee osteoarthritis than in generalized OA. There was a significant positive correlation between the level of TGF-β2 and radiographic grading in the knee OA group, but no correlation in generalized OA. Conclusion: The MATN3 SNP6 polymorphism and TGF-β2 are implicated in the pathogenesis of osteoarthritis. The association of the N/N genotype with primary osteoarthritis emphasizes the need for a prospective study with a larger sample size to confirm the results of the present study.
Keywords: Matrilin-3, transforming growth factor beta 2, primary osteoarthritis, knee osteoarthritis
Procedia PDF Downloads 269
8912 Beyond Information Failure and Misleading Beliefs in Conditional Cash Transfer Programs: A Qualitative Account of Structural Barriers Explaining Why the Poor Do Not Invest in Human Capital in Northern Mexico
Authors: Francisco Fernandez de Castro
Abstract:
The Conditional Cash Transfer (CCT) model gives monetary transfers to beneficiary families on the condition that they take specific education and health actions. According to the economic rationale of CCTs, the poor need incentives to invest in their human capital because they are trapped by a lack of information and misleading beliefs. If left to their own decisions, the poor will not be able to choose what is in their best interests. The basic assumption of the CCT model is that the poor need incentives to take care of their own education and health-nutrition. Because of the incentives (income cash transfers and conditionalities), beneficiary families are supposed to attend doctor visits and health talks, and children are supposed to stay in school. These incentivized behaviors would produce outcomes such as better health and a higher level of education, which in turn would reduce poverty. Based on a grounded theory approach and a two-year period of qualitative data collection in northern Mexico, this study shows that this explanation is incomplete. In addition to information failure and inadequate beliefs, there are structural barriers in the everyday life of households that make health-nutrition and education investments difficult. In-depth interviews and observation work showed that the program takes for granted the local conditions in which beneficiary families are supposed to fulfill their co-responsibilities. The data challenged the program’s assumptions and unveiled local obstacles not contemplated in the program’s design. These findings have policy and research implications for the CCT agenda. They bring elements for later programming, given the gap between the CCT strategy as envisioned by policy designers and the program that beneficiary families experience on the ground.
As for research consequences, these findings suggest new avenues for scholarly work regarding the causal mechanisms and social processes explaining CCT outcomes.
Keywords: conditional cash transfers, incentives, poverty, structural barriers
Procedia PDF Downloads 113
8911 Linkages between Postponement Strategies and Flexibility in Organizations
Authors: Polycarpe Feussi
Abstract:
Globalization and increasing technological and customer changes, amongst other drivers, result in higher levels of uncertainty and unpredictability for organizations. To cope with this uncertain and fast-changing economic and business environment, organizations need to innovate in order to achieve flexibility. In simple terms, organizations must develop strategies that give them the ability to provide horizontal information connections across the supply chain, so as to create and deliver products that meet customer needs by synchronizing customer demands with product creation. The generated information will create efficiency and effectiveness throughout the whole supply chain regarding production, storage, and distribution, as well as eliminating redundant activities and reducing response time. In an integrated supply chain, spanning activities include coordination with distributors and suppliers. This paper explains how flexibility can be achieved in an organization through postponement strategies. In order to achieve the above, a thorough literature review was conducted via a search of online websites that contain material from scientific journal databases, articles, and textbooks on the subject of postponement and flexibility. The findings of the research are presented in the last part of the paper. The first part introduces the concept of postponement and its importance in supply chain management. The second part of the paper describes the methodology used in the process of writing the paper.
Keywords: postponement strategies, supply chain management, flexibility, logistics
Procedia PDF Downloads 193
8910 Outdoor Anomaly Detection with a Spectroscopic Line Detector
Authors: O. J. G. Somsen
Abstract:
One of the tasks of optical surveillance is to detect anomalies in large amounts of image data. However, if the size of the anomaly is very small, limited information is available to distinguish it from the surrounding environment. Spectral detection provides a useful source of additional information and may help to detect anomalies with a size of a few pixels or less. Unfortunately, spectral cameras are expensive because of the difficulty of separating two spatial dimensions in addition to one spectral dimension. We investigate the possibility of modifying a simpler spectral line detector for outdoor detection. This may be especially useful if the area of interest forms a line, such as the horizon. We use a monochrome CCD that also enables detection into the near infrared. A simple camera is attached to the setup to determine which part of the environment is spectrally imaged. Our preliminary results indicate that sensitive detection of very small targets is indeed possible. Spectra could be taken from the various targets by averaging columns in the line image. By imaging a set of lines of various widths, we found narrow lines that could not be seen in the color image but remained visible in the spectral line image. A simultaneous analysis of the entire spectra can produce better results than visual inspection of the line spectral image. We are presently developing calibration targets for spatial and spectral focusing and alignment with the spatial camera. This will yield improved results and wider use in outdoor applications.
Keywords: anomaly detection, spectroscopic line imaging, image analysis, outdoor detection
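The spectrum-extraction step ("averaging columns in the line image") can be sketched in a few lines, under one reading of that sentence: the line image has one spatial axis (rows, positions along the imaged line) and one spectral axis (columns, wavelength bins), so averaging the rows that cover one target yields that target's spectrum. The toy image below is invented:

```python
# Sketch of per-target spectrum extraction from a spectral line image.
# Assumption: rows index spatial position along the imaged line and
# columns index wavelength bins; averaging the rows belonging to one
# target gives its spectrum. The 3x3 toy image is invented data.

def target_spectrum(line_image, row_start, row_end):
    """Average rows [row_start, row_end) into one spectrum."""
    rows = line_image[row_start:row_end]
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

line_image = [[1.0, 2.0, 3.0],     # target pixel 1
              [3.0, 4.0, 5.0],     # target pixel 2
              [10.0, 10.0, 10.0]]  # background pixel
spectrum = target_spectrum(line_image, 0, 2)  # spectrum of the 2-pixel target
```

Averaging over the spatial extent of the target raises the signal-to-noise ratio, which is what makes the sub-pixel targets discussed above detectable in the spectral domain.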
Procedia PDF Downloads 481
8909 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk
Authors: Ariful Islam, Showkat Ahmad Lone
Abstract:
Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and is frequently preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests for competing risks, based on an adaptive type-I progressive hybrid censoring criterion. The life data of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum-likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators is evaluated and investigated by using a simulation algorithm.
Keywords: adaptive progressive hybrid censoring, competing risk, Mukherjee-Islam distribution, partially accelerated life testing, simulation study
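The simulation-based evaluation of maximum-likelihood estimators can be illustrated in a much simpler setting than the paper's (no censoring, no competing risks, no step stress): for the power-function density f(x) = a·x^(a-1) on (0, 1), the MLE has the closed form â = -n / Σ ln xᵢ, so a simulated sample recovers the true parameter. Everything here is a generic textbook illustration, not the paper's model:

```python
import math
import random

# Complete-sample MLE illustration for the power-function distribution
# f(x) = a * x**(a - 1) on (0, 1). The log-likelihood is
# n*ln(a) + (a - 1) * sum(ln x_i), maximized at a_hat = -n / sum(ln x_i).
# The sample is simulated by inverse-CDF sampling: F(x) = x**a, so
# x = u**(1/a) for uniform u. Seed and sample size are arbitrary.

def power_mle(sample):
    return -len(sample) / sum(math.log(x) for x in sample)

rng = random.Random(42)
true_a = 3.0
sample = [rng.random() ** (1.0 / true_a) for _ in range(20000)]
a_hat = power_mle(sample)
```

The paper's adaptive censoring scheme complicates the likelihood (censored units contribute survival terms, and competing risks split the hazard), which is why its estimates require numerical maximization and Monte Carlo evaluation rather than a closed form.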
Procedia PDF Downloads 347
8908 An Automated Business Process Management for Smart Medical Records
Authors: K. Malak, A. Nourah, S.Liyakathunisa
Abstract:
Nowadays, healthcare services face many challenges, as they are becoming more complex and more in demand. Every detail of a patient’s interactions with health care providers is maintained in Electronic Health Records (EHR) and healthcare information systems (HIS). However, most of the existing systems are focused on documenting what happens in the manual health care process, rather than on providing the highest quality patient care. Healthcare business processes and stakeholders can no longer rely on manual processes; to provide better patient care and efficient utilization of resources, healthcare processes must be automated wherever possible. In this research, a detailed survey and analysis is performed on the existing health care systems in Saudi Arabia, and an automated smart medical healthcare business process model is proposed. Business process management methods and rules are followed in discovery, information collection, analysis, redesign, implementation, and performance improvement analysis in terms of time and cost. From the simulation results, it is evident that our proposed smart medical records system can improve the quality of the service by reducing time and cost and increasing efficiency.
Keywords: business process management, electronic health records, efficiency, cost, time
Procedia PDF Downloads 341
8907 Designing and Implementing a Tourist-Guide Web Service Based on Volunteer Geographic Information Using Open-Source Technologies
Authors: Javad Sadidi, Ehsan Babaei, Hani Rezayan
Abstract:
The advent of Web 2.0 makes it possible to scale down the costs of data collection and mapping, specifically if the process is done by volunteers. Every volunteer can be thought of as a free and ubiquitous sensor collecting spatial, descriptive, and multimedia data for tourist services. The lack of large-scale information, such as real-time climate and weather conditions, population density, and other related data, can be considered one of the important challenges in developing countries for tourists trying to make the best decision in terms of the time and place of travel. The current research aims to design and implement a spatiotemporal web map service using volunteer-submitted data. The service acts as a tourist-guide service in which tourists can search for places of interest based on their requested time of travel. To design the service, a three-tier architecture, comprising data, logical processing, and presentation tiers, has been utilized. For implementing the service, open-source software programs, client- and server-side programming technologies (such as OpenLayers2, AJAX, and PHP), Geoserver as a map server, and the Web Feature Service (WFS) standard have been used. The result is two distinct browser-based services: one for submitting spatial, descriptive, and multimedia volunteer data, and another for tourists and local officials. Local officials confirm the veracity of the volunteer-submitted information. In the tourist interface, a spatiotemporal search engine has been designed to enable tourists to find a tourist place based on province, city, and location at a specific time of interest.
Implementing the tourist-guide service with this methodology has the following effects: current tourists participate in a free data collection and sharing process for future tourists; data are shared and accessed in real time by all; travel destinations are no longer selected blindly; and, significantly, the cost of providing such services decreases.
Keywords: VGI, tourism, spatiotemporal, browser-based, web mapping
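As an illustration of the WFS standard mentioned above, the sketch below assembles a GetFeature request URL of the kind a GeoServer-backed client might issue; the server address, layer name, and filter attributes are hypothetical, not taken from the paper.

```python
from urllib.parse import urlencode

# Hypothetical endpoint, layer, and attribute names for illustration only.
base = "https://example.org/geoserver/wfs"
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "tourism:poi",                       # assumed volunteer-data layer
    "outputFormat": "application/json",
    "cql_filter": "province='Tehran' AND month=7",   # assumed spatiotemporal filter
}
url = base + "?" + urlencode(params)
print(url.startswith("https://example.org/geoserver/wfs?service=WFS"))
```

A browser client such as OpenLayers would issue this request and render the returned features on the map.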
Procedia PDF Downloads 98
8906 Mixed Effects Models for Short-Term Load Forecasting for the Spanish Regions: Castilla-Leon, Castilla-La Mancha and Andalucia
Authors: C. Senabre, S. Valero, M. Lopez, E. Velasco, M. Sanchez
Abstract:
This paper focuses on an application of linear mixed models to short-term load forecasting. The challenge of this research is to improve a model currently in operation at the Spanish transmission system operator, programmed by us and based on linear autoregressive techniques and neural networks. The forecasting system currently forecasts each region within the Spanish grid separately, even though the load in each region is affected by the same factors in a similar way. The load forecasting system has been verified in this work using real data from a utility. This research integrates several regions into a single linear mixed model so that information from other regions can be exploited: the system first learns the general behaviors present in all regions, and then identifies the individual deviations of each region. The technique can be especially useful when modeling the effect of special days, for which little information from the past is available. The three most relevant regions of the system have been used to test the model, focusing on special days, and the model improves the performance of both currently working models used as benchmarks. A range of comparisons with different forecasting models has been conducted. The forecasting results demonstrate the superiority of the proposed methodology.
Keywords: short-term load forecasting, mixed effects models, neural networks
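The shared-plus-individual decomposition described above can be sketched on synthetic data: a fixed effect (here a temperature slope common to all regions) is estimated by pooling, and per-region intercept deviations play the role of random effects. This is our own toy illustration, not the paper's model; the region names and numbers are invented.

```python
import numpy as np

# Toy mixed-effects view: load = fixed effect shared by all regions
# (a common temperature slope) + a small random per-region deviation.
rng = np.random.default_rng(0)
region_dev = np.array([1.2, 0.8, -0.5])          # assumed true deviations
temp = rng.uniform(5.0, 35.0, size=(3, 200))     # shared driver per region
load = 100.0 + 2.0 * temp + region_dev[:, None] + rng.normal(0.0, 1.0, (3, 200))

# Fixed effect: one slope estimated by pooling all three regions.
x = temp.ravel() - temp.mean()
y = load.ravel() - load.mean()
slope = (x @ y) / (x @ x)

# Random effects: per-region intercept deviations, shrunk toward zero.
resid = load - slope * temp
dev_hat = 0.9 * (resid.mean(axis=1) - resid.mean())
print(slope)  # close to the shared slope 2.0
```

A full treatment would estimate the shrinkage from the variance components (as a mixed-model solver does), but the pooling of scarce per-region information, e.g. for special days, works the same way.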
Procedia PDF Downloads 189
8905 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm on forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under two ANN structural configurations: (1) a single-hidden-layer and (2) a double-hidden-layer feedforward backpropagation network. The results revealed that the GDM algorithm, with its adaptive learning capability, generally used a relatively shorter time in both the training and validation phases than the LM and Br algorithms, although learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiency measured by the coefficient of efficiency (CE) was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96% for the training and validation phases, respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on these findings, the adoption of ANN for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix, takes protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure, forecast quality, and mitigation of network overfitting.
On the whole, it is recommended that evaluation should also consider the implications of (i) data quality and quantity and (ii) transfer functions on the overall network forecast performance.
Keywords: streamflow, neural network, optimisation, algorithm
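The per-iteration cost argument above can be made concrete on a toy problem. The sketch below (our own, not the paper's streamflow ANN) runs gradient descent with and without momentum on an ill-conditioned quadratic loss: momentum accelerates progress along the shallow direction using only first-order information, whereas a curvature-based scheme like Levenberg-Marquardt would pay for an approximate Hessian at every step.

```python
import numpy as np

# Gradient descent with an optional momentum term on a fixed gradient field.
def descend(grad, w0, lr=0.01, steps=200, momentum=0.0):
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = momentum * v - lr * grad(w)  # velocity accumulates past gradients
        w = w + v
    return w

grad = lambda w: np.array([2.0 * w[0], 20.0 * w[1]])  # curvatures 2 and 20
plain = descend(grad, [5.0, 5.0])
with_momentum = descend(grad, [5.0, 5.0], momentum=0.9)
print(np.linalg.norm(with_momentum) < np.linalg.norm(plain))  # True: closer to 0
```

The momentum run converges geometrically in both directions at the same rate, while plain gradient descent crawls along the low-curvature axis, mirroring why GDM can finish training sooner per unit of compute.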
Procedia PDF Downloads 152
8904 Humoral and Cellular Immune Responses to Major Human Cytomegalovirus Antigens in Mice Model
Authors: S. Essa, H. Safar, R. Raghupathy
Abstract:
Human cytomegalovirus (CMV) continues to be a source of severe complications for immunologically immature and immunocompromised hosts. An effective CMV vaccine that diminishes CMV disease in transplant patients and prevents congenital infection remains of high importance, as no approved vaccine exists. Though the exact components of the defense mechanisms are unidentified, virus-specific antibodies and Th1/Th2 cytokine responses have been implicated in controlling viral infections. The CMV envelope glycoprotein B (UL55/gB), the matrix proteins (UL83/pp65, UL99/pp28, UL32/pp150), and the assembly protein UL80a/pp38 are known targets of antiviral immune responses. In this study, mice were immunized with HCMV antigens (UL32/pp150, UL80a/pp38, UL99/pp28, and UL83/pp65), and serum samples were collected and evaluated for virus-specific antibody responses. Moreover, splenocytes were collected, stimulated, and assessed for cytokine responses. The results demonstrated a CMV-antigen-specific antibody response to pp38 and pp65 (E/C > 2.0). The highest titers were detected with pp38 (average E/C 16.275), followed by pp65 (average E/C 7.72). Compared to control cells, splenocytes from pp38-immunized mice gave significantly higher concentrations of GM-CSF, IFN-γ, IL-2, IL-4, IL-5, and IL-17A (P < 0.05). Splenocytes from pp65-immunized mice also gave significantly higher concentrations of GM-CSF, IFN-γ, IL-2, IL-4, IL-10, IL-12, IL-17A, and TNF-α. The identification of target CMV peptides through virus-specific antibody and cytokine responses is vital for understanding the protective immune mechanisms during CMV infection and for identifying appropriate viral antigens for novel vaccines.Keywords: hepatitis C virus, peripheral blood mononuclear cells, neutrophils, cytokines
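The E/C positivity criterion quoted above (an antigen is judged seropositive when the experimental-to-control ratio exceeds 2.0) can be sketched as follows; the optical-density readings are invented for illustration and merely chosen to reproduce the quoted ratios.

```python
# Hypothetical ELISA optical densities: (E = immunized serum, C = control serum).
readings = {"pp38": (1.302, 0.080), "pp65": (0.618, 0.080), "pp150": (0.120, 0.080)}

def ec_ratio(e, c):
    return e / c

# Positive antigens are those whose E/C ratio exceeds the 2.0 cut-off.
positive = {ag: ec_ratio(e, c) > 2.0 for ag, (e, c) in readings.items()}
print(positive)  # pp38 and pp65 positive, pp150 not
```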
Procedia PDF Downloads 139
8903 Quasi-Solid-State Electrochromic Device Based on Poly(Methyl Methacrylate) (PMMA)/Succinonitrile Gel Polymer Electrolyte
Authors: Jen-Yuan Wang, Min-Chuan Wang, Der-Jun Jan
Abstract:
Polymer electrolytes can be classified into four major categories: solid polymer electrolytes (SPEs), gel polymer electrolytes (GPEs), polyelectrolytes, and composite polymer electrolytes. SPEs suffer from low ionic conductivity at room temperature, while the main problems of GPEs are poor thermal stability and mechanical properties. In this study, a GPE containing PMMA and succinonitrile is prepared to solve these problems and is applied to the assembly of a quasi-solid-state electrochromic device (ECD). In the polymer electrolyte, poly(methyl methacrylate) (PMMA) is the polymer matrix and propylene carbonate (PC) is used as the plasticizer. To enhance the mechanical properties of the GPE, succinonitrile (SN) is introduced as an additive. For the electrochromic materials, tungsten oxide (WO3) is used as the cathodic coloring film, fabricated by pulsed dc magnetron reactive sputtering. For the anodic coloring material, Prussian blue nanoparticles (PBNPs) are synthesized and coated on transparent Sn-doped indium oxide (ITO) glass. The thicknesses of the ITO, WO3, and PB films are 110, 170, and 200 nm, respectively, and the size of the ECD is 5×5 cm². The effect of introducing SN into the GPE is discussed by observing the electrochromic behavior of the WO3-PB ECD. The composition ratio of PC to SN is also investigated by measuring the ionic conductivity. The optimized PC:SN ratio is 4:1, under which the ionic conductivity is 6.34×10⁻⁵ S·cm⁻¹, higher than that of PMMA/PC (1.35×10⁻⁶ S·cm⁻¹) and PMMA/EC/PC (4.52×10⁻⁶ S·cm⁻¹). The quasi-solid-state ECD fabricated with the PMMA/SN-based GPE shows an optical contrast of ca. 53% at 690 nm. The optical transmittance of the ECD can be reversibly modulated from 72% (bleached) to 19% (darkened) by applying potentials of 1.5 and -2.2 V, respectively.
During the durability test, the optical contrast of the ECD remains 44.5% after 2400 cycles, which is 83% of the original value.
Keywords: electrochromism, tungsten oxide, Prussian blue, poly(methyl methacrylate), succinonitrile
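The contrast figures above follow from simple transmittance arithmetic; the sketch below reproduces it. Because it uses the rounded percentages quoted in the abstract, the retention it computes lands near, but not exactly on, the stated 83%.

```python
# Optical contrast = bleached transmittance - darkened transmittance (at 690 nm).
t_bleached, t_darkened = 72.0, 19.0   # percent, as quoted in the abstract
contrast = t_bleached - t_darkened    # initial contrast, in percentage points

# Fraction of the initial contrast surviving 2400 switching cycles.
retention = 44.5 / contrast
print(contrast, retention)
```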
Procedia PDF Downloads 298
8902 Forecasting Future Society to Explore Promising Security Technologies
Authors: Jeonghwan Jeon, Mintak Han, Youngjun Kim
Abstract:
Due to the rapid development of information and communication technology (ICT), a substantial transformation is currently happening in society. As the range of intelligent technologies and services continuously expands, ‘things’ are becoming capable of communicating with one another and even with people. However, this “Internet of Things” has a technical weakness: a great amount of the information transferred in real time may be widely exposed to security threats. Users’ personal data are a typical example facing a serious security threat. The threats to security will diversify and arise more frequently as the next generation of unfamiliar technologies develops. Moreover, as society becomes increasingly complex, security vulnerability will increase as well. In the existing literature, a considerable number of private and public reports that forecast future society have been published as a precedent step for selecting future technologies and establishing strategies for competitiveness. Although there are previous studies that forecast security technology, they have focused only on technical issues and overlooked the interrelationships between security technology and social factors. Therefore, investigations of future security threats, and of the security technology able to protect people from them, are required. In response, this study aims to derive potential security threats associated with the development of technology and to explore the security technologies that can protect against them. To do this, first of all, private and public reports that forecast the future, and online documents from technology-related communities, are collected. By analyzing the data, future issues are extracted and categorized in terms of STEEP (Society, Technology, Economy, Environment, and Politics), as well as security.
Second, the components of potential security threats are developed based on the classified future issues. Then, the points at which the security threats may occur (for example, a mobile payment system based on fingerprint-scan technology) are identified. Lastly, alternatives that prevent the potential security threats are proposed by matching security threats with these points and investigating related security technologies in patent data. The proposed approach can identify latent ICT-related security threats and provide guidelines in a ‘problem - alternative’ form by linking threat points with security technologies.
Keywords: future society, information and communication technology, security technology, technology forecasting
Procedia PDF Downloads 468
8901 Study on the Effect of Pre-Operative Patient Education on Post-Operative Outcomes
Authors: Chaudhary Itisha, Shankar Manu
Abstract:
Patient satisfaction represents a crucial aspect of the evaluation of health care services. Preoperative teaching provides the patient with pertinent information concerning the surgical process and the intended surgical procedure, as well as anticipated patient behavior (anxiety, fear), expected sensations, and probable outcomes. Although patient education is part of accreditation protocols, it is not uniform at most places. The aim of this study was to assess the benefit of preoperative patient education on selected post-operative outcome parameters, mainly post-operative pain scores, requirement for additional analgesia, return to activities of daily living, and overall patient satisfaction, and to try to standardize a few education protocols. Dependent variables were measured before and after the treatment in a study population of 302 volunteers. The educational intervention was provided by the investigator in the preoperative period to the study group through personal counseling; an information booklet containing detailed information was also provided. Statistical analysis was done using the chi-square test, Mann-Whitney U test, and Fisher's exact test on a total of 302 subjects. P < 0.05 was considered the level of statistical significance, and P < 0.01 was considered highly significant. This study suggested that patients who are given structured, individualized, and elaborate preoperative education and counseling have a better ability to cope with postoperative pain in the immediate post-operative period. However, there was not much difference once the patients had almost completely recovered. There was no difference in the requirement for additional analgesia between the two groups. There was a positive effect of preoperative counseling on the expected return to activities of daily living and a normal work schedule; however, no effect was observed on activities in the immediate post-operative period.
There was no difference in the overall satisfaction score between the two groups of patients. This study thus concludes that there is a positive benefit, as suggested by the results, of pre-operative patient education. Although the differences in the various parameters studied might not be significant over the long term, they definitely point towards the benefits of preoperative patient education.
Keywords: patient education, post-operative pain, postoperative outcomes, patient satisfaction
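The Mann-Whitney U test named above compares two independent groups without assuming normality, which suits ordinal pain scores. A minimal sketch of the U statistic on invented toy scores (not the study's data) is:

```python
# U counts, over all cross-group pairs, how often a value in group A
# is smaller than a value in group B (ties count one half).
def mann_whitney_u(a, b):
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical pain scores: educated group (lower) vs. control group.
educated = [2, 3, 3, 4]
control = [5, 6, 4, 7]
u = mann_whitney_u(educated, control)
print(u)  # 15.5, well above n1*n2/2 = 8: scores tend to be lower in group A
```

A statistics library would additionally convert U into a P value; the counting definition above is what that computation starts from.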
Procedia PDF Downloads 339
8900 Effect of Microstructure on Wear Resistance of Polycrystalline Diamond Composite Cutter of Bit
Authors: Fanyuan Shao, Wei Liu, Deli Gao
Abstract:
A polycrystalline diamond composite (PDC) cutter is made of diamond powder as the raw material, with cobalt metal or non-metallic elements as a binder, assembled with a WC cemented-carbide matrix and sintered at high temperature and high pressure. PDC bits equipped with PDC cutters are widely used in oil and gas drilling because of their high hardness, good wear resistance, and excellent impact toughness. The PDC cutter is the main cutting element of the bit and strongly affects the bit's service life. In this work, the wear resistance of the PDC cutter is measured by cutting granite on a vertical turret lathe (VTL). This experiment achieves long-distance cutting, yielding the relationship between the wear resistance of the PDC cutter and cutting distance, which is closer to the real drilling situation. A load cell and a 3D optical profiler were used to obtain the cutting forces and wear area, respectively, which also characterize the damage and wear of the PDC cutter. PDC cutters were cut via electrical discharge machining (EDM) and then flattened and polished. A scanning electron microscope (SEM) was used to observe the distribution of the cobalt binder and the size of the diamond particles in the PDC cutter. The cutting experiments show that the wear area of the PDC cutter has a good linear relationship with cutting distance; at the same time, the larger the wear area, the greater the cutting forces required to maintain the same cutting state. The size and distribution of diamond particles in the polycrystalline diamond layer have a great influence on the wear resistance of the diamond layer, and a PDC cutter with fine diamond grains shows better wear resistance than one with coarse grains. The deep leaching process helps reduce the effect of the cobalt binder on the wear resistance of the polycrystalline diamond layer.
This experimental study provides an important basis for the application of PDC cutters in oil and gas drilling.
Keywords: polycrystalline diamond compact, scanning electron microscope, wear resistance, cutting distance
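The linear wear-area-versus-distance relationship reported above is the kind of trend an ordinary least-squares fit recovers; the (distance, wear area) pairs below are invented for illustration, not measured values from the paper.

```python
# Hypothetical measurements: cutting distance (m) vs. wear area (mm^2).
dist = [100.0, 300.0, 500.0, 700.0, 900.0]
area = [0.21, 0.59, 1.01, 1.40, 1.79]

# Ordinary least-squares slope and intercept of area against distance.
n = len(dist)
mx, my = sum(dist) / n, sum(area) / n
slope = sum((x - mx) * (y - my) for x, y in zip(dist, area)) / \
        sum((x - mx) ** 2 for x in dist)
intercept = my - slope * mx
print(slope, intercept)  # wear rate per metre, plus a small offset
```

The fitted slope is the wear rate; comparing slopes between fine-grain and coarse-grain cutters is one way to quantify the microstructure effect the abstract describes.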
Procedia PDF Downloads 198
8899 BIM Data and Digital Twin Framework: Preserving the Past and Predicting the Future
Authors: Mazharuddin Syed Ahmed
Abstract:
This research presents a framework used to develop the Ara Polytechnic College of Architecture Studies building “Kahukura”, which is Green Building certified. The framework integrates the development of a smart-building digital twin by utilizing Building Information Modelling (BIM) and its BIM maturity levels, including Levels of Development (LOD), the eight dimensions of BIM, Heritage BIM (H-BIM), and Facility Management BIM (FM BIM). The research also outlines a structured approach to building performance analysis and integration with the circular economy, encapsulated within a five-level digital twin framework. Starting with Level 1, the Descriptive Twin provides a live, editable visual replica of the built asset, allowing for specific data inclusion and extraction. Advancing to Level 2, the Informative Twin integrates operational and sensory data, enhancing data verification and system integration. At Level 3, the Predictive Twin utilizes operational data to generate insights and proactive management suggestions. Progressing to Level 4, the Comprehensive Twin simulates future scenarios, enabling robust “what-if” analyses. Finally, Level 5, the Autonomous Twin, represents the pinnacle of digital twin evolution, capable of learning and autonomously acting on behalf of users.
Keywords: building information modelling, circular economy integration, digital twin, predictive analytics
Procedia PDF Downloads 43
8898 Mechanical Behavior of Laminated Glass Cylindrical Shell with Hinged Free Boundary Conditions
Authors: Ebru Dural, M. Zulfu Asık
Abstract:
Laminated glass is a kind of safety glass made by 'sandwiching' a polyvinyl butyral (PVB) interlayer between two glass sheets. When the glass breaks, the interlayer sticks the sheets together, reducing the hazard of sharp projectiles during natural and man-made disasters. Laminated glass is widely applied in the building, architecture, automotive, and transport industries. Laminated glass units can easily undergo large displacements even under their own weight, so capturing their true behavior requires large-deflection theory, which represents the nonlinear response. In this study, a nonlinear mathematical model is developed for the analysis of a laminated glass cylindrical shell that is free in the radial direction and restrained in the axial direction. The results are verified against experiments carried out on laminated glass cylindrical shells. The behavior of the laminated composite cylindrical shell is represented by five partial differential equations: four represent the axial and radial displacements, and the fifth the transverse deflection of the unit. The governing partial differential equations are derived by employing variational principles and the minimum potential energy concept. The finite difference method is employed to solve the coupled differential equations: they are first converted into a system of matrix equations, and an iterative procedure is then applied. The iterative procedure is necessary since the equations are coupled, and convergence problems in the generated sequence are overcome by employing a variable under-relaxation factor.
The procedure developed to solve the differential equations requires not only less storage but also less calculation time, which is a substantial advantage in computational mechanics problems.
Keywords: laminated glass, mathematical model, nonlinear behavior, PVB
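The under-relaxation device mentioned above can be sketched on a scalar fixed-point problem standing in for the coupled matrix equations; the function and relaxation factor here are ours, chosen only to show the mechanism.

```python
import math

# Damped fixed-point iteration: x_new = x + w * (G(x) - x), with 0 < w <= 1.
# Reducing w stabilises iterations that would otherwise oscillate or diverge.
def solve_relaxed(G, x0, w=0.5, tol=1e-12, max_iter=10000):
    x = x0
    for _ in range(max_iter):
        x_new = x + w * (G(x) - x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# Fixed point of G(x) = cos(x), the Dottie number ~ 0.739085.
root = solve_relaxed(math.cos, 1.0)
print(round(root, 6))
```

In the shell problem the update is applied component-wise to the displacement vectors, and the factor w is varied between sweeps when the residual stops decreasing, which is the "variable under-relaxation" the abstract refers to.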
Procedia PDF Downloads 319
8897 Data Management System for Environmental Remediation
Authors: Elizaveta Petelina, Anton Sizo
Abstract:
Environmental remediation projects deal with a wide spectrum of data, including data collected during site assessment, execution of remediation activities, and environmental monitoring. Appropriate data management is therefore required as a key factor for well-grounded decision making. The Environmental Data Management System (EDMS) was developed to address all necessary data management aspects, including efficient data handling and data interoperability, access to historical and current data, spatial and temporal analysis, 2D and 3D data visualization, mapping, and data sharing. The system focuses on supporting well-grounded decision making regarding required mitigation measures and assessment of remediation success. The EDMS is a combination of enterprise- and desktop-level data management and Geographic Information System (GIS) tools assembled to assist environmental remediation, project planning and evaluation, and environmental monitoring of mine sites. The EDMS consists of seven main components: a Geodatabase containing a spatial database to store and query spatially distributed data; a GIS and Web GIS component combining desktop and server-based GIS solutions; a Field Data Collection component containing tools for field work; a Quality Assurance (QA)/Quality Control (QC) component combining operational procedures for QA with measures for QC; a Data Import and Export component with tools and templates to support the project data flow; a Lab Data component providing a connection between the EDMS and laboratory information management systems; and a Reporting component with server-based services for real-time report generation. The EDMS has been successfully implemented for Project CLEANS (Clean-up of Abandoned Northern Mines), a multi-year, multimillion-dollar project aimed at assessing and reclaiming 37 uranium mine sites in northern Saskatchewan, Canada.
The EDMS has effectively facilitated integrated decision-making for CLEANS project managers and transparency amongst stakeholders.
Keywords: data management, environmental remediation, geographic information system, GIS, decision making
Procedia PDF Downloads 161
8896 Geographical Information System for Sustainable Management of Water Resources
Authors: Vakhtang Geladze, Nana Bolashvili, Nino Machavariani, Tamazi Karalashvili, Nino Chikhradze, Davit Kartvelishvili
Abstract:
Fresh water deficit is one of the most important global problems today. In countries with scarce water resources, it often becomes a cause of armed conflicts. The peaceful settlement of relations connected with the management and consumption of water, within and beyond a country's frontiers, is an important guarantee of regional stability. This problem is urgent in Georgia as well, because its water bodies are located at the borders and the transit run-off amounts to 12% of the total. Fresh water resources are among the major natural resources of Georgia. Despite this, the water supply of the population in the eastern part of the country is an acute issue. The southeastern part of the country, notable for its deficiency of water resources, was selected for the research. The region tends toward desertification, which aggravates the fresh water problem even more and may presumably lead to migration of the local population from the area. The purpose of the study was the creation of a geographical information system (GIS) of water resources. The GIS contains layers of different content (water resources, springs, channels, hydrological stations, population water supply, etc.). The results of the work provide an opportunity to identify the resource potential of the region, to control and manage it, to carry out monitoring, and to plan the regional economy.
Keywords: desertification, GIS, irrigation, water resources
Procedia PDF Downloads 693
8895 Influence of Hygro-Thermo-Mechanical Loading on Buckling and Vibrational Behavior of FG-CNT Composite Beam with Temperature Dependent Characteristics
Authors: Puneet Kumar, Jonnalagadda Srinivas
Abstract:
The authors report vibration and buckling analyses of functionally graded carbon nanotube-polymer composite (FG-CNTPC) beams under hygro-thermo-mechanical environments using higher-order shear deformation theory. The material properties of CNTs and the polymer matrix are often affected by temperature and moisture content. A micromechanical model with an agglomeration effect is employed to compute the elastic, thermal, and moisture properties of the composite beam. The governing differential equation of the FG-CNTPC beam is developed using higher-order shear deformation theory to account for shear deformation effects. The elastic, thermal, and hygroscopic strain terms are derived from variational principles, and the thermal and hygroscopic loads are determined by considering uniform, linear, and sinusoidal variations of temperature and moisture content through the thickness. The differential equations of motion are formulated as an eigenvalue problem using appropriate displacement fields and solved by finite element modeling. The obtained natural frequencies and critical buckling loads show good agreement with published data. Numerical illustrations elaborate the dynamic as well as buckling behavior under uniaxial load for different environmental conditions, boundary conditions, volume fraction distribution profiles, and beam slenderness ratios. Further comparisons are shown for different boundary conditions, temperatures, degrees of moisture content, volume fractions, agglomeration of CNTs, and beam slenderness ratios under different shear deformation theories.
Keywords: hygrothermal effect, free vibration, buckling load, agglomeration
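After finite element assembly, the eigenvalue formulation mentioned above takes the form K·x = ω²·M·x, with K the stiffness matrix (including thermal and hygroscopic contributions) and M the mass matrix. A minimal sketch on a toy two-degree-of-freedom system (the matrices are illustrative, not the paper's assembled model) is:

```python
import numpy as np

K = np.array([[2.0, -1.0],
              [-1.0, 1.0]])   # toy stiffness matrix
M = np.eye(2)                  # toy (lumped, unit) mass matrix

# Generalized eigenproblem K x = w^2 M x, reduced to a standard one via M^-1 K.
omega_sq = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
freqs = np.sqrt(omega_sq)      # natural circular frequencies
print(freqs)                   # ~ [0.618, 1.618] for this golden-ratio pair
```

In the hygro-thermo-mechanical setting, raising temperature or moisture content softens K, so the computed frequencies and the critical buckling load (obtained from the analogous problem with the geometric stiffness matrix) both decrease, which is the trend the abstract's parametric studies trace.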
Procedia PDF Downloads 264