Search results for: statistical machine translation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7116

306 Greener Minds: Understanding Students' Perceptions of Environmental Sustainability in Higher Education, Sultan Qaboos University

Authors: Aisha Alshdefat, Lina Shakman

Abstract:

Objective: With environmental sustainability (ES) emerging as a critical concern due to its global impact, higher education institutions play a vital role in promoting ES through curricula and campus operations. This study examines the perceptions, attitudes, and behaviors related to ES among students at Sultan Qaboos University, aiming to identify areas for improved integration of sustainability practices in higher education. Design: A descriptive cross-sectional study, conducted via an online questionnaire, examined perceptions and attitudes toward environmental sustainability among students at Sultan Qaboos University, Muscat, Oman. The survey instrument employed a 5-point Likert scale to assess six key areas: awareness, concern, attitude, willingness to participate, current behaviors, and recommendations for enhancing campus sustainability initiatives. A convenience sample of 200 students was initially targeted, with 157 students ultimately responding between September and November 2024. Eligible participants were undergraduate and graduate students who consented after being fully informed of the study objectives and design; those who withdrew or refused participation were excluded. Following ethical approval, data collection was carried out through Google Forms. SPSS Version 23 was used for descriptive and inferential analyses, including Pearson’s correlation, chi-square, and Fisher's exact test, to explore associations among key variables. Findings: Preliminary analysis indicates that 68% of participants are familiar with core environmental sustainability (ES) concepts, including the Sustainable Development Goals (SDGs), and express high concern regarding environmental issues. However, only 47% report active involvement in campus-led ES initiatives, underscoring an engagement gap. Over 70% of respondents believe that sustainability should be prioritized as a university policy, and 62% expressed willingness to participate in additional ES-related programs. Consistent with this interest, 58% advocated for more sustainability-focused courses in their curriculum, suggesting current offerings are insufficient. Statistical analysis revealed a significant positive correlation between ES awareness and willingness to engage in sustainable practices (p < 0.05). These findings highlight the need for expanded institutional efforts, including targeted programs and curriculum integration, to cultivate a more sustainability-centered culture among students. Conclusion: The results emphasize that while students demonstrate a strong foundational awareness of ES, greater institutional support is essential to transform this awareness into active engagement. More comprehensive integration of sustainability within academic programs and campus life could substantially enhance students’ involvement and commitment to environmental stewardship.
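
The association tests named above are standard; a minimal sketch with scipy, assuming hypothetical Likert-scale columns `awareness` and `willingness` (the actual questionnaire items are not reproduced in the abstract):

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point Likert responses for two of the six survey domains;
# the real item wording and coding are not given in the abstract.
rng = np.random.default_rng(0)
awareness = rng.integers(1, 6, size=157)             # ES awareness scores
willingness = np.clip(awareness + rng.integers(-1, 2, size=157), 1, 5)

# Pearson's correlation between awareness and willingness to engage
r, p = stats.pearsonr(awareness, willingness)
print(f"r = {r:.2f}, p = {p:.4f}")                   # significant if p < 0.05

# Chi-square test of independence on a 2x2 split (high vs. low on each scale)
table = np.array([[np.sum((awareness >= 4) & (willingness >= 4)),
                   np.sum((awareness >= 4) & (willingness < 4))],
                  [np.sum((awareness < 4) & (willingness >= 4)),
                   np.sum((awareness < 4) & (willingness < 4))]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_chi:.4f}")
```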

Keywords: environmental sustainability, higher education, students, perceptions, Sultan Qaboos University

Procedia PDF Downloads 11
305 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping

Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo

Abstract:

Conventional methods for nutrient soil mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Networks (ANN) scheme was used to predict macronutrient values at un-sampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project at Selangor of Malaysia were used. Soil maps were produced by the Kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the Kriging method. A set of parameters was defined to measure the similarity of the maps that were generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from the smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN predictive samples to achieve the specified level of accuracy.
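
The virtual-sample idea can be illustrated with a back-propagation network; a minimal sketch, assuming soil nitrogen is predicted from sample coordinates alone (the actual Sawah Sempadan survey data and predictor set are not shown here):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the block C soil survey: (x, y) positions and
# measured nitrogen at a limited number of sampled points.
rng = np.random.default_rng(1)
xy_sampled = rng.uniform(0, 1000, size=(118, 2))          # sampled locations (m)
n_measured = 0.2 + 0.001 * xy_sampled[:, 0] + rng.normal(0, 0.01, 118)

scaler = StandardScaler().fit(xy_sampled)

# Back-propagation multilayer feed-forward network, as described in the paper
ann = MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000, random_state=1)
ann.fit(scaler.transform(xy_sampled), n_measured)

# Predict "virtual" nitrogen values at un-sampled locations; these virtual
# points would then be pooled with the real ones as input to kriging.
xy_unsampled = rng.uniform(0, 1000, size=(118, 2))
n_virtual = ann.predict(scaler.transform(xy_unsampled))
print(n_virtual[:5])
```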

Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping

Procedia PDF Downloads 70
304 Deficient Multisensory Integration with Concomitant Resting-State Connectivity in Adult Attention Deficit/Hyperactivity Disorder (ADHD)

Authors: Marcel Schulze, Behrem Aslan, Silke Lux, Alexandra Philipsen

Abstract:

Objective: Patients with Attention Deficit/Hyperactivity Disorder (ADHD) often report that they are being flooded by sensory impressions. Studies investigating sensory processing show hypersensitivity to sensory inputs across the senses in children and adults with ADHD. The auditory modality in particular is affected by deficient acoustical inhibition and modulation of signals. While studying unimodal signal processing is relevant and well suited to a controlled laboratory environment, everyday life situations are multimodal. A complex interplay of the senses is necessary to form a unified percept. To achieve this, the unimodal sensory modalities are bound together in a process called multisensory integration (MI). In the current study, we investigate MI in an adult ADHD sample using the McGurk effect – a well-known illusion in which incongruent speech-like phonemes lead, in case of successful integration, to a newly perceived phoneme via late top-down attentional allocation. In ADHD, neuronal dysregulation at rest, e.g., aberrant within- or between-network functional connectivity, may also account for difficulties in integrating across the senses. Therefore, the current study includes resting-state functional connectivity to investigate a possible relation between deficient network connectivity and the ability to integrate stimuli. Method: Twenty-five ADHD patients (6 females, age: 30.08 (SD: 9.3) years) and twenty-four healthy controls (9 females; age: 26.88 (SD: 6.3) years) were recruited. MI was examined using the McGurk effect, where – in case of successful MI – incongruent speech-like phonemes between the visual and auditory modality lead to the perception of a new phoneme. The Mann-Whitney U test was applied to assess statistical differences between groups. Echo-planar imaging resting-state functional MRI was acquired on a 3.0 Tesla Siemens Magnetom MR scanner. A seed-to-voxel analysis was realized using the CONN toolbox. Results: Susceptibility to the McGurk effect was significantly lower for ADHD patients (ADHD Mdn: 5.83%, Controls Mdn: 44.2%; U = 160.5, p = 0.022, r = -0.34). When ADHD patients integrated phonemes, reaction times were significantly longer (ADHD Mdn: 1260 ms, Controls Mdn: 582 ms; U = 41.0, p < .001, r = -0.56). In functional connectivity, the medio-temporal gyrus (seed) was negatively associated with the primary auditory cortex, inferior frontal gyrus, precentral gyrus, and fusiform gyrus. Conclusion: MI seems to be deficient in ADHD patients for stimuli that need top-down attentional allocation. This finding is supported by stronger functional connectivity from unimodal sensory areas to polymodal MI convergence zones for complex stimuli in ADHD patients.
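
The group comparison reported above corresponds to a standard Mann-Whitney U test; a minimal sketch with made-up susceptibility percentages (the study's raw data are not reproduced here, and the effect-size convention shown is one common choice):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-subject McGurk susceptibility (% fused percepts)
adhd = np.array([0, 2, 5, 6, 8, 10, 12, 15, 20, 25, 30, 40])
controls = np.array([10, 20, 30, 35, 40, 45, 50, 55, 60, 70, 80, 90])

u, p = mannwhitneyu(adhd, controls, alternative="two-sided")

# Rank-biserial correlation as an effect size; the sign convention
# depends on group ordering (the paper derives r from the test statistic)
r = 1 - 2 * u / (len(adhd) * len(controls))
print(f"U = {u:.1f}, p = {p:.3f}, r = {r:.2f}")
```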

Keywords: attention-deficit hyperactivity disorder, audiovisual integration, McGurk-effect, resting-state functional connectivity

Procedia PDF Downloads 127
303 The ‘Fun, Move, Play’ Project: Qualitative and Quantitative Findings from Irish Primary School Children (6-8 Years), Parents and Teachers

Authors: Jemma McGourty, Brid Delahunt, Fiona Hackett, Sharon Courtney, Richard English, Graham Russell, Sinéad O’Connor

Abstract:

Fundamental Movement Skills (FMS) mastery is considered essential for children’s ongoing, meaningful engagement in Physical Activity (PA). There has been a dearth of Irish research on baseline FMS and their development by means of intervention in young primary school children. In addition, as children’s participation in PA is heavily influenced by both parents and teachers, it is imperative to understand their attitudes and perceptions towards PA participation and its promotion in children. The ‘Fun, Move, Play’ Project investigated the effect of a 6-week play-based PA intervention on primary school children’s (aged 6-8 years) FMS while also exploring the attitudes and perceptions of their parents and teachers towards PA participation. The FMS intervention utilised a pre-post quasi-experimental design to determine the effect of a 6-week play-based PA intervention (devised from the iCoach Kids Programme) on 176 primary school children’s FMS (90 girls and 86 boys; M = 7.2 years; SD = 0.48). Objective measures of 7 FMS (run, skip, vertical jump, static balance, stationary dribble, catch, kick) were made using a combination of the TGMD2 and Get Skilled, Get Active resources. One hundred parents (87 mothers; 13 fathers; M = 36 years; SD = 5.45) and 90 teachers (67 females; 23 males) completed surveys investigating their attitudes and perceptions towards PA participation. In addition, 19 of these parents and 9 of these teachers participated in semi-structured qualitative interviews to explore, in more depth, their views and perceptions of PA participation. Both the FMS data set and survey responses were analysed in SPSS version 23 using appropriate statistical tests. A thematic analysis framework was used to analyse the qualitative findings. A significant improvement was observed in the children’s overall FMS score pre-post intervention (t = 16.67; df = 175; p < 0.001), and there were also significant improvements in each of the seven individual FMS measured, pre-post intervention. Findings from the parent surveys and interviews indicated that parents had positive attitudes towards PA, viewed it as important and supported their child’s PA participation. However, a lack of knowledge regarding the amount and intensity of PA that children should participate in emerged as a recurrent finding. There was also a significant positive correlation between the PA levels of parents and their children (r = .41; n = 100; p < .001). Arising from the teachers’ surveys and interviews was a positive attitude towards PA and the impact that it has on a child’s health and well-being. Teachers also reported feeling more confident teaching certain aspects of the PE curriculum (games and sports) compared to others (gymnastics, dance), where they appreciated working with specialist practitioners. Conclusion: A short-term PA intervention has a positive effect on children’s FMS. While parents are supportive of their child’s PA participation, there is a knowledge gap regarding National PA guidelines for children. Teachers appreciate the importance of PA in children, but face a number of challenges in its implementation and promotion.
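
A minimal sketch of the two quantitative analyses named above (paired pre-post t-test and parent-child correlation), using hypothetical scores rather than the project's TGMD2/Get Skilled data:

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(2)

# Hypothetical overall FMS scores for 176 children before and after
# the 6-week play-based intervention
pre = rng.normal(30, 5, 176)
post = pre + rng.normal(4, 2, 176)     # average improvement built in

t, p = ttest_rel(post, pre)            # paired (pre-post) t-test, df = 175
print(f"t = {t:.2f}, df = {len(pre) - 1}, p = {p:.4f}")

# Parent-child PA correlation, as reported for the 100 surveyed parents
parent_pa = rng.normal(150, 40, 100)   # hypothetical weekly PA minutes
child_pa = 0.4 * parent_pa + rng.normal(60, 30, 100)
r, p_r = pearsonr(parent_pa, child_pa)
print(f"r = {r:.2f}, p = {p_r:.4f}")
```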

Keywords: fundamental movement skills, parents’ attitudes to physical activity, short-term intervention, teachers’ attitudes to physical activity

Procedia PDF Downloads 179
302 Strategies for Synchronizing Chocolate Conching Data Using Dynamic Time Warping

Authors: Fernanda A. P. Peres, Thiago N. Peres, Flavio S. Fogliatto, Michel J. Anzanello

Abstract:

Batch processes are widely used in the food industry and have an important role in the production of high added-value products, such as chocolate. Process performance is usually described by variables that are monitored as the batch progresses. Data arising from these processes are likely to display a strong correlation-autocorrelation structure, and are usually monitored using control charts based on multiway principal components analysis (MPCA). Process control of a new batch is carried out by comparing the trajectories of its relevant process variables with those in a reference set of batches that yielded products within specifications; it is clear that proper determination of the reference set is key to correctly signaling non-conforming batches in such quality control schemes. In chocolate manufacturing, misclassification of non-conforming batches in the conching phase may lead to significant financial losses. In this context, the accuracy of process control grows in relevance. In addition, the main assumption in MPCA-based monitoring strategies is that all batches are synchronized in duration, both the new batch being monitored and those in the reference set. This assumption is often not satisfied in the chocolate manufacturing process. As a consequence, traditional techniques such as MPCA-based charts are not suitable for process control and monitoring. To address that issue, the objective of this work is to compare the performance of three dynamic time warping (DTW) methods in the alignment and synchronization of chocolate conching process variables’ trajectories, aimed at properly determining the reference distribution for multivariate statistical process control. The power of classification of batches into two categories (conforming and non-conforming) was evaluated using the k-nearest neighbor (KNN) algorithm. Real data from a milk chocolate conching process were collected and the following variables were monitored over time: frequency of soybean lecithin dosage, rotation speed of the shovels, current of the main motor of the conche, and chocolate temperature. A set of 62 batches with durations between 495 and 1,170 minutes was considered; 53% of the batches were known to be conforming based on lab test results and experts’ evaluations. Results showed that all three DTW methods tested were able to align and synchronize the conching dataset. However, synchronized datasets obtained from these methods performed differently when input to the KNN classification algorithm. The method of Kassidas, MacGregor and Taylor (KMT) was deemed the best DTW method for aligning and synchronizing a milk chocolate conching dataset, presenting 93.7% accuracy, 97.2% sensitivity and 90.3% specificity in batch classification, and was considered the best option to determine the reference set for the milk chocolate dataset. This method was recommended due to the lowest number of iterations required to achieve convergence and the highest average accuracy in the testing portion using the KNN classification technique.
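
The core shared by all three methods is the classic DTW recursion; a minimal sketch of DTW alignment between two variable-duration batch trajectories (not the authors' KMT implementation):

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic-programming DTW between two 1-D trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two chocolate-temperature trajectories of different batch durations
ref = np.sin(np.linspace(0, 3, 495))        # reference batch, 495 min
new = np.sin(np.linspace(0, 3, 700))        # new batch, 700 min

print(f"DTW distance: {dtw_distance(ref, new):.3f}")
# A KNN classifier can then label a batch conforming/non-conforming by
# majority vote over the k batches with the smallest DTW distances.
```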

Keywords: batch process monitoring, chocolate conching, dynamic time warping, reference set distribution, variable duration

Procedia PDF Downloads 167
301 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)

Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke

Abstract:

Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through improved agronomic techniques. Nutrition is one of the main factors governing the growth and yield of tomato, and the soil, as the main nutrient source, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage, etc., cause widespread nutrient deficiencies. Therefore, synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro and micronutrient supplementation on the growth and yield of tomato were investigated. The selected tomato variety was Maheshi, and plants were grown at the Regional Agricultural Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrient regime and various combinations of Ontario-recommended dosages of secondary and micronutrient fertilizer supplementations. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. Other than the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a Randomized Complete Block Design. All cultural practices were carried out according to the DOA recommendations. The mean data were subjected to statistical analysis using the SAS package and mean separation procedures (Duncan’s Multiple Range Test at the 5% probability level). Treatments containing secondary and micronutrients significantly increased most of the growth parameters, including plant height, plant girth, number of leaves and leaf area index. Fruits harvested from pots amended with macro, secondary and micronutrients performed best in terms of total yield and yield quality, compared to pots amended with the DOA-recommended dosage of fertilizer for tomato. This could be because the application of all essential macro and micronutrients raises photosynthetic activity and promotes efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and thereby stimulating growth and yield. The experiment revealed and highlighted the requirement for essential macro, secondary and micronutrient fertilizer supplementation in tomato farming. The study indicated that macro and micronutrient supplementation practices can influence the growth and yield performance of tomato and are a promising approach to achieving potential tomato yields.
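
A minimal sketch of the RCBD analysis described, with hypothetical yields (the authors used SAS; Duncan's multiple range test is not in the standard Python stack, so Tukey's HSD stands in for mean separation here):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)

# Hypothetical yields: 6 treatments x 3 replicates (blocks)
df = pd.DataFrame({
    "treatment": np.repeat([f"T{i}" for i in range(1, 7)], 3),
    "block": np.tile(["B1", "B2", "B3"], 6),
})
df["yield_kg"] = 2.0 + df["treatment"].str[1].astype(int) * 0.3 \
                 + rng.normal(0, 0.2, 18)

# RCBD: yield modelled with treatment and block effects
model = smf.ols("yield_kg ~ C(treatment) + C(block)", data=df).fit()
print(anova_lm(model))

# Mean separation at the 5% level (Tukey HSD as a stand-in for Duncan)
print(pairwise_tukeyhsd(df["yield_kg"], df["treatment"], alpha=0.05))
```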

Keywords: macro and micronutrients, tomato, SAS package, photosynthates

Procedia PDF Downloads 475
300 Artificial Intelligence for Traffic Signal Control and Data Collection

Authors: Reggie Chandra

Abstract:

Traffic accidents and traffic signal optimization are correlated. However, 70-90% of the traffic signals across the USA are not synchronized. The reason behind that is insufficient resources to create and implement timing plans. In this work, we will discuss the use of a breakthrough Artificial Intelligence (AI) technology to optimize traffic flow and collect 24/7/365 accurate traffic data using a vehicle detection system. We will discuss recent advances in Artificial Intelligence technology, how AI works in vehicle, pedestrian, and bike data collection and in creating timing plans, and what the best workflow for that is. Apart from that, this paper will showcase how Artificial Intelligence makes signal timing affordable. We will introduce a technology that uses Convolutional Neural Networks (CNN) and deep learning algorithms to detect, collect data, develop timing plans and deploy them in the field. Convolutional Neural Networks are a class of deep learning networks inspired by the biological processes in the visual cortex. A neural net is modeled after the human brain. It consists of millions of densely connected processing nodes. It is a form of machine learning where the neural net learns to recognize vehicles through training - which is called Deep Learning. The well-trained algorithm overcomes most of the issues faced by other detection methods and provides nearly 100% traffic data accuracy. Through this continuous learning-based method, we can constantly update traffic patterns, generate an unlimited number of timing plans and thus improve vehicle flow. Convolutional Neural Networks not only outperform other detection algorithms but, in cases such as classifying objects into fine-grained categories, also outperform humans. Safety is of primary importance to traffic professionals, but they don't have the studies or data to support their decisions. Currently, one-third of transportation agencies do not collect pedestrian and bike data. We will discuss how the use of Artificial Intelligence for data collection can help reduce pedestrian fatalities and enhance the safety of all vulnerable road users. Moreover, it provides traffic engineers with tools that allow them to unleash their potential, instead of dealing with constant complaints, snapshots of limited handpicked data, and multiple systems requiring additional work for adaptation. The methodologies used and proposed in the research contain a camera model identification method based on deep Convolutional Neural Networks. The proposed application was evaluated on our data sets, acquired through a variety of daily real-world road conditions, and compared with the performance of commonly used methods that require collecting data by counting, evaluating and adapting it, running it through well-established algorithms, and then deploying it to the field. This work explores themes such as how technologies powered by Artificial Intelligence can benefit your community and how to translate the complex and often overwhelming benefits into a language accessible to elected officials, community leaders, and the public. Exploring such topics empowers citizens with insider knowledge about the potential of better traffic technology to save lives and improve communities. The synergies that Artificial Intelligence brings to traffic signal control and data collection are unsurpassed.
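
A minimal sketch of the kind of CNN classifier described, in PyTorch, assuming 64x64 RGB detection crops labelled as vehicle, pedestrian, bike, or background (the vendor's actual detection network is not described in the abstract):

```python
import torch
import torch.nn as nn

class RoadUserCNN(nn.Module):
    """Small convolutional classifier for 64x64 RGB detection crops."""
    def __init__(self, n_classes: int = 4):  # vehicle, pedestrian, bike, background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # -> 16 x 32 x 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # -> 32 x 16 x 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = RoadUserCNN()
batch = torch.randn(8, 3, 64, 64)      # a batch of camera crops
logits = model(batch)
print(logits.shape)                    # torch.Size([8, 4])
```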

Keywords: artificial intelligence, convolutional neural networks, data collection, signal control, traffic signal

Procedia PDF Downloads 169
299 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers

Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala

Abstract:

The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to explode much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users’ preferences. Instead of storing plain information, classifying different aspects of browsing like Bookmarks, History, and Download Manager entries into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources, has security constraints, and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve accurate classification under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and better privacy/security. This approach provides more relevant results than current standalone solutions because it uses content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the Web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, this solution extracts DOM Tree data from the browser’s rendering engine. This DOM data is dynamic, contextual and secure data that can’t be replicated. This proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms like Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. This solution has a feature to partition the model into multiple chunks, which in turn facilitates lower memory usage instead of loading a complete model. Classification of webpages done through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium Rendering Engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution has resulted in 15% less memory usage (due to the partition method) and 24% less power consumption in comparison with the standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. This engine can be further extended for suggesting dynamic tags and using the classification for different use cases to enhance the browsing experience.
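
A minimal sketch of the Naive Bayes pipeline described, with a bag-of-words over page text standing in for the DOM-derived features and a 70/30 split as in the paper (the toy data below are placeholders, not the dmoztools.net corpus):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

# Hypothetical (page text, category) pairs; the real model was trained on
# ~227.5K dmoztools.net pages across these 8 categories.
pages = [
    ("university courses exams lectures", "education"),
    ("match score league player", "sports"),
    ("flight hotel itinerary visa", "travel"),
    ("discount cart checkout shipping", "shopping"),
    ("symptoms doctor treatment clinic", "health"),
    ("movie trailer celebrity music", "entertainment"),
    ("breaking headline election report", "news"),
    ("level multiplayer console quest", "games"),
] * 20                                  # repeat so a split is possible

texts, labels = zip(*pages)
X = CountVectorizer().fit_transform(texts)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.30, random_state=0, stratify=labels)

clf = MultinomialNB().fit(X_tr, y_tr)   # small footprint, cheap to evaluate
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```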

Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification

Procedia PDF Downloads 163
298 New Media and the Personal Vote in General Elections: A Comparison of Constituency Level Candidates in the United Kingdom and Japan

Authors: Sean Vincent

Abstract:

Within the academic community, there is a consensus that political parties in established liberal democracies are facing a myriad of organisational challenges as a result of falling membership, weakening links to grass-roots support and rising voter apathy. During the same period of party decline and growing public disengagement, political parties have become increasingly professionalised. The professionalisation of political parties owes much to changes in technology, with television becoming the dominant medium for political communication. In recent years, however, it has become clear that a new medium of communication is being utilised by political parties and candidates – New Media. New Media, a term hard to define but related to internet-based communication, offers a potential revolution in political communication. It can be utilised by anyone with access to the internet, and its most widely used communication platforms, such as Facebook and Twitter, are free to use. The advent of Web 2.0 has dramatically changed what can be done with the Internet. Websites now allow candidates at the constituency level to fundraise, organise and set out personalised policies. Social media allows them to communicate with supporters and potential voters practically cost-free. As such, candidate dependency on the national party for resources and image now lies open to debate. Arguing that greater candidate independence may be a natural next step in light of the contemporary challenges faced by parties, this paper examines how New Media is being used by candidates at the constituency level to increase their personal vote. The paper will present findings from research carried out during two elections – the Japanese Lower House election of 2014 and the UK general election of 2015. During these elections, a sample totalling 150 candidates from the three biggest parties in each country was selected, and their New Media output, specifically candidate websites, Twitter and Facebook, was subjected to content analysis. The analysis examines how candidates are using New Media to become more independent from the national party, both functionally, through fundraising and volunteer mobilisation, and politically, through the promotion of personal/local policies. In order to validate the results of the content analysis, this paper will also present evidence from interviews carried out with 17 candidates who stood in the 2014 Japanese Lower House election or the 2015 UK general election. With a combination of statistical analysis and interviews, several conclusions can be made about the use of New Media at the constituency level. The findings show not just a clear difference in the way candidates from each country are using New Media but also differences within countries based upon the particular circumstances of each constituency. While it has not yet replaced traditional methods of fundraising and activist mobilisation, New Media is becoming increasingly important in campaign organisation, and the general consensus amongst candidates is that its importance will continue to grow as politics in both countries becomes more diffuse.

Keywords: political campaigns, elections, new media, political communication

Procedia PDF Downloads 226
297 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis

Authors: Serhat Tüzün, Tufan Demirel

Abstract:

Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and offers suggestions for future studies. The Decision Support Systems literature begins with the building of model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-80s. It then documents the origins of Executive Information Systems, online analytical processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to assist customers in configuring products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this need, this paper was prepared by searching recent articles about DSS. The literature has been reviewed in depth and, by classifying previous studies according to their preferences, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments in the subject, this study aims to analyze the future trends in decision support systems.

Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review

Procedia PDF Downloads 279
296 Linguistic Analysis of Borderline Personality Disorder: Using Language to Predict Maladaptive Thoughts and Behaviours

Authors: Charlotte Entwistle, Ryan Boyd

Abstract:

Recent developments in information retrieval techniques and natural language processing have allowed for greater exploration of psychological and social processes. Linguistic analysis methods for understanding behaviour have provided useful insights within the field of mental health. One area within mental health that has received little attention, though, is borderline personality disorder (BPD). BPD is a common mental health disorder characterised by instability of interpersonal relationships, self-image and affect. It also manifests through maladaptive behaviours, such as impulsivity and self-harm. Examination of language patterns associated with BPD could allow for a greater understanding of the disorder and its links to maladaptive thoughts and behaviours. Language analysis methods could also be used in a predictive way, such as by identifying indicators of BPD or predicting maladaptive thoughts, emotions and behaviours. Additionally, associations that are uncovered between language and maladaptive thoughts and behaviours could then be applied at a more general level. This study explores linguistic characteristics of BPD, and their links to maladaptive thoughts and behaviours, through the analysis of social media data. Data were collected from a large corpus of posts from the publicly available social media platform Reddit, namely, from the ‘r/BPD’ subreddit, whereby people identify as having BPD. Data were collected using the Python Reddit API Wrapper and included all users who had posted within the BPD subreddit. All posts were manually inspected to ensure that they were not posted by someone who clearly did not have BPD, such as people posting about a loved one with BPD. These users were then tracked across all other subreddits in which they had posted, and data from these subreddits were also collected. Additionally, data were collected from a random control group of Reddit users. Disorder-relevant behaviours, such as self-harming or aggression-related behaviours, outlined within Reddit posts were coded by expert raters. All posts and comments were aggregated by user and split by subreddit. Language data were then analysed using the Linguistic Inquiry and Word Count (LIWC) 2015 software. LIWC is a text analysis program that identifies and categorises words based on linguistic and paralinguistic dimensions, psychological constructs and personal concern categories. Statistical analyses of linguistic features could then be conducted. Findings revealed distinct linguistic features associated with BPD, based on Reddit posts, which differentiated these users from the control group. Language patterns were also found to be associated with the occurrence of maladaptive thoughts and behaviours. Thus, this study demonstrates that there are indeed linguistic markers of BPD present on social media. It also implies that language could be predictive of maladaptive thoughts and behaviours associated with BPD. These findings are of importance as they suggest the potential for clinical interventions to be provided based on the language of people with BPD, to try to reduce the likelihood of maladaptive thoughts and behaviours occurring, for example, by social media tracking or engaging people with BPD in expressive writing therapy. Overall, this study has provided a greater understanding of the disorder and how it manifests through language and behaviour.
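
The collection step can be sketched with PRAW, the Python Reddit API Wrapper named above; a minimal sketch, assuming placeholder API credentials (the manual screening, control group, and LIWC steps are omitted):

```python
import praw

# Placeholder credentials; a real script needs a registered Reddit app
reddit = praw.Reddit(client_id="CLIENT_ID",
                     client_secret="CLIENT_SECRET",
                     user_agent="bpd-language-study")

authors = set()
posts = []
for submission in reddit.subreddit("BPD").new(limit=1000):
    if submission.author is None:          # skip deleted accounts
        continue
    authors.add(submission.author.name)
    posts.append({"user": submission.author.name,
                  "subreddit": "BPD",
                  "text": f"{submission.title} {submission.selftext}"})

# Track each user across the other subreddits they have posted in
for name in authors:
    for submission in reddit.redditor(name).submissions.new(limit=100):
        posts.append({"user": name,
                      "subreddit": submission.subreddit.display_name,
                      "text": f"{submission.title} {submission.selftext}"})

# `posts` can then be aggregated by user and subreddit and passed to LIWC
print(len(posts))
```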

Keywords: behaviour analysis, borderline personality disorder, natural language processing, social media data

Procedia PDF Downloads 349
295 The High Potential and the Little Use of Brazilian Class Actions for Prevention and Penalization Due to Workplace Accidents in Brazil

Authors: Sandra Regina Cavalcante, Rodolfo A. G. Vilela

Abstract:

Introduction: Work accidents and occupational diseases are a major public health problem around the world and the main health problem of workers, with high social and economic costs. Brazil has shown progress over recent years, with the development of a regulatory system to improve safety and quality of life in the workplace. However, the situation is far from acceptable, because occurrences remain high and there is a great gap between legislation and reality, generated by the low level of voluntary compliance with the law. Brazilian law provides procedural legal instruments both to compensate the damage caused to the worker's health and to prevent future injuries. In the Judiciary, the prevention idea is embodied in collective action, effected through Brazilian class actions. Inhibitory injunctions may both impose improvements to the working environment and determine the interruption of an activity or a ban on a machine that puts workers at risk. Both the Labor Prosecution Office and trade unions have standing to promote this type of action, which can also provide payment of compensation for collective moral damage. Objectives: To verify how class actions (known as ‘public civil actions’), regulated in the Brazilian legal system to protect diffuse, collective and homogeneous rights, are being used to protect workers' health and safety. Methods: The author identified and evaluated decisions of the Brazilian Superior Labor Court involving collective actions and work accidents. The timeframe chosen was December 2015. The online jurisprudence database was consulted through the page available for public consultation on the court website. The categorization of the data considered the result (court application rejected or accepted), the request type, the amount of compensation and the plaintiff, besides examining the reasoning used by the judges. Results: The High Court issued 21,948 decisions in December 2015, with 1,448 judgments (6.6%) about work accidents and only 20 (0.09%) on collective actions. After analyzing these 20 decisions, it was found that the judgments granted compensation for collective moral damage (85%) and/or obligations to act, that is, changes to improve prevention and safety (71%). The cases had been filed mainly by the Labor Prosecution Office (83%), with the remainder filed by unions (17%). The compensation for collective moral damage averaged 250,000 reais (about US$65,000), but it should be noted that a great range of values was found, covering several kinds of situations repaired by this compensation. This is the court of last resort for this kind of lawsuit, and all decisions were well founded and partially granted the requests made for working environment protection. Conclusions: When triggered, the labor court system provides the requested collective protection in class actions. The values of the awards arbitrated in collective actions are significant and indicate that they create social and economic repercussions, stimulating employers to improve the working environment conditions of their companies. It is necessary to intensify the use of collective actions, because they are more efficient for prevention than reparatory individual lawsuits; however, they have been underutilized, mainly by unions.

Keywords: Brazilian Class Action, collective action, work accident penalization, workplace accident prevention, workplace protection law

Procedia PDF Downloads 274
294 Exploring the Energy Saving Benefits of Solar Power and Hot Water Systems: A Case Study of a Hospital in Central Taiwan

Authors: Ming-Chan Chung, Wen-Ming Huang, Yi-Chu Liu, Li-Hui Yang, Ming-Jyh Chen

Abstract:

Introduction: Hospital buildings require considerable energy for air conditioning, lighting, elevators, heating, and medical equipment. Energy consumption in hospitals is expected to increase significantly due to innovative equipment and continuous development plans. Consequently, the environment and climate will be adversely affected. Hospitals should therefore consider transforming from their traditional role of saving lives to being at the forefront of global efforts to reduce carbon dioxide emissions. As healthcare providers, it is our responsibility to provide a high-quality environment while using as little energy as possible. Purpose / Methods: To compare the energy-saving benefits of solar photovoltaic systems and solar hot water systems, and to determine the proportion of electricity consumption effectively reduced after the installation of solar photovoltaic systems. To comprehensively assess the potential benefits of utilizing solar energy for both photovoltaic (PV) and solar thermal applications in hospitals, a solar PV system covering a total area of 28.95 square meters was installed in 2021. Approval was obtained from the Taiwan Power Company to integrate the system into the hospital's electrical infrastructure for self-use. To measure the performance of the system, a dedicated meter was installed to track monthly power generation, which was then converted into per-area output using an electric energy conversion factor. This research aims to compare the energy efficiency of solar PV systems and solar thermal systems. Results: Using the conversion formula between electrical and thermal energy, we can compare the energy output of the solar heating system and the solar photovoltaic system. The comparative study draws upon data from February 2021 to February 2023, wherein the solar heating system generated an average of 2.54 kWh of energy per panel per day, while the solar photovoltaic system produced 1.17 kWh per panel per day, a difference of approximately 2.17 times between the two systems. Conclusions: After conducting statistical analysis and comparisons, it was found that solar thermal heating systems offer higher energy output and greater benefits than solar photovoltaic systems. Furthermore, an examination of literature data and simulations of the energy and economic benefits of solar thermal water systems and solar-assisted heat pump systems revealed that solar thermal water systems have higher energy density values, shorter payback periods, and lower power consumption than solar-assisted heat pump systems. Through monitoring and empirical research in this study, it has been concluded that a heat pump-assisted solar thermal water system represents a relatively superior energy-saving and carbon-reducing solution for medical institutions. Not only can this system help reduce overall electricity consumption and the use of fossil fuels, but it can also provide more effective heating solutions.
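
The per-panel comparison reduces to simple arithmetic; a minimal check of the ratio reported above (the annual-gap figure is derived here for illustration, not taken from the paper):

```python
# Average measured output per panel per day (Feb 2021 - Feb 2023)
thermal_kwh = 2.54     # solar hot water system, as equivalent heat energy
pv_kwh = 1.17          # solar photovoltaic system

print(f"ratio: {thermal_kwh / pv_kwh:.2f}x")   # -> 2.17x, as reported
print(f"annual gap per panel: {(thermal_kwh - pv_kwh) * 365:.0f} kWh")
```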

Keywords: sustainable development, energy conservation, carbon reduction, renewable energy, heat pump system

Procedia PDF Downloads 81
293 The Different Effects of Mindfulness-Based Relapse Prevention Group Therapy on QEEG Measures in Various Severity Substance Use Disorder Involuntary Clients

Authors: Yu-Chi Liao, Nai-Wen Guo, Chun-Hung Lee, Yung-Chin Lu, Cheng-Hung Ko

Abstract:

Objective: The incidence of behavioral addictions, especially substance use disorders (SUDs), is gradually being taken seriously, given the various physical health problems involved. Mindfulness-based relapse prevention (MBRP) has become a treatment option for promoting long-term health behavior change in recent years. MBRP is a structured protocol that integrates formal meditation practices with the cognitive-behavioral approach of relapse prevention treatment by teaching participants not to engage in reappraisal or savoring techniques. However, considering SUDs as a complex brain disease, questionnaires and symptom evaluation are not sufficient to evaluate the effect of MBRP. Neurophysiological biomarkers such as the quantitative electroencephalogram (QEEG) may represent the curative effects more accurately. This study attempted to find a neurophysiological indicator of MBRP in involuntary clients with SUD of varying severity. Participants and Methods: Thirteen participants (all males) completed 8 weeks of mindfulness-based treatment provided by trained, licensed clinical psychologists. The behavioral data came from the Severity of Dependence Scale (SDS) and the Negative Mood Regulation Scale (NMR), administered before and after MBRP treatment. The QEEG data were recorded simultaneously with executive attention tasks, the Comprehensive Nonverbal Attention Test (CNAT). Two-way repeated-measures (treatment × severity) ANOVA and independent t-tests were used for statistical analysis. Results: The thirteen participants were regrouped into high substance dependence (HS) and low substance dependence (LS) groups by the SDS cut-off. At pretest, the HS group showed a higher SDS total score and lower gamma-wave activity in the Go/No-Go task of the CNAT. Both groups showed a main effect of treatment: a lower frontal theta/beta ratio (TBR) during the simple reaction time task of the CNAT. A main effect also showed that delay errors on the CNAT were lower after MBRP. There was no other difference in the CNAT between groups. However, after MBRP, the HS group made greater progress than the LS group in improving SDS and NMR scores. For the neurophysiological index, the frontal TBR of the HS group during the Go/No-Go task of the CNAT decreased relative to that of the LS group. In contrast, the LS group’s gamma wave showed a significant reduction in the Go/No-Go task of the CNAT. Conclusion: The QEEG data support that MBRP can restore the prefrontal function of involuntary addicted clients and lower their errors in executive attention tasks. However, the improvement from MBRP for clients with high addiction severity is significantly greater than for those with low severity, on both QEEG indicators and negative emotion regulation. Future directions include investigating the reasons for differences in efficacy among different severities of addiction.
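
The TBR and gamma indices named above can be computed from the EEG power spectrum; a minimal sketch using Welch's method, assuming a single frontal channel sampled at 500 Hz (the band limits shown are one common convention, not taken from the paper):

```python
import numpy as np
from scipy.signal import welch

fs = 500                                   # sampling rate (Hz), assumed
rng = np.random.default_rng(4)
eeg = rng.normal(0, 1, fs * 60)            # 60 s of a frontal channel (stand-in)

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def bandpower(f, psd, lo, hi):
    """Integrate the power spectral density over a frequency band."""
    mask = (f >= lo) & (f <= hi)
    return np.trapz(psd[mask], f[mask])

theta = bandpower(f, psd, 4, 7)            # theta band
beta = bandpower(f, psd, 13, 30)           # beta band
gamma = bandpower(f, psd, 30, 45)          # gamma band

print(f"theta/beta ratio: {theta / beta:.2f}, gamma power: {gamma:.4f}")
```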

Keywords: mindfulness, involuntary clients, QEEG, emotion regulation

Procedia PDF Downloads 147
292 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments

Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño

Abstract:

Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, problems of postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of temperature distribution within the fruit over time. Understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in ‘Carabao’ mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values. The solver utilized the finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using the mean temperature difference (MTD). Also, a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTDs of 0.50 and 0.69 °C for the HWT and EHWT, respectively. This indicates good agreement between the simulated and actual temperature values. The data included in the analysis were taken at different locations of probe punctures within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J/s for the HWT and EHWT, respectively. These values decreased abruptly during the first 10 seconds, and a gradual decrease was observed thereafter. Data on heat flux are necessary in the design of heaters. If underestimated, the heating component of a machine will not be able to provide the heat required by certain operations. Conversely, over-estimation will result in wasted energy and resources. This study demonstrated that the simulation was able to estimate temperatures accurately. Thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software in modeling and simulation can be explored for modeling various systems and processes.
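
The boundary condition described (convection against a constant free-stream temperature) can be illustrated with a lumped-capacitance simplification; a minimal sketch with assumed property values, noting that a real mango has internal gradients that the paper's 3-D finite volume model resolves and this schematic does not:

```python
import numpy as np

# Assumed values for illustration only (not taken from the paper)
h = 300.0          # convective heat transfer coefficient, W/(m^2 K)
T_water = 48.0     # hot water bath temperature, deg C
T0 = 25.0          # initial fruit temperature, deg C
rho, c = 1050.0, 3800.0          # density (kg/m^3), specific heat (J/(kg K))
r = 0.04                         # equivalent sphere radius, m
A = 4 * np.pi * r**2             # surface area, m^2
V = (4 / 3) * np.pi * r**3       # volume, m^3

dt, t_end = 1.0, 600.0           # 1 s steps over 10 min
T = T0
for step in range(int(t_end / dt)):
    q = h * A * (T_water - T)    # instantaneous heat flow into fruit, W (J/s)
    T += q * dt / (rho * V * c)  # explicit Euler update of lumped temperature

print(f"surface heat flow at t=0: {h * A * (T_water - T0):.1f} W")
print(f"lumped temperature after 10 min: {T:.1f} deg C")
```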

Keywords: heat transfer, heat treatment, mango, modeling and simulation

Procedia PDF Downloads 247
291 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables, describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered include Magnitude, Rrup and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
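
A minimal sketch of the variable-ranking step with scikit-learn, assuming hypothetical records with the three candidate predictors named above (the synthetic coefficients below are placeholders, not the paper's results):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 600                                          # roughly the NGA subset size

# Hypothetical predictors: magnitude, rupture distance, Vs30
M = rng.uniform(4.5, 7.5, n)
Rrup = rng.uniform(1, 200, n)
Vs30 = rng.uniform(150, 1500, n)
X = np.column_stack([M, np.log(Rrup), np.log(Vs30)])

# Hypothetical log spectral acceleration with noise
y = 1.2 * M - 1.6 * np.log(Rrup) - 0.4 * np.log(Vs30) + rng.normal(0, 0.5, n)

# Standardize so the L1 penalty treats all predictors on the same scale
Xs = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=5).fit(Xs, y)                 # penalty chosen by CV
for name, coef in zip(["M", "ln Rrup", "ln Vs30"], lasso.coef_):
    print(f"{name:>8}: {coef:+.3f}")             # zeroed coefficients drop out
```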

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 330
290 Artificial Neural Network and Satellite Derived Chlorophyll Indices for Estimation of Wheat Chlorophyll Content under Rainfed Condition

Authors: Muhammad Naveed Tahir, Wang Yingkuan, Huang Wenjiang, Raheel Osman

Abstract:

Numerous models are used in prediction and decision-making processes, but most of them are linear, and linear models reach their limitations when there is non-linearity in the data; therefore, accurate estimation in the natural environment is difficult. Artificial Neural Networks (ANN) have found extensive acceptance in addressing the modeling of the complex, non-linear real world. ANNs have more general and flexible functional forms than traditional statistical methods can effectively deal with. The link between information technology and agriculture will become firmer in the near future. Monitoring crop biophysical properties non-destructively can provide a rapid and accurate understanding of crop response to various environmental influences. Crop chlorophyll content is an important indicator of crop health and therefore of crop yield estimation. In recent years, remote sensing has been accepted as a robust tool for site-specific management by detecting crop parameters at both local and large scales. The present research combined an ANN model with satellite-derived chlorophyll indices from LANDSAT 8 imagery for real-time wheat chlorophyll estimation. Cloud-free scenes of LANDSAT 8 were acquired (February-March, 2016-17) at the same time as a ground-truthing campaign was performed for chlorophyll estimation using a SPAD-502 meter. Different vegetation indices were derived from the LANDSAT 8 imagery using ERDAS Imagine (v. 2014) software for chlorophyll determination. The vegetation indices included the Normalized Difference Vegetation Index (NDVI), Green Normalized Difference Vegetation Index (GNDVI), Chlorophyll Absorption Ratio Index (CARI), Modified Chlorophyll Absorption Ratio Index (MCARI) and Transformed Chlorophyll Absorption Ratio Index (TCARI). For ANN modeling, MATLAB and SPSS (ANN) tools were used. The Multilayer Perceptron (MLP) in MATLAB provided very satisfactory results. For the MLP, 61.7% of the data were used for training, 28.3% for validation and the remaining 10% to evaluate and validate the ANN model results. For error evaluation, the sum of squares error and relative error were used. The ANN model summary showed a sum of squares error of 10.786 and an average overall relative error of 0.099. The MCARI and NDVI were revealed to be the more sensitive indices for assessing wheat chlorophyll content, with the highest coefficients of determination, R² = 0.93 and 0.90, respectively. The results suggested that the use of high spatial resolution satellite imagery for the retrieval of crop chlorophyll content using an ANN model provides an accurate, reliable assessment of crop health status at a larger scale, which can help in managing crop nutrition requirements in real time.
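
A minimal sketch of the index-to-chlorophyll step, computing NDVI from red and near-infrared reflectance and fitting an MLP against SPAD readings (the band values and SPAD readings below are stand-ins for the LANDSAT 8 / SPAD-502 data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Stand-ins for LANDSAT 8 surface reflectance at the ground-truth plots
red = rng.uniform(0.03, 0.15, 200)        # band 4 (red)
nir = rng.uniform(0.25, 0.50, 200)        # band 5 (near infrared)

ndvi = (nir - red) / (nir + red)          # Normalized Difference Vegetation Index

# Hypothetical SPAD-502 readings correlated with NDVI
spad = 20 + 35 * ndvi + rng.normal(0, 2, 200)

X = ndvi.reshape(-1, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, spad, test_size=0.3, random_state=0)

mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
print(f"R^2 on held-out plots: {mlp.score(X_te, y_te):.2f}")
```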

Keywords: ANN, chlorophyll content, chlorophyll indices, satellite images, wheat

Procedia PDF Downloads 146
289 Management of Dysphagia after Supraglottic Laryngectomy

Authors: Premalatha B. S., Shenoy A. M.

Abstract:

Background: Rehabilitation of swallowing is as vital as speech rehabilitation in surgically treated head and neck cancer patients, to maintain nutritional support, enhance wound healing, and improve quality of life. Aspiration following supraglottic laryngectomy is very common, and its rehabilitation is crucial, requiring the involvement of a speech therapist in close contact with the head and neck surgeon. Objectives: To examine swallowing outcomes after intensive therapy in supraglottic laryngectomees. Materials: Thirty-nine supraglottic laryngectomees participated in the study, 36 males and 3 females, aged 32-68 years. Eighteen subjects had undergone standard supraglottic laryngectomy (Group 1) for supraglottic lesions, whereas 21 had undergone extended supraglottic laryngectomy (Group 2) for base-of-tongue and lateral pharyngeal wall lesions. Prior to surgery, a visit by the speech pathologist was mandatory to assess suitability for surgery and rehabilitation. Dysphagia rehabilitation started after decannulation of the tracheostoma, focusing on orientation to the anatomy and the physiological changes produced by surgery, tailored to each individual's type and extent of surgery. A supraglottic diet (soft solids) with the supraglottic swallow method was advocated to prevent aspiration. The success of the intervention was documented as the number of sessions taken to swallow different food consistencies and as the percentage of subjects who achieved satisfactory swallowing, in terms of number of weeks, in both groups. Results: Statistical data were computed in two ways in both groups: 1) the percentage of subjects who swallowed satisfactorily within a time frame from less than 3 weeks to more than 6 weeks, and 2) the number of sessions taken to swallow each food consistency without aspiration. In Group 1 (standard supraglottic laryngectomy), 61% (n=11) were successfully rehabilitated, though swallowing normalcy was delayed to an average of the 29th post-operative day (3-6 weeks). Thirty-three percent (n=6) could swallow satisfactorily without aspiration before 3 weeks, and only 5% (n=1) needed more than 6 weeks to achieve normal swallowing ability. In Group 2 (extended supraglottic laryngectomy), only 47% (n=10) achieved satisfactory swallowing by 3-6 weeks, and 24% (n=5) achieved normal swallowing ability before 3 weeks. Around 4% (n=1) needed more than 6 weeks, and as many as 24% (n=5) were still being supplemented with nasogastric feeding 8-10 months post-operatively because of severe aspiration. As far as food consistencies were concerned, Group 1 subjects were able to swallow all types without aspiration much earlier than Group 2 subjects: Group 1 needed only 8 swallowing therapy sessions for thickened soft solids and 15 sessions for liquids, whereas Group 2 required 14 sessions for soft solids and 17 sessions for liquids to achieve swallowing normalcy without aspiration. Conclusion: The study highlights the importance of dysphagia intervention by the speech pathologist in supraglottic laryngectomees.

Keywords: dysphagia management, supraglottic diet, supraglottic laryngectomy, supraglottic swallow

Procedia PDF Downloads 231
288 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples

Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges

Abstract:

Soils are at the crossroads of many issues, such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth, so there is a growing demand for soil information at national and global scales. Unfortunately, many countries do not have detailed soil maps, and where they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. There is therefore an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed but depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using spatially exhaustive products such as climatic grids, remote sensing products, vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. DSM thus generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called "ancillary covariates" derived from other available spatial products. The model is then generalized over grids where the soil parameters are unknown in order to predict them, and the prediction performance is validated using various methods. With the growing demand for soil information at national and global scales and the increase in available spatial covariates, national and continental DSM initiatives are multiplying. This short review illustrates the main national and continental advances in Europe, the diversity of approaches and databases used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products delivered during the last ten years. Scientific production on this topic is growing continuously, and new models and approaches are being developed at an incredible speed. Most DSM products rely mainly on machine learning (ML) prediction models and/or pedotransfer functions (PTFs), with calibration data coming from laboratory soil analyses or from existing conventional maps. However, some scientific issues remain to be solved, along with political and legal ones related, for instance, to data sharing and to differing laws between countries. Other issues relate to communication with end-users and to education, especially on the use of uncertainty. Overall, progress is substantial, and the willingness of institutes and countries to join efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or laboratory standards between countries, but numerous initiatives are ongoing at the EU level and globally. All this progress is scientifically stimulating and promises tools to improve and monitor soil quality at the national, EU, and global levels.
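
The calibrate-and-predict workflow at the heart of DSM can be sketched in a few lines; the random forest, covariate names, and synthetic data below are illustrative assumptions, not any specific national product.

```python
# Minimal DSM-style sketch: fit an ML model on point soil observations
# plus covariates, validate, then predict onto an exhaustive grid.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_points = 500
covariates = rng.normal(size=(n_points, 3))  # e.g. elevation, slope, mean rainfall
soil_property = (3.0 + 1.5 * covariates[:, 0]
                 - 0.8 * covariates[:, 1]
                 + rng.normal(0, 0.3, n_points))

model = RandomForestRegressor(n_estimators=300, random_state=2)
print("cross-validated R2:", cross_val_score(model, covariates, soil_property, cv=5).mean())

model.fit(covariates, soil_property)
grid_covariates = rng.normal(size=(10_000, 3))  # same covariates, sampled on the prediction grid
predicted_map = model.predict(grid_covariates)  # one predicted value per grid cell
```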

Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review

Procedia PDF Downloads 184
287 Comparative Effects of Resveratrol and Energy Restriction on Liver Fat Accumulation and Hepatic Fatty Acid Oxidation

Authors: Iñaki Milton-Laskibar, Leixuri Aguirre, Maria P. Portillo

Abstract:

Introduction: Energy restriction is an effective approach to preventing liver steatosis. However, for social and economic reasons among others, compliance with this treatment protocol is often very poor, especially in the long term. Resveratrol, a natural polyphenolic compound of the stilbene group, has been widely reported to mimic the effects of energy restriction. Objective: To analyze the effects of resveratrol, under normoenergetic feeding conditions and under mild energy restriction, on liver fat accumulation and hepatic fatty acid oxidation. Methods: 36 male six-week-old rats were fed a high-fat, high-sucrose diet for 6 weeks in order to induce steatosis. Rats were then divided into four groups and fed a standard diet for 6 additional weeks: control group (C), resveratrol group (RSV; resveratrol 30 mg/kg/d), restricted group (R; 15% energy restriction), and combined group (RR; 15% energy restriction and resveratrol 30 mg/kg/d). Liver triacylglycerol (TG) and total cholesterol contents were measured using commercial kits. Carnitine palmitoyltransferase 1a (CPT1a) and citrate synthase (CS) activities were measured spectrophotometrically. TFAM (mitochondrial transcription factor A) and peroxisome proliferator-activated receptor alpha (PPARα) protein contents, as well as the acetylated/total ratio of peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1α), were analyzed by Western blot. Statistical analysis was performed using one-way ANOVA with the Newman-Keuls post-hoc test. Results: No differences were observed among the four groups in liver weight and cholesterol content, but the three treated groups showed reduced TG compared to the control group, with the restricted groups showing the lowest values (no difference between them). Higher CPT1a and CS activities were observed in the groups supplemented with resveratrol (RSV and RR), with no difference between them. The acetylated/total PGC1α ratio was lower in the treated groups (RSV, R, and RR) than in the control group, with no differences among them. As far as TFAM protein expression is concerned, only the RR group reached a higher value. Finally, no changes were observed in PPARα protein expression. Conclusions: Resveratrol administration is an effective intervention for reducing liver triacylglycerol content, but a mild energy restriction is even more effective. The mechanisms of action of the two strategies differ: resveratrol, but not energy restriction, seems to act by increasing fatty acid oxidation, although mitochondriogenesis does not seem to be induced. When the two treatments (resveratrol administration and mild energy restriction) were combined, no additive or synergistic effects were observed. Acknowledgements: MINECO-FEDER (AGL2015-65719-R), Basque Government (IT-572-13), University of the Basque Country (ELDUNANOTEK UFI11/32), Institute of Health Carlos III (CIBERobn). Iñaki Milton holds a fellowship from the Basque Government.
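
As a rough illustration of the statistical comparison, the sketch below runs a one-way ANOVA across the four groups with SciPy; Tukey's HSD (available in SciPy ≥ 1.8) stands in for the Newman-Keuls post-hoc test, which SciPy does not provide, and the triacylglycerol values are invented.

```python
# Hedged sketch: one-way ANOVA over four diet groups, then a post-hoc test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
tg = {
    "C":   rng.normal(25, 3, 9),  # control
    "RSV": rng.normal(20, 3, 9),  # resveratrol
    "R":   rng.normal(16, 3, 9),  # energy restriction
    "RR":  rng.normal(15, 3, 9),  # restriction + resveratrol
}
f, p = stats.f_oneway(*tg.values())
print(f"ANOVA: F = {f:.2f}, p = {p:.4f}")
print(stats.tukey_hsd(*tg.values()))  # pairwise group comparisons
```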

Keywords: energy restriction, fat, liver, oxidation, resveratrol

Procedia PDF Downloads 211
286 Peripheral Neuropathy after Locoregional Anesthesia

Authors: Dalila Chaid, Bennameur Fedilli, Mohammed Amine Bellelou

Abstract:

The study focuses on the experience of lower-limb amputees, who face both physical and psychological challenges due to their disability. Chronic neuropathic pain and various other types of limb pain are common in these patients, who often require orthopaedic interventions for issues such as dressings, infection, ulceration, and bone-related problems. Research Aim: The aim of this study is to determine the most suitable anaesthetic technique for lower-limb amputees, one that provides the greatest comfort and prolonged analgesia. The study also aims to demonstrate the effectiveness and cost-effectiveness of ultrasound-guided locoregional anaesthesia (LRA) in this patient population. Methodology: This is an observational analytical study conducted over eight years, from 2010 to 2018, including a total of 955 revisions performed on lower-limb stumps. The parameters analyzed include the effectiveness of the block and the use of sedation, the duration of the block, post-operative visual analog scale (VAS) scores, and patient comfort. Findings: The findings highlight the benefits of ultrasound-guided LRA in providing comfort by optimizing post-operative analgesia, which can contribute to psychological and bodily repair in lower-limb amputees. The study also emphasizes the use of alpha-2 agonist adjuvants with sedative and analgesic properties, long-acting local anaesthetics, and larger volumes for better outcomes. Theoretical Importance: This study contributes to existing knowledge by emphasizing the importance of choosing an appropriate anaesthetic technique for lower-limb amputees, highlighting the potential of ultrasound-guided LRA and of specific adjuvants and local anaesthetics in improving post-operative analgesia and overall patient outcomes. Data Collection and Analysis Procedures: Data were collected through the analysis of medical records and relevant documentation for the 955 cases included in the study. The effectiveness of the anaesthetic technique, duration of the block, post-operative pain scores, and patient comfort were analyzed using statistical methods. Question Addressed: The study addresses the question of which anaesthetic technique is most suitable for lower-limb amputees to provide optimal comfort and prolonged analgesia. Conclusion: The study concludes that ultrasound-guided LRA, together with alpha-2 agonist adjuvants, long-acting local anaesthetics, and larger volumes, can be an effective approach to providing comfort and improving post-operative analgesia for lower-limb amputees, potentially contributing to their psychological and bodily repair. The findings have implications for clinical practice in the management of lower-limb amputees, highlighting the importance of personalized anaesthetic approaches for better outcomes.

Keywords: neuropathic pain, ultrasound-guided peripheral nerve block, DN4 questionnaire, EMG

Procedia PDF Downloads 78
285 Basic Life Support Training in Rural Uganda: A Mixed Methods Study of Training and Attitudes towards Resuscitation

Authors: William Gallagher, Harriet Bothwell, Lowri Evans, Kevin Jones

Abstract:

Background: Worldwide, a third of adult deaths are caused by cardiovascular disease, with a high proportion occurring in the developing world. Contributing to these poor outcomes are suboptimal assessment, treatment, and monitoring of the acutely unwell patient. Successful training in trauma and neonatal care is recognised in the developing world, but there is little literature supporting adult resuscitation. As far as the authors are aware, no literature has been published on resuscitation training in Uganda since 2000, when a resuscitation training officer ran sessions in neonatal and paediatric resuscitation. The aim of this project was to offer training in Basic Life Support (BLS) to staff and healthcare students based at Villa Maria Hospital in the Kalungu District, Central Uganda. The project was undertaken as a student selected component (SSC) offered by Swindon Academy, based at the Great Western Hospital, to medical students in their fourth year of the undergraduate programme. Methods: Semi-structured, informal interviews and focus groups were conducted with different clinicians in the hospital, designed to focus on the level of training and understanding of BLS. A training session was devised focusing on BLS (excluding the use of an automated external defibrillator), with pre- and post-training questionnaires and clinical assessments. Three training sessions were run for different cohorts: a pilot session for 5 Ugandan medical students, a second session for a group of 8 nursing and midwifery students, and a third for physicians. The data collected were analysed in Excel. Paired t-tests determined statistical significance between pre- and post-test scores and confidence before and after the sessions. Average clinical skill assessment scores were converted to percentages based on the area of BLS being assessed. Results: 27 participants were included in the analysis; 14 received 'small group training' whilst 13 received 'large group training'. 88% of all participants had received some form of resuscitation training. Of these, 46% had received theory training, 27% practical training, and only 15% both; 12% had received no training. On average, participants showed a significant increase of 5.3 in self-assessed confidence (p < 0.05), and they rated the session as very useful. Analysis of qualitative data from the clinician interviews is ongoing, but themes identified include rescue breaths being considered the most important aspect of resuscitation, and doubts about a 'good' outcome from resuscitation. Conclusions: The results of this small study reflect the need for regular formal training in BLS in low-resource settings. The active engagement and positive opinions concerning the utility of the training are promising, as is the evidence of improvement in knowledge.
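
The pre/post comparison reduces to a paired t-test; a minimal sketch follows, with illustrative confidence scores chosen so the mean increase matches the reported 5.3 (the individual values are not the study's data).

```python
# Paired t-test on pre- vs post-training self-assessed confidence.
import numpy as np
from scipy import stats

pre = np.array([3, 4, 2, 5, 3, 4, 2, 3, 4, 3], dtype=float)
post = pre + np.array([5, 6, 5, 4, 6, 5, 6, 5, 5, 6], dtype=float)

t, p = stats.ttest_rel(pre, post)
print(f"mean increase = {np.mean(post - pre):.1f}, t = {t:.2f}, p = {p:.4g}")
```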

Keywords: basic life support, education, resuscitation, sub-Saharan Africa, training, Uganda

Procedia PDF Downloads 148
284 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for facial expression and emotion detection by extracting features automatically. However, deep networks require large training datasets to extract features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting a different type of coarse feature with fine-grained detail, breaking the symmetry of the produced information; in effect, we leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax tends to reach the gold labels too quickly, driving the model to overfitting, because it cannot determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of overfitting by using a dynamic rather than static input tensor shape in the SoftMax layer, together with a specified soft margin; this acts as a controller of how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting same-class labels and separating different-class labels in the normalized log domain: we penalize predictions with high divergence from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning more weight to classes lying close to one another (the "hard labels to learn"). In this way, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on addressing the weak convergence of the Adam optimizer on non-convex problems: our optimizer works with an alternative gradient-update procedure using an exponentially weighted moving average for faster convergence, and exploits weight decay to reduce the learning rate drastically near optima so as to reach the dominant local minimum. We demonstrate the superiority of the proposed work by surpassing the best reported results on three widely used facial expression recognition datasets: 93.30% on FER-2013 (a 16% improvement over the previous first rank after 10 years), 90.73% on RAF-DB, and 100% k-fold average accuracy on CK+, providing top performance compared to networks that require much larger training datasets.
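
To make the margin idea concrete, here is a hedged PyTorch sketch of a static soft-margin softmax cross-entropy: the true-class logit is reduced by a margin before the softmax, forcing more separated embeddings. The paper's dynamic variant, which adapts the margin and tensor shape during training, is not reproduced here.

```python
# Sketch of a soft-margin softmax loss (static margin; the dynamic
# scheduling described in the abstract is an extension of this idea).
import torch
import torch.nn.functional as F

def soft_margin_cross_entropy(logits, target, margin=0.35):
    # Subtract the margin from the true-class logit (out of place, so
    # gradients flow cleanly), then apply ordinary cross-entropy.
    margin_mat = F.one_hot(target, num_classes=logits.size(1)).float() * margin
    return F.cross_entropy(logits - margin_mat, target)

logits = torch.randn(8, 7, requires_grad=True)  # batch of 8, 7 emotion classes
target = torch.randint(0, 7, (8,))
loss = soft_margin_cross_entropy(logits, target)
loss.backward()
print(float(loss))
```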

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 74
283 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach

Authors: Geraldine G. Granados Vazquez

Abstract:

Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause; the hazard of death faced by a group or individual, culminating in that irreversible damage, constitutes the condition of vulnerability. Risk is a dynamic concept: it depends on environmental, social, economic, and political conditions, so vulnerability can only be evaluated in relative terms. This research focuses on building a model to evaluate the risk, or propensity, of death in past urban societies in connection with the everyday life of individuals, considering that death can be the consequence of two coexisting issues: hazard and the deterioration of resistance to destruction. One of the most important discussions in bioarchaeology concerns health and living conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. Accordingly, this research proposes a theoretical-methodological model to assess the vulnerability of death in past urban groups, intended to be useful for evaluating the risk of death in light of their sociohistorical context and their intrinsic biological features. The model proposes four areas for assessing vulnerability; the first three use statistical or quantitative methods, while the fourth, corresponding to embodiment, is based on qualitative analysis. The four areas and their techniques are: a) Demographic dynamics. From the distribution of age at death, mortality is analyzed using life tables, from which four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between individual characteristics and age at death. Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias: height estimates reflect the individual's nutrition and health history in specific groups, while enamel hypoplasias record the individual's first years of life. c) Inequality. Space reflects the various sectors of society, in ancient cities too; in general terms, spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. Everyone's story leaves some evidence on the body, even in the bones, which leads us to think about individuals' dynamic relations in time and space; consequently, the micro-analysis of persons assesses vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using some Mesoamerican cases as examples, this research demonstrates that not only the intrinsic characteristics related to age and sex make individuals vulnerable, but also the social and historical context that determines their state of frailty before death. An attenuating factor for past groups is that some basic aspects, such as the role individuals played in everyday life, escape our comprehension and are still under discussion.
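
The life-table step in area (a) can be sketched from an age-at-death distribution alone; the age classes and counts below are invented for illustration.

```python
# Abridged life table from an age-at-death distribution.
import numpy as np
import pandas as pd

ages = ["0-4", "5-9", "10-14", "15-19", "20-29", "30-39", "40-49", "50+"]
deaths = np.array([30, 8, 5, 7, 20, 15, 10, 5], dtype=float)  # Dx: deaths per age class

dx = deaths / deaths.sum() * 100                         # percentage of deaths per class
lx = 100 - np.concatenate(([0.0], np.cumsum(dx)[:-1]))   # survivorship entering each class
qx = dx / lx                                             # probability of dying within the class

print(pd.DataFrame({"Dx": deaths, "dx": dx.round(1),
                    "lx": lx.round(1), "qx": qx.round(3)}, index=ages))
```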

Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability

Procedia PDF Downloads 225
282 Correlation between the Levels of Some Inflammatory Cytokines/Haematological Parameters and Khorana Scores of Newly Diagnosed Ambulatory Cancer Patients

Authors: Angela O. Ugwu, Sunday Ocheni

Abstract:

Background: Cancer-associated thrombosis (CAT) is a cause of morbidity and mortality among cancer patients. Several risk factors for developing venous thromboembolism (VTE), such as chemotherapy and immobilization, coexist in cancer patients, contributing to their higher risk of VTE compared to non-cancer patients. This study aimed to determine whether there is any correlation between the levels of some inflammatory cytokines/haematological parameters and the Khorana scores of newly diagnosed, chemotherapy-naïve ambulatory cancer patients (CNACP). Methods: This was a cross-sectional analytical study carried out from June 2021 to May 2022. Eligible newly diagnosed cancer patients aged 18 years and above (case group) were enrolled consecutively from the adult oncology clinics of the University of Nigeria Teaching Hospital, Ituku/Ozalla (UNTH). The control group comprised blood donors at the UNTH Ituku/Ozalla blood bank and healthy members of the Medical and Dental Consultants Association of Nigeria (MDCAN), UNTH Chapter. Blood samples collected from the participants were assayed for IL-6, TNF-α, and haematological parameters such as haemoglobin, white blood cell count (WBC), and platelet count. Data were entered into an Excel worksheet and analyzed using Statistical Package for Social Sciences (SPSS) software version 21.0 for Windows. A P value of < 0.05 was considered statistically significant. Results: A total of 200 participants (100 cases and 100 controls) were included in the study. The overall mean age of the participants was 47.42 ± 15.1 years (range 20-76). The sociodemographic characteristics of the two groups, including age, sex, educational level, body mass index (BMI), and occupation, were similar (P > 0.05). On one-way ANOVA, there were significant differences between the mean levels of interleukin-6 (IL-6) (p = 0.036) and tumor necrosis factor-α (TNF-α) (p = 0.001) across the three Khorana score groups of the case group. Pearson's correlation analysis showed a significant positive correlation between the Khorana scores and IL-6 (r = 0.28, p = 0.031), TNF-α (r = 0.254, p = 0.011), and the platelet-lymphocyte ratio (PLR) (r = 0.240, p = 0.016). The mean serum level of IL-6 was significantly higher in CNACP than in the healthy controls [8.98 (8-12) pg/ml vs. 8.43 (2-10) pg/ml, P = 0.0005]. There were also significant differences in the mean haemoglobin (Hb) level (P < 0.001), white blood cell (WBC) count (P < 0.001), and platelet count (P = 0.005) between the two groups of participants. Conclusion: There is a significant positive correlation between the serum levels of IL-6, TNF-α, and PLR and the Khorana scores of CNACP. The mean IL-6, TNF-α, PLR, WBC, and platelet values were significantly higher in CNACP than in the healthy controls. Ambulatory cancer patients with high-risk Khorana scores may benefit from anti-inflammatory drugs because of the positive correlation with inflammatory cytokines. Recommendations: Ambulatory cancer patients with Khorana scores of 2 or more may benefit from thromboprophylaxis, given their higher scores. A multicenter study with a heterogeneous population and larger sample size is recommended to further elucidate the relationship between IL-6, TNF-α, PLR, and the Khorana scores among cancer patients in the Nigerian population.
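
The central computation is a Pearson correlation between each marker and the Khorana score; a minimal sketch with synthetic values follows.

```python
# Pearson correlation between a serum marker and Khorana scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
khorana = rng.integers(0, 4, 100)                    # scores 0-3
il6 = 8.0 + 0.5 * khorana + rng.normal(0, 1.0, 100)  # pg/ml, loosely rising with the score

r, p = stats.pearsonr(khorana, il6)
print(f"r = {r:.2f}, p = {p:.4g}")
```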

Keywords: thromboprophylaxis, cancer, Khorana scores, inflammatory cytokines, haematological parameters

Procedia PDF Downloads 82
281 Prevalence and Risk Factors of Musculoskeletal Disorders among School Teachers in Mangalore: A Cross Sectional Study

Authors: Junaid Hamid Bhat

Abstract:

Background: Musculoskeletal disorders are one of the main causes of occupational illness. The mechanisms and factors that raise the risk of musculoskeletal disorders, such as repetitive work, physical effort, and posture, now appear to have been properly identified, yet teachers' exposure to work-related musculoskeletal disorders remains insufficiently described in the literature. Little research has investigated the prevalence and risk factors of musculoskeletal disorders in the teaching profession; very few studies are available, and none are evident in India. Purpose: To determine the prevalence of musculoskeletal disorders and to identify and measure the association of the risk factors responsible for developing musculoskeletal disorders among school teachers. Methodology: An observational cross-sectional study was carried out. 500 school teachers from primary, middle, high, and secondary schools were selected based on eligibility criteria. Signed consent was obtained, and a self-administered, validated questionnaire was used. Descriptive statistics were used to compute the mean and standard deviation, and frequency and percentage to estimate the prevalence of musculoskeletal disorders among school teachers. Data analysis was done using SPSS version 16.0. Results: Results indicated a high pain prevalence (99.6%) among school teachers during the past 12 months. Neck pain (66.1%), low back pain (61.8%), and knee pain (32.0%) were the most prevalent musculoskeletal complaints, and the prevalence of shoulder pain was also high (25.9%). 52.0% of subjects reported pain that was disabling in nature, 44.8% reported pain-related sleep disturbance, and pain was found to be associated with work in 87.5%. A significant association was found between musculoskeletal disorders and sick leave/absenteeism. Conclusion: Work-related musculoskeletal disorders, particularly neck pain, low back pain, and knee pain, are highly prevalent among school teachers, and identifiable risk factors are responsible for their development. There is little awareness of musculoskeletal disorders among school teachers, who face heavy workloads and prolonged/static postures. Further research should concentrate on specific risk factors such as repetitive movements, psychological stress, and ergonomic factors, should be carried out all over the country, and should follow school teachers carefully over a period of time. An ergonomic investigation is also needed to decrease work-related musculoskeletal disorder problems. Implication: Recall bias and self-reporting can be considered limitations, and cause-and-effect inferences cannot be ascertained. Based on these results, it is important to disseminate general recommendations for the prevention of work-related musculoskeletal disorders with regard to the suitability of furniture, equipment and work tools, environmental conditions, work organization, and rest time for school teachers. School teachers in the early stages of their careers should try to adopt ergonomically favorable positions while performing their work, for a safe and healthy life later. Employers should be educated on practical aspects of prevention to reduce musculoskeletal disorders, since changes in workplace and work organization and physical/recreational activities are required.
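
The reported association between musculoskeletal disorders and absenteeism is the kind of relationship a chi-square test on a contingency table captures; the cell counts below are invented purely for illustration.

```python
# Chi-square test of association: disabling pain vs. sick leave.
import numpy as np
from scipy import stats

table = np.array([[150, 110],   # disabling pain: sick leave yes / no
                  [60, 180]])   # no disabling pain: sick leave yes / no
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```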

Keywords: work-related musculoskeletal disorders, school teachers, risk factors

Procedia PDF Downloads 277
280 Cardiac Rehabilitation Program and Health-Related Quality of Life; A Randomized Control Trial

Authors: Zia Ul Haq, Saleem Muhammad, Naeem Ullah, Abbas Shah, Abdullah Shah

Abstract:

Pakistan, as a developing country, faces a double burden of communicable and non-communicable disease. Secondary prevention of ischaemic heart disease in developing countries is a dire need for public health specialists, clinicians, and policy makers. There is some evidence that psychotherapeutic measures, including psychotherapy, recreation, exercise, and stress management training, have a positive impact on secondary prevention of cardiovascular diseases, but there are some contradictory findings as well. A cardiac rehabilitation program (CRP) has not yet been fully implemented in Pakistan, and its psychological, physical, and specific health-related quality of life (HRQoL) outcomes need assessment with respect to practicality, effectiveness, and success. Objectives: To determine the effect of a cardiac rehabilitation program (CRP) on the health-related quality of life (HRQoL) measures of post-MI patients compared to usual care. Hypothesis: Post-MI patients who receive the intervention (CRP) will have better HRQoL than those who receive usual care. Methods: This randomized control trial was conducted at the Cardiac Rehabilitation Unit of Lady Reading Hospital (LRH), Peshawar, the biggest hospital of the province of Khyber Pakhtunkhwa (KP). A total of 206 participants who had a recent first myocardial infarction were inducted into the study. Participants were randomly allocated into two groups, a usual care group (UCG) and a cardiac rehabilitation group (CRG), by the permuted-block randomization (PBR) method. The CRP was conducted in the CRG in two phases. Three HRQoL outcomes, i.e., the general health questionnaire (GHQ), self-rated health (SRH), and the MacNew quality of life after myocardial infarction (MacNew QLMI), were assessed at baseline and follow-up visits in both groups. Data were entered and analyzed with appropriate statistical tests in Stata version 12. Results: A total of 195 participants were assessed at follow-up, owing to loss to follow-up. The mean age of the participants was 53.66 ± 8.3 years. Males were dominant in both groups, i.e., 150 (76.92%). Regarding educational status, the majority of participants in both groups were illiterate, i.e., 128 (65.64%). Surprisingly, 139 (71.28%) overall were non-smokers. Comorbid status was positive in 120 (61.54%) of all patients. SRH at follow-up in the UCG and CRG was 4.06 (95% CI: 3.93, 4.19) and 2.36 (95% CI: 2.2, 2.52), respectively (p < 0.001). GHQ at follow-up in the UCG and CRG was 20.91 (95% CI: 18.83, 21.97) and 7.43 (95% CI: 6.59, 8.27), respectively (p < 0.001). MacNew QLMI at follow-up in the UCG and CRG was 3.82 (95% CI: 3.7, 3.94) and 5.62 (95% CI: 5.5, 5.74), respectively (p < 0.001). All the HRQoL measures showed strongly significant improvement in the CRG at follow-up. Conclusion: HRQoL improved in post-MI patients after a comprehensive CRP. Patient education and supervision are needed when patients are involved in their rehabilitation activities. Establishing a CRP in cardiac units, recruiting post-discharge MI patients, and offering them the CRP does not impose high costs and can result in significant improvement in HRQoL measures. Trial registration no: ACTRN12617000832370
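
Permuted-block randomization, the allocation method named above, is easy to sketch: each fixed-size block contains an equal, shuffled mix of the two arms, keeping group sizes balanced throughout recruitment. The block size of 4 and the seed below are assumptions for illustration.

```python
# Sketch of permuted-block randomization into UCG vs. CRG.
import random

def permuted_block_allocation(n_participants, block_size=4, seed=42):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ["UCG", "CRG"] * (block_size // 2)  # balanced block
        rng.shuffle(block)                          # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

print(permuted_block_allocation(206)[:12])  # first 12 assignments
```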

Keywords: cardiovascular diseases, cardiac rehabilitation, health-related quality of life, HRQoL, myocardial infarction, quality of life, QoL, rehabilitation, randomized control trial

Procedia PDF Downloads 227
279 Statistical Analysis to Compare between Smart City and Traditional Housing

Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh

Abstract:

Smart cities play important roles in real life. Integration and automation between the different features of modern cities and information technologies improve smart-city efficiency, energy management, human and equipment resource management, quality of life, and utilization of resources for customers. One difficulty on this path is the use of, interfaces between, and links among software, hardware, and other IT technologies to develop and optimize processes in various business fields, such as construction, supply chain management, and transportation, while also achieving cost-effectiveness and resource reduction. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological concerns. Energy management is one of the most important matters for smart houses in smart cities and communities, because of the sensitivity of energy systems, the need to reduce energy wastage, and the need to make the most of the required energy. In particular, the energy consumption of smart houses weighs considerably in the economic balance and energy management of a smart city, as it can yield significant energy savings and reductions in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables, which are analyzed, together with the observations, through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city: first, by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters, and other major elements, interfacing software and hardware devices with IT technologies; and second, by enhancing energy management through energy saving within the smart house via efficient variables. The main objective of the smart city and smart houses is to reduce energy consumption and increase energy efficiency through the selected variables, with a comfortable and harmless atmosphere for customers, in combination with control over energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart-city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence on the model. The resulting analysis can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, given the expense and expected shortage of natural resources in the near future, the scarcity of developed research in the region, and the available potential due to climate and governmental vision, the results and analysis of this study can be used as a key indicator for selecting the most effective variables or devices during the design and construction phases.

Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving

Procedia PDF Downloads 113
278 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement

Authors: Rajkumar Ghosh

Abstract:

Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.

Keywords: earthquake, out-of-sequence thrust, disaster, human life

Procedia PDF Downloads 77
277 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells

Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez

Abstract:

Intercellular communication is a necessary condition for cellular functions, allowing a group of cells to survive as a population. Through this interaction, cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of approaches, covering a spectrum from mathematical models, e.g., ordinary differential equations, statistical methods, and numerical methods, to computational models, e.g., process algebras for modeling behavior and variation in molecular systems. Based on these models, different simulation tools have been developed, from mathematical to computational. The study of cellular and molecular processes in cancer has likewise found valuable support in simulation tools that, covering the spectrum mentioned above, have enabled in silico experimentation on this phenomenon at the cellular and molecular level. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using Cellulat, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB, and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms involving two or more cells, which is essential in the scenario discussed in this work. During the development of this work we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells, and in this way propose key molecules that may prevent the arrival of malignant signals at the cells that surround the tumor cells. In this manner, we could identify the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. We verified, through in silico experiments, how the inhibition of this signaling pathway prevents the cells surrounding a cancerous cell from being transformed.
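
Gillespie's algorithm, the stochastic engine cited above, can be shown in a few lines on a toy two-species system; the reaction network and rates below are illustrative, not Cellulat's actual signaling model.

```python
# Minimal Gillespie SSA for the toy reversible reaction S <-> P.
import numpy as np

rng = np.random.default_rng(5)
k1, k2 = 0.1, 0.05  # rate constants for S -> P and P -> S
s, p = 100, 0       # initial molecule counts
t, t_end = 0.0, 50.0

while t < t_end:
    a1, a2 = k1 * s, k2 * p         # reaction propensities
    a0 = a1 + a2
    if a0 == 0:
        break
    t += rng.exponential(1.0 / a0)  # exponentially distributed waiting time
    if rng.random() < a1 / a0:      # pick which reaction fires
        s, p = s - 1, p + 1
    else:
        s, p = s + 1, p - 1

print(f"t = {t:.1f}: S = {s}, P = {p}")
```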

Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation

Procedia PDF Downloads 249