Search results for: power production performance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22954

784 The Effect of Configuration Space and Visual Perception in Public Space Usage at Villa Bukit Tidar Housing in Malang City

Authors: Aisyiyah Fauziah Rahmah

Abstract:

Urban areas generally grow rapidly and frequently face problems of comfort in public space usage. The density of population and the high level of activity in urban areas also tend to worsen residents' lifestyles and social relationships and can lead to stress. Streets and green spaces (parks) are often the only public spaces in a residential area that serve as places to build social activity and to meet and interact with other residents. The high level of activity and social interaction that occurs affects the spatial arrangement and makes the space structure of housing more complex. Ease of access to public space is the reason many residents prefer to carry out social activities there. Hillier, in Carmona et al. (2003), explains that the pattern and intensity of individual movement are influenced by the configuration of space; the space structure can even be regarded as the single most influential determinant of movement in space. Whyte, in Zhang and Lawson (2009), also suggests that factors such as seats, trees, water, and the legibility of space encourage people to stay in public outdoor space, and these activities can in turn attract further activities. Villa Bukit Tidar is a housing estate in Lowokwaru District, the most populous district of Malang City, so social activity there is also high. It has a natural and recreational concept and offers a view of Malang City from the heights. This potential can attract residents to stay in public outdoor spaces and carry out activities there. This study examines whether ease of access to public space and visual satisfaction in the Villa Bukit Tidar housing estate affect public space usage. The study was carried out by observing street and plot patterns to determine the configuration of space of the Villa Bukit Tidar housing estate through connectivity and integration values obtained from space syntax analysis. Questionnaires were also distributed to determine the perceived satisfaction and importance of the visual conditions of the public spaces through Importance-Performance Analysis (IPA). The results indicate that the public spaces in Villa Bukit Tidar housing with high connectivity and integration are perceived as visually satisfying and show higher public space usage than those with low connectivity and integration, which are perceived as visually unsatisfying.
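To illustrate the Importance-Performance Analysis step described above, the sketch below classifies visual attributes of a public space into the classic IPA quadrants, splitting at the grand means of importance and performance. The attribute names and ratings are illustrative assumptions, not data from the study.

```python
import pandas as pd

# Hypothetical questionnaire summary: mean importance and performance (satisfaction)
# ratings for visual attributes of the public spaces (1-5 Likert scale).
ratings = pd.DataFrame({
    "attribute":   ["greenery", "city view", "seating", "lighting", "cleanliness"],
    "importance":  [4.6, 4.4, 3.9, 4.1, 4.5],
    "performance": [4.2, 4.5, 3.1, 3.4, 3.8],
})

imp_mean = ratings["importance"].mean()
perf_mean = ratings["performance"].mean()

def ipa_quadrant(row):
    # Classic IPA grid: quadrants are defined by the grand means of both axes.
    if row["importance"] >= imp_mean and row["performance"] >= perf_mean:
        return "Keep up the good work"
    if row["importance"] >= imp_mean:
        return "Concentrate here"
    if row["performance"] >= perf_mean:
        return "Possible overkill"
    return "Low priority"

ratings["quadrant"] = ratings.apply(ipa_quadrant, axis=1)
print(ratings)
```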

Keywords: configuration space, visual perception, social activities, public space usage

Procedia PDF Downloads 476
783 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
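As an illustration of the workflow described above, the following sketch fits Ordinary Kriging to progressively denser random subsets of wells and reports the mean kriging variance as a rough proxy for prediction uncertainty. It uses the PyKrige library; the coordinates and water levels are synthetic placeholders, not the Berambadi data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 121 observation wells: easting/northing (m) and
# groundwater level (m below ground) with a gentle spatial trend plus noise.
x = rng.uniform(0, 10_000, 121)
y = rng.uniform(0, 10_000, 121)
level = 5 + 0.0004 * x + 0.0002 * y + rng.normal(0, 0.5, 121)

grid_x = np.linspace(0, 10_000, 50)
grid_y = np.linspace(0, 10_000, 50)

def krige_with_density(fraction):
    """Fit Ordinary Kriging on a random subset of wells and interpolate on a grid."""
    idx = rng.choice(len(x), size=int(fraction * len(x)), replace=False)
    ok = OrdinaryKriging(x[idx], y[idx], level[idx],
                         variogram_model="spherical", verbose=False)
    z_hat, kriging_var = ok.execute("grid", grid_x, grid_y)
    return z_hat, kriging_var

for fraction in (0.15, 0.25, 0.50, 1.00):
    _, kriging_var = krige_with_density(fraction)
    print(f"{int(fraction * 100):3d}% of wells -> mean kriging variance "
          f"{kriging_var.mean():.3f}")
```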

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 41
782 Braille Lab: A New Design Approach for Social Entrepreneurship and Innovation in Assistive Tools for the Visually Impaired

Authors: Claudio Loconsole, Daniele Leonardis, Antonio Brunetti, Gianpaolo Francesco Trotta, Nicholas Caporusso, Vitoantonio Bevilacqua

Abstract:

Unfortunately, many people still lack access to written communication, specifically reading and writing. Among them, people who are blind or visually impaired face several difficulties in accessing the world compared to the sighted. Indeed, despite technological advances and cost reductions, assistive devices such as Braille-based input/output systems that enable reading and writing texts (e.g., personal notes and documents) remain expensive. Consequently, the affordability of assistive technology is fundamental in supporting the visually impaired in communication, learning, and social inclusion. This, in turn, has serious consequences in terms of equal access to opportunities, freedom of expression, and actual and independent participation in a society designed for the sighted. Moreover, the visually impaired experience difficulties in recognizing objects and interacting with devices in activities of daily living; it is no accident that Braille indications are commonly found only on medicine boxes and elevator keypads. Several software applications for the automatic translation of written text into speech (e.g., Text-To-Speech, TTS) enable reading portions of documents. However, beyond simple tasks, TTS software is often unsuitable for understanding complicated text that requires dwelling on specific portions (e.g., mathematical formulas or Greek text). In addition, the experience of reading and writing text is completely different, both in terms of engagement and from an educational perspective. Statistics on the employment rate of blind people show that learning to read and write provides the visually impaired with up to 80% more opportunities of finding a job. Especially at higher educational levels, where the ability to digest very complex text is key, the accessibility and availability of Braille play a fundamental role in reducing the drop-out rate of the visually impaired, thus affecting the effectiveness of the constitutional right of access to education. In this context, the Braille Lab project aims to address these social needs by making affordability a design requirement for assistive tools for visually impaired people. In detail, our awarded project focuses on a technological innovation of the operating principle of existing assistive tools for the visually impaired while leaving the human-machine interface unchanged. This can result in a significant reduction of production costs and consequently of selling prices, thus representing an important opportunity for social entrepreneurship. The first two assistive tools designed within the Braille Lab project following the proposed approach aim to let users print documents and handouts personally and to read texts written in Braille using a refreshable Braille display, respectively. The former, named ‘Braille Cartridge’, represents an alternative solution for printing in Braille and consists of an electronically controlled dispensing cartridge that can be integrated into traditional ink-jet printers, in order to leverage the efficiency and cost of a mechanical structure that is already in use. The latter, named ‘Braille Cursor’, is an innovative Braille display featuring a substantial technological innovation: a single cursor that virtualizes the Braille cells, thus limiting the number of active pins needed to render Braille characters.

Keywords: human rights, social challenges and technology innovations, visually impaired, affordability, assistive tools

Procedia PDF Downloads 258
781 AI-Powered Conversation Tools - Chatbots: Opportunities and Challenges That Present to Academics within Higher Education

Authors: Jinming Du

Abstract:

With the COVID-19 pandemic that began in 2020, many higher education institutions and education systems turned to hybrid or fully online distance courses to maintain social distance and provide a safe virtual space for learning and teaching. However, the majority of faculty members were not well prepared for the shift to blended or distance learning, and communication frustrations are prevalent in both hybrid and fully distance courses. A systematic literature review was conducted through a comprehensive analysis of 1688 publications focused on the adoption of chatbots in education. This study aimed to explore instructors' experiences with chatbots in online and blended undergraduate English courses. Language learners are overwhelmed by the variety of information offered by many online sites, and recently emerged chatbots (e.g., ChatGPT) perform somewhat better than earlier technologies such as tapes, video recorders, and websites. The field of chatbots has been intensively researched, and new methods have been developed to demonstrate how students can best learn and practice a new language in the target language. However, while chatbots have been used as effective tools for communicating with business customers, in consulting and targeted services, and in the medical field, they have not yet been fully explored and implemented in language education. This issue is challenging for language teachers, who need to study it and conduct research carefully to clarify it. Pedagogical chatbots may alleviate the perceived lack of communication and feedback from instructors by interacting naturally with students and scaffolding learners' understanding, much as educators do. However, educators and instructors often lack the proficiency to operate this emerging AI chatbot technology effectively and require comprehensive study or structured training to attain competence. There is thus a gap between language teachers' perceptions and recent advances in the application of AI chatbots to language learning. The results of the study found that although teachers felt that chatbots did the best job of giving feedback, teachers needed additional training to be able to give better instructions and to assist in teaching. Teachers generally perceive the use of chatbots as offering substantial assistance to English language instruction.

Keywords: artificial intelligence in education, chatbots, education and technology, education system, pedagogical chatbot, chatbots and language education

Procedia PDF Downloads 51
780 Building Academic Success and Resilience in Social Work Students: An Application of Self-Determination Theory

Authors: Louise Bunce, Jill Childs, Adam J. Lonsdale, Naomi King

Abstract:

A major concern for the social work profession is the frequency of burnout and the high turnover of staff. The characteristic of resilience has been identified as playing a crucial role in social workers' ability to have a satisfying and successful career; thus a critical role for social work education is to develop resilience in social work students. We currently need to know more about how to train resilient social workers who will also raise the academic standing of the profession. The specific aim of this research was to quantify three characteristics that may contribute to resilience and academic success among student social workers, in order to mitigate the problems of burnout and low academic standing. These three characteristics were competence (effectiveness at mastering the environment), autonomy (a sense of control and free will), and relatedness (interacting and connecting with others), as specified in Self-Determination Theory (SDT). When these three needs are satisfied, we experience higher degrees of motivation to succeed and wellbeing. Thus, when these three needs are met in social work students, they have the potential to raise academic standards and promote the wellbeing characteristics that contribute to the development of resilience. The current study tested the hypothesis that higher levels of autonomy, competence, and relatedness, as defined by SDT, would predict levels of academic success and resilience in social work students. Two hundred and ten social work students studying at a number of universities completed well-established questionnaires to assess autonomy, competence, relatedness, level of academic performance, and resilience (the Brief Resilience Scale). In this scale, students rated their agreement with items such as ‘I bounce back quickly after hard times’ and ‘I usually come through difficult times with little struggle’. After controlling for various factors, including age, gender, ethnicity, and course (undergraduate or postgraduate), preliminary analysis revealed that the components of SDT provided useful predictive value for academic success and resilience. In particular, autonomy and competence were useful predictors of academic success, while relatedness was a particularly useful predictor of resilience. This study demonstrates that SDT provides a valuable framework for understanding what predicts academic success and resilience among social work students. This is relevant because the psychological needs for autonomy, competence, and relatedness can be affected by external social and cultural pressures, and thus can be supported by the right type of teaching practices and educational environments. These findings contribute to the growing evidence base to help build an academically strong and resilient social work student body and workforce.
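A minimal sketch of the kind of analysis described above: regressing academic performance and resilience on the three SDT needs while controlling for demographics, using statsmodels. The file name and column names are assumptions for illustration; the study reports only that well-established questionnaires and these control variables were used.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per student with SDT need-satisfaction scores,
# demographics, academic mark, and Brief Resilience Scale score.
df = pd.read_csv("sdt_social_work.csv")  # assumed column names below

controls = "age + C(gender) + C(ethnicity) + C(course_level)"

# Predict academic success from the three SDT needs, controlling for demographics.
academic_model = smf.ols(
    f"academic_mark ~ autonomy + competence + relatedness + {controls}", data=df
).fit()

# Same predictors for resilience (Brief Resilience Scale score).
resilience_model = smf.ols(
    f"resilience ~ autonomy + competence + relatedness + {controls}", data=df
).fit()

print(academic_model.summary())
print(resilience_model.summary())
```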

Keywords: education, resilience, self-determination theory, student social workers

Procedia PDF Downloads 320
779 Bridging Minds and Nature: Revolutionizing Elementary Environmental Education Through Artificial Intelligence

Authors: Hoora Beheshti Haradasht, Abooali Golzary

Abstract:

Environmental education plays a pivotal role in shaping the future stewards of our planet. Leveraging the power of artificial intelligence (AI) in this endeavor presents an innovative approach to captivate and educate elementary school children about environmental sustainability. This paper explores the application of AI technologies in designing interactive and personalized learning experiences that foster curiosity, critical thinking, and a deep connection to nature. By harnessing AI-driven tools, virtual simulations, and personalized content delivery, educators can create engaging platforms that empower children to comprehend complex environmental concepts while nurturing a lifelong commitment to protecting the Earth. With the pressing challenges of climate change and biodiversity loss, cultivating an environmentally conscious generation is imperative. Integrating AI in environmental education revolutionizes traditional teaching methods by tailoring content, adapting to individual learning styles, and immersing students in interactive scenarios. This paper delves into the potential of AI technologies to enhance engagement, comprehension, and pro-environmental behaviors among elementary school children. Modern AI technologies, including natural language processing, machine learning, and virtual reality, offer unique tools to craft immersive learning experiences. Adaptive platforms can analyze individual learning patterns and preferences, enabling real-time adjustments in content delivery. Virtual simulations, powered by AI, transport students into dynamic ecosystems, fostering experiential learning that goes beyond textbooks. AI-driven educational platforms provide tailored content, ensuring that environmental lessons resonate with each child's interests and cognitive level. By recognizing patterns in students' interactions, AI algorithms curate customized learning pathways, enhancing comprehension and knowledge retention. Utilizing AI, educators can develop virtual field trips and interactive nature explorations. Children can navigate virtual ecosystems, analyze real-time data, and make informed decisions, cultivating an understanding of the delicate balance between human actions and the environment. While AI offers promising educational opportunities, ethical concerns must be addressed. Safeguarding children's data privacy, ensuring content accuracy, and avoiding biases in AI algorithms are paramount to building a trustworthy learning environment. By merging AI with environmental education, educators can empower children not only with knowledge but also with the tools to become advocates for sustainable practices. As children engage in AI-enhanced learning, they develop a sense of agency and responsibility to address environmental challenges. The application of artificial intelligence in elementary environmental education presents a groundbreaking avenue to cultivate environmentally conscious citizens. By embracing AI-driven tools, educators can create transformative learning experiences that empower children to grasp intricate ecological concepts, forge an intimate connection with nature, and develop a strong commitment to safeguarding our planet for generations to come.

Keywords: artificial intelligence, environmental education, elementary children, personalized learning, sustainability

Procedia PDF Downloads 65
778 The Challenges of Well Integrity on Plug and Abandoned Wells for Offshore Co₂ Storage Site Containment

Authors: Siti Noor Syahirah Mohd Sabri

Abstract:

The oil and gas industry is committed to net zero carbon emissions because the consequences of climate change could be catastrophic unless they are addressed very soon. One way of reducing CO₂ emissions is to inject the gas into a depleted reservoir buried underground; this greenhouse gas reduction technique significantly reduces the CO₂ released into the atmosphere. In general, depleted oil and gas reservoirs provide readily available sites for the storage of CO₂ in offshore areas, mainly because the hydrocarbons have been optimally produced and voids exist for effective CO₂ storage, making them good candidates for CO₂ injection well locations. Geological storage sites are often evaluated in terms of capacity, injectivity, and containment. Leakage through the cap rock or existing wells is the main concern in depleted fields. In order to develop these fields as CO₂ storage sites, the long-term integrity of the wells drilled in these oil and gas fields must be ascertained to ensure good CO₂ containment. Well integrity is often defined as the ability to contain fluids without significant leakage throughout the project lifecycle. Most plugged and abandoned (P&A) wells in Peninsular Malaysia were drilled 20-30 years ago and were not designed to withstand downhole conditions with >50 vol% CO₂ and CO₂/H₂O mixtures. In addition, corrosion-resistant alloy (CRA) tubulars and CO₂-resistant cement were not used during well construction. The reservoir pressure and temperature conditions may have further degraded the material strength and elevated the corrosion rate. Understanding all the uncertainties that may have affected cement-casing bonds, such as the quality of the cement behind the casing, subsidence effects, and corrosion rates, is the first step toward well integrity evaluation. Secondly, all the uncertainties involved need to be properly quantified to ensure that the long-term objectives of underground CO₂ storage are achieved. This paper discusses the challenges associated with estimating the performance of well barrier elements in existing P&A wells. Risk ranking of the existing P&A wells is to be carried out in order to ensure that the integrity of the storage site is maintained for long-term CO₂ storage. High-risk existing P&A wells are to be re-entered to restore well integrity and to reduce future leakage. In addition, the requirement to design a fit-for-purpose monitoring and mitigation technology package for potential CO₂ leakage/seepage in the marine environment is discussed. This holistic approach will help ensure that integrity is maintained and that CO₂ remains contained underground for years to come.

Keywords: CCUS, well integrity, co₂ storage, offshore

Procedia PDF Downloads 81
777 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques

Authors: Stefan K. Behfar

Abstract:

The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
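To make the data representation and sampling steps concrete, the sketch below keeps each transaction with a fixed probability and then builds a small transaction graph with networkx, linking consecutive transactions of the same sender as one concrete reading of the "temporal relationship" edges described above. The transaction tuples, sampling rate, and edge definition are illustrative assumptions, not the paper's exact pipeline (which additionally uses GCNs and distributed computing).

```python
import random
import networkx as nx

# Hypothetical transaction records: (tx_hash, block_number, sender, receiver, value),
# e.g. exported beforehand from an Ethereum node or a public dataset.
transactions = [
    ("0xaa...", 17_000_000, "0xA1...", "0xB2...", 1.2),
    ("0xbb...", 17_000_001, "0xA1...", "0xC3...", 0.4),
    ("0xcc...", 17_000_002, "0xB2...", "0xC3...", 0.9),
    # ...
]

# Probabilistic sampling: keep each transaction independently with probability p,
# so the subset remains a statistically representative thinning of the full ledger.
p = 0.10
sampled = [tx for tx in transactions if random.random() < p]
sampled.sort(key=lambda tx: tx[1])  # order by block number

# Graph representation: each sampled transaction is a node; a directed edge links
# consecutive transactions of the same sender (a simple temporal relationship).
G = nx.DiGraph()
last_tx_of_sender = {}
for tx_hash, block, sender, receiver, value in sampled:
    G.add_node(tx_hash, block=block, sender=sender, receiver=receiver, value=value)
    if sender in last_tx_of_sender:
        G.add_edge(last_tx_of_sender[sender], tx_hash)
    last_tx_of_sender[sender] = tx_hash

print(G.number_of_nodes(), "transaction nodes,", G.number_of_edges(), "temporal edges")
```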

Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing

Procedia PDF Downloads 60
776 The Characterization and Optimization of Bio-Graphene Derived From Oil Palm Shell Through Slow Pyrolysis Environment and Its Electrical Conductivity and Capacitance Performance as Electrodes Materials in Fast Charging Supercapacitor Application

Authors: Nurhafizah Md. Disa, Nurhayati Binti Abdullah, Muhammad Rabie Bin Omar

Abstract:

This research intends to address an existing knowledge gap arising from the lack of substantial studies on fabricating and characterizing bio-graphene from Oil Palm Shell (OPS) by means of pre-treatment and slow pyrolysis. By fabricating bio-graphene from OPS, a novel material can be obtained and used for graphene-based research. The produced bio-graphene is expected to exhibit the characteristic hexagonal graphene pattern and graphene properties comparable to previously fabricated graphene. The OPS will be pre-treated with zinc chloride (ZnCl₂) and iron(III) chloride (FeCl₃) and then thermally converted to bio-graphene by slow pyrolysis. The pyrolyzer's final temperature, heating rate, and residence time will be set to 550 °C, 5 °C/min, and 1 hour, respectively. Finally, the charred product will be washed with hydrochloric acid (HCl) to remove metal residue. The obtained bio-graphene will undergo different analyses to investigate the physicochemical properties of the two-dimensional layer of sp² hybridized carbon atoms in a hexagonal lattice structure. The analyses to be performed are Raman spectroscopy (RAMAN), UV-visible spectroscopy (UV-VIS), Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), and X-Ray Diffraction (XRD). RAMAN is used to analyze the three key peaks found in graphene, namely the D, G, and 2D peaks, which indicate the quality of the bio-graphene structure and the number of layers generated. To corroborate the layer-number results, UV-VIS may be used to confirm the layer analysis and to characterize the type of graphene produced. A clear physical picture of the graphene can be obtained by TEM analysis, to study the structural quality and layer condition, and by SEM, to study the surface quality and repeating porosity pattern. Lastly, XRD can establish the crystallinity of the produced bio-graphene and, at the same time, the degree of oxygen contamination and thus the pristineness of the graphene. In conclusion, this study obtains bio-graphene from OPS as a novel material through pre-treatment with ZnCl₂ and FeCl₃ and slow pyrolysis, and provides a characterization analysis of the bio-graphene that will be beneficial for future graphene-related applications. The characterization is expected to yield findings similar to those of previous papers, confirming the graphene quality.

Keywords: oil palm shell, bio-graphene, pre-treatment, slow pyrolysis

Procedia PDF Downloads 74
775 Local Binary Patterns-Based Statistical Data Analysis for Accurate Soccer Match Prediction

Authors: Mohammad Ghahramani, Fahimeh Saei Manesh

Abstract:

Winning a soccer game is based on thorough and deep analysis of the ongoing match. On the other hand, giant gambling companies are in vital need of such analysis to reduce their losses against their customers. In this research work, we perform deep, real-time analysis of every soccer match around the world, distinguishing our work from others by focusing on particular seasons, teams, and partial analytics. Our contributions are presented in the platform called “Analyst Masters.” First, we introduce the various sources of information available for soccer analysis for teams around the world, which helped us record live statistical data and information from more than 50,000 soccer matches a year. Our second and main contribution is our proposed in-play performance evaluation. The third contribution is the development of new features from stable soccer matches. The statistics of soccer matches and their odds, both pre-match and in-play, are represented in image format versus time, including halftime. Local Binary Patterns (LBP) are then employed to extract features from these images. Our analyses reveal remarkably interesting features and rules once a soccer match has reached sufficient stability. For example, our “8-minute rule” implies that if Team A scores a goal and can maintain the result for at least 8 minutes, then a stable match would end in their favor. We could also make accurate pre-match predictions of whether fewer or more than 2.5 goals would be scored. We use Gradient Boosting Trees (GBT) to extract highly related features. Once the features are selected from this pool of data, decision trees decide whether the match is stable. A stable match is then passed to a post-processing stage that checks its properties, such as bettors' and punters' behavior and its statistical data, before issuing the prediction. The proposed method was trained using 140,000 soccer matches and tested on more than 100,000 samples, achieving 98% accuracy in selecting stable matches. Our database of 240,000 matches shows that one can earn over 20% betting profit per month using Analyst Masters. Such consistent profit outperforms human experts and shows the inefficiency of the betting market: top soccer tipsters achieve 50% accuracy and 8% monthly profit on average, and only on regional matches. Both our collected database of more than 240,000 soccer matches gathered since 2012 and our algorithm would greatly benefit coaches and punters seeking accurate analysis.
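To make the feature-extraction step concrete, the sketch below computes a uniform-LBP histogram from a match-statistics "image" (statistics plotted against match time) and feeds it to a gradient-boosting classifier, using scikit-image and scikit-learn. The array shapes, labels, and the histogram summary are illustrative assumptions; the abstract does not specify these details.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import GradientBoostingClassifier

def lbp_histogram(stat_image, points=8, radius=1):
    """Summarize a match-statistics 'image' (rows = statistics, columns = minutes)
    as a histogram of uniform local binary patterns."""
    img = (stat_image * 255).astype(np.uint8)   # LBP expects an integer image
    lbp = local_binary_pattern(img, points, radius, method="uniform")
    n_bins = points + 2                         # number of uniform patterns
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Placeholder data: one image per match (values scaled to [0, 1]) and a label
# for whether the match turned out to be "stable".
rng = np.random.default_rng(0)
images = rng.random((200, 32, 90))    # 200 matches, 32 statistics x 90 minutes
stable = rng.integers(0, 2, 200)      # 1 = stable, 0 = unstable (placeholder)

X = np.vstack([lbp_histogram(img) for img in images])
clf = GradientBoostingClassifier().fit(X, stable)
print("training accuracy:", clf.score(X, stable))
```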

Keywords: soccer, analytics, machine learning, database

Procedia PDF Downloads 227
774 Epidemiological and Clinical Characteristics of Five Rare Pathological Subtypes of Hepatocellular Carcinoma

Authors: Xiaoyuan Chen

Abstract:

Background: This study aimed to characterize the epidemiological and clinical features of five rare subtypes of hepatocellular carcinoma (HCC) and to create a competing-risk nomogram for predicting cancer-specific survival. Methods: This study used the Surveillance, Epidemiology, and End Results database to analyze the clinicopathological data of 50,218 patients with classic HCC and five rare subtypes (ICD-O-3 histology codes 8170/3-8175/3) between 2004 and 2018. The annual percent change (APC) was calculated using Joinpoint regression, and a nomogram was developed based on multivariable competing-risk survival analyses. The prognostic performance of the nomogram was evaluated using the Akaike information criterion, the Bayesian information criterion, the C-index, calibration curves, and the area under the receiver operating characteristic curve. Decision curve analysis was used to assess the clinical value of the models. Results: The incidence of scirrhous carcinoma showed a decreasing trend (APC = -6.8%, P = 0.025), while the morbidity of the other rare subtypes remained stable from 2004 to 2018. Incidence-based mortality plateaued in all subtypes during the period. Clear cell carcinoma was the most common subtype (n = 551, 1.1%), followed by fibrolamellar (n = 241, 0.5%), scirrhous (n = 82, 0.2%), spindle cell (n = 61, 0.1%), and pleomorphic (n = 17, ~0%) carcinomas. Patients with fibrolamellar carcinoma were younger and more likely to have non-cirrhotic livers and better prognoses. Scirrhous carcinoma shared almost the same macroscopic clinical characteristics and outcomes as classic HCC. Clear cell carcinoma tended to occur in the elderly male Asia-Pacific population, and more than half of these tumors were large HCC (size > 5 cm). Sarcomatoid (including spindle cell and pleomorphic) carcinoma was associated with larger tumor size, poorer differentiation, and more dismal prognoses. Pathological subtype, T stage, M stage, surgery, alpha-fetoprotein, and cancer history were identified as independent predictors in patients with rare subtypes. The nomogram showed good calibration, discrimination, and net benefit in clinical practice. Conclusion: The rare subtypes of HCC had distinct clinicopathological features and biological behaviors compared with classic HCC. Our findings could provide a valuable reference for clinicians. The constructed nomogram could accurately predict prognoses, which is beneficial for individualized management.
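As a rough illustration of the modelling step, the sketch below fits a cause-specific Cox model for cancer-specific death with the lifelines library, treating deaths from other causes as censored. This is a simplified stand-in for the multivariable competing-risk analysis and nomogram described above (which account for competing events directly); the file name, column names, and encodings are assumptions.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical SEER-style extract: one row per patient. Column names and the
# numeric encoding of the stage variables are assumptions for illustration.
df = pd.read_csv("hcc_rare_subtypes.csv")
df["cancer_death"] = (df["cause_of_death"] == "cancer").astype(int)  # other deaths -> censored

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_months", "cancer_death", "age", "t_stage", "m_stage",
        "surgery", "afp_elevated", "prior_cancer_history"]],
    duration_col="follow_up_months",
    event_col="cancer_death",
)
cph.print_summary()  # hazard ratios for the candidate predictors
```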

Keywords: hepatocellular carcinoma, pathological subtype, fibrolamellar carcinoma, scirrhous carcinoma, clear cell carcinoma, spindle cell carcinoma, pleomorphic carcinoma

Procedia PDF Downloads 61
773 Bioinspired Green Synthesis of Magnetite Nanoparticles Using Room-Temperature Co-Precipitation: A Study of the Effect of Amine Additives on Particle Morphology in Fluidic Systems

Authors: Laura Norfolk, Georgina Zimbitas, Jan Sefcik, Sarah Staniland

Abstract:

Magnetite nanoparticles (MNP) have been an area of increasing research interest due to their extensive applications in industry, such as in carbon capture, water purification, and crucially, the biomedical industry. The use of MNP in the biomedical industry is rising, with studies on their effect as Magnetic resonance imaging contrast agents, drug delivery systems, and as hyperthermic cancer treatments becoming prevalent in the nanomaterial research community. Particles used for biomedical purposes must meet stringent criteria; the particles must have consistent shape and size between particles. Variation between particle morphology can drastically alter the effective surface area of the material, making it difficult to correctly dose particles that are not homogeneous. Particles of defined shape such as octahedral and cubic have been shown to outperform irregular shaped particles in some applications, leading to the need to synthesize particles of defined shape. In nature, highly homogeneous MNP are found within magnetotactic bacteria, a unique bacteria capable of producing magnetite nanoparticles internally under ambient conditions. Biomineralisation proteins control the properties of the MNPs, enhancing their homogeneity. One of these proteins, Mms6, has been successfully isolated and used in vitro as an additive in room-temperature co-precipitation reactions (RTCP) to produce particles of defined mono-dispersed size & morphology. When considering future industrial scale-up it is crucial to consider the costs and feasibility of an additive, as an additive that is not readily available or easily synthesized at a competitive price will not be sustainable. As such, additives selected for this research are inspired by the functional groups of biomineralisation proteins, but cost-effective, environmentally friendly, and compatible with scale-up. Diethylenetriamine (DETA), triethylenetetramine (TETA), tetraethylenepentamine (TEPA), and pentaethylenehexamine (PEHA) have been successfully used in RTCP to modulate the properties of particles synthesized, leading to the formation of octahedral nanoparticles with no use of organic solvents, heating, or toxic precursors. By extending this principle to a fluidic system, ongoing research will reveal whether the amine additives can also exert morphological control in an environment which is suited toward higher particle yield. Two fluidic systems have been employed; a peristaltic turbulent flow mixing system suitable for the rapid production of MNP, and a macrofluidic system for the synthesis of tailored nanomaterials under a laminar flow regime. The presence of the amine additives in the turbulent flow system in initial results appears to offer similar morphological control as observed under RTCP conditions, with higher proportions of octahedral particles formed. This is a proof of concept which may pave the way to green synthesis of tailored MNP on an industrial scale. Mms6 and amine additives have been used in the macrofluidic system, with Mms6 allowing magnetite to be synthesized at unfavourable ferric ratios, but no longer influencing particle size. This suggests this synthetic technique while still benefiting from the addition of additives, may not allow additives to fully influence the particles formed due to the faster timescale of reaction. The amine additives have been tested at various concentrations, the results of which will be discussed in this paper.

Keywords: bioinspired, green synthesis, fluidic, magnetite, morphological control, scale-up

Procedia PDF Downloads 107
772 Assessment of Designed Outdoor Playspaces as Learning Environments and Its Impact on Child’s Wellbeing: A Case of Bhopal, India

Authors: Richa Raje, Anumol Antony

Abstract:

Play is the foremost stepping stone of childhood development. It is an essential aspect of a child's development and learning because it creates meaningful, enduring connections with the environment and improves children's performance. Children's proficiencies keep changing over the course of their growth. Novelty in activities kindles the senses, fuels the love of exploration, helps overcome linguistic barriers, and supports physiological development, which in turn allows children to discover their own capabilities, spontaneity, curiosity, cognitive skills, and creativity while learning through play. This paper aims to understand the learning embedded in play, which is the most essential underpinning of the outdoor play area. It also assesses the current trend of playground design, in which grounds are merely filled with equipment. It attempts to derive a relation between the natural environment, children's activities, and the emotions and senses evoked in the process. One of the major concerns with outdoor play is that it is limited to areas with similar kinds of equipment, making play highly regimented and monotonous. This problem is often driven by the strict timetables of our education system, which hardly accommodate play. For these reasons, play areas remain neglected in terms of design that supports both learning and wellbeing. Poorly designed spaces fail to inspire the physical, emotional, social, and psychological development of young children. Currently, the play space has been condensed to an enclosed playground, driveway, or backyard, which confines children's ability to push their boundaries. The paper reports a study of children aged 5 to 11 years in which their behaviors during interactions in a playground were mapped and analyzed. The theory of affordance is applied to various outdoor play areas in order to study and understand the children's environment and how variedly they perceive and use it. A higher degree of affordance shall form the basis for designing the activities suitable for play spaces. It was observed during play that children chose certain spaces of interest, the majority of them natural, over artificial equipment. Activities such as rolling on the ground, jumping from a height, molding earth, and hiding behind trees suggest that, despite the equipment provided, children have an affinity towards nature. Therefore, designers need to take a cue from children's behavior and practices in order to design meaningful spaces for them, so that children have the freedom to test their limits.

Keywords: children, landscape design, learning environment, nature and play, outdoor play

Procedia PDF Downloads 113
771 Utilization of Rice Husk Ash with Clay to Produce Lightweight Coarse Aggregates for Concrete

Authors: Shegufta Zahan, Muhammad A. Zahin, Muhammad M. Hossain, Raquib Ahsan

Abstract:

Rice Husk Ash (RHA) is one of the agricultural waste byproducts widely available in the world and contains a large amount of silica. In Bangladesh, stones cannot easily be used as coarse aggregate in infrastructure works, as they are not locally available and need to be imported from abroad. As a result, bricks are mostly used as coarse aggregate in concrete because they are cheaper and easily produced locally. Clay is the raw material for producing brick. Due to rapid urban growth and industrialization, the demand for brick is increasing, which leads to depletion of topsoil. This study aims to produce lightweight block aggregates with sufficient strength utilizing RHA at low cost and to use them as an ingredient of concrete. Because of its pozzolanic behavior, RHA can be utilized to produce better-quality block aggregates at lower cost by replacing part of the clay content of the bricks. The whole study can be divided into three parts. In the first part, characterization tests on RHA and clay were performed to determine their properties. Six different types of RHA from different mills were characterized by XRD and SEM analysis, and their fineness was determined by a fineness test. The XRD results confirmed the amorphous state of the RHA. The characterization tests for clay identified the sample as “silty clay” with a specific gravity of 2.59 and 14% optimum moisture content. In the second part, blocks were produced with the six different types of RHA in different volumetric combinations with clay. The mixtures were manually compacted in molds before being oven dried at 120 °C for 7 days. The dried blocks were then placed in a furnace at 1200 °C to produce the final blocks. Loss-on-ignition, apparent density, crushing strength, efflorescence, and absorption tests were conducted on the blocks to compare their performance with that of bricks. For 40% RHA, the crushing strength was found to be 60 MPa, whereas the crushing strength of brick was 48.1 MPa. In the third part, the crushed blocks were used as coarse aggregate in concrete cylinders, which were compared with brick-aggregate concrete cylinders. Specimens were cured for 7 days and 28 days. The highest compressive strength of the block-aggregate cylinders was 26.1 MPa at 7 days and 34 MPa at 28 days, whereas the compressive strengths of the brick-aggregate cylinders were 20 MPa and 30 MPa at 7 and 28 days, respectively. These research findings can help reduce the growing demand for topsoil and also turn a waste product into a valuable one.

Keywords: characterization, furnace, pozzolanic behavior, rice husk ash

Procedia PDF Downloads 100
770 Development of a Mixed-Reality Hands-Free Teleoperated Robotic Arm for Construction Applications

Authors: Damith Tennakoon, Mojgan Jadidi, Seyedreza Razavialavi

Abstract:

With recent advancements of automation in robotics, from self-driving cars to autonomous 4-legged quadrupeds, one industry that has been stagnant is the construction industry. The methodologies used in a modern-day construction site consist of arduous physical labor and the use of heavy machinery, which has not changed over the past few decades. The dangers of a modern-day construction site affect the health and safety of the workers due to performing tasks such as lifting and moving heavy objects and having to maintain unhealthy posture to complete repetitive tasks such as painting, installing drywall, and laying bricks. Further, training for heavy machinery is costly and requires a lot of time due to their complex control inputs. The main focus of this research is using immersive wearable technology and robotic arms to perform the complex and intricate skills of modern-day construction workers while alleviating the physical labor requirements to perform their day-to-day tasks. The methodology consists of mounting a stereo vision camera, the ZED Mini by Stereolabs, onto the end effector of an industrial grade robotic arm, streaming the video feed into the Virtual Reality (VR) Meta Quest 2 (Quest 2) head-mounted display (HMD). Due to the nature of stereo vision, and the similar field-of-views between the stereo camera and the Quest 2, human-vision can be replicated on the HMD. The main advantage this type of camera provides over a traditional monocular camera is it gives the user wearing the HMD a sense of the depth of the camera scene, specifically, a first-person view of the robotic arm’s end effector. Utilizing the built-in cameras of the Quest 2 HMD, open-source hand-tracking libraries from OpenXR can be implemented to track the user’s hands in real-time. A mixed-reality (XR) Unity application can be developed to localize the operator's physical hand motions with the end-effector of the robotic arm. Implementing gesture controls will enable the user to move the robotic arm and control its end-effector by moving the operator’s arm and providing gesture inputs from a distant location. Given that the end effector of the robotic arm is a gripper tool, gripping and opening the operator’s hand will translate to the gripper of the robot arm grabbing or releasing an object. This human-robot interaction approach provides many benefits within the construction industry. First, the operator’s safety will be increased substantially as they can be away from the site-location while still being able perform complex tasks such as moving heavy objects from place to place or performing repetitive tasks such as painting walls and laying bricks. The immersive interface enables precision robotic arm control and requires minimal training and knowledge of robotic arm manipulation, which lowers the cost for operator training. This human-robot interface can be extended to many applications, such as handling nuclear accident/waste cleanup, underwater repairs, deep space missions, and manufacturing and fabrication within factories. Further, the robotic arm can be mounted onto existing mobile robots to provide access to hazardous environments, including power plants, burning buildings, and high-altitude repair sites.
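A highly simplified sketch of the gesture-to-robot mapping described above: a tracked palm position is scaled into the robot's workspace and the pinch strength drives the gripper. All class names, fields, thresholds, and calibration constants here are illustrative assumptions; the actual system uses OpenXR hand tracking inside a Unity application together with a specific industrial arm interface.

```python
from dataclasses import dataclass

@dataclass
class HandPose:
    # Simplified hand state as it might be reported by a hand-tracking API
    # (field names here are illustrative, not the actual OpenXR bindings).
    palm_position: tuple   # (x, y, z) in the headset's tracking frame, metres
    pinch_strength: float  # 0.0 = open hand, 1.0 = fully closed

# Hypothetical calibration between the operator's reachable workspace and the
# robot's workspace; a real system would use a proper frame transform.
SCALE = 1.5               # metres of robot motion per metre of hand motion
HOME = (0.4, 0.0, 0.3)    # robot end-effector home position (m)

def hand_to_robot_target(pose: HandPose, hand_origin=(0.0, -0.2, -0.4)):
    """Map the tracked palm position to an end-effector target and gripper command."""
    target = tuple(h + SCALE * (p - o)
                   for h, p, o in zip(HOME, pose.palm_position, hand_origin))
    gripper_closed = pose.pinch_strength > 0.7  # closing the hand closes the gripper
    return target, gripper_closed

# Example tick of the teleoperation loop (the robot interface is left abstract):
pose = HandPose(palm_position=(0.1, -0.15, -0.35), pinch_strength=0.9)
target, close_gripper = hand_to_robot_target(pose)
print("move end-effector to", target, "| gripper closed:", close_gripper)
```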

Keywords: construction automation, human-robot interaction, hand-tracking, mixed reality

Procedia PDF Downloads 71
769 Supply Chain Design: Criteria Considered in Decision Making Process

Authors: Lenka Krsnakova, Petr Jirsak

Abstract:

Prior research on facility location in the supply chain has mostly focused on improving mathematical models. This is because supply chain design has long been an area of operational research that emphasizes mainly quantitative criteria; qualitative criteria are still highly neglected within supply chain design research. Facility location in the supply chain has become a multi-criteria decision-making problem rather than a single-criterion decision due to changes in market conditions. Thus, both qualitative and quantitative criteria have to be included in the decision-making process. The aim of this study is to emphasize the importance of qualitative criteria as key parameters of relevant mathematical models. We examine which criteria are taken into consideration when Czech companies decide on their facility location. A literature review of the criteria used in the facility location decision-making process creates the theoretical background for the study. Data collection was conducted through a questionnaire survey. The questionnaire was sent to manufacturing and business companies of all sizes (small, medium, and large enterprises) represented in the Czech Republic within the following sectors: automotive, toys, the clothing industry, electronics, and the pharmaceutical industry. A comparison is made between the criteria that prevail in the current research and those considered important by companies in the Czech Republic. Despite the number of articles focused on supply chain design, only a minority of them consider qualitative criteria, and they rarely treat supply chain design as a multi-criteria decision-making problem. Preliminary results of the questionnaire survey indicate that companies in the Czech Republic see qualitative criteria and their impact on facility location decisions as crucial. Qualitative criteria such as company strategy, the quality of the working environment, and future development expectations are confirmed to be considered by Czech companies. This study confirms that qualitative criteria can significantly influence whether a particular location could or could not be the right place for a logistics facility. The research has two major limitations: researchers who focus on improving mathematical models mostly do not mention the criteria that enter the model, and the Czech supply chain managers selected important criteria from a group of 18 available criteria and assigned them importance weights, which does not necessarily mean that these criteria were taken into consideration when the last facility location was chosen, only how they perceive them today. Since the study confirmed the necessity of future research on how qualitative criteria influence the decision-making process about facility location, the authors have already started in-depth interviews with participating companies to reveal how the inclusion of qualitative criteria in the facility location decision-making process influences company performance.

Keywords: criteria influencing facility location, Czech Republic, facility location decision-making, qualitative criteria

Procedia PDF Downloads 314
768 Health Behaviors Related to Preventing Disease of Hand Foot and Mouth Disease of Child Caregivers in Child Development Center Ubon Ratchathani Province, Thailand

Authors: Comsun Thongchai, Vorapoj Promasatayaprot

Abstract:

Background: A child development center is a day care center that gathers large numbers of children in the same area. As a result, it provides a high opportunity for infection, especially gastrointestinal and respiratory infections. Ubon Ratchathani is a province in which an increasing number of hand, foot and mouth disease (HFMD) cases were reported each year between 2014 and 2016. According to a recent investigation report, HFMD occurred in child development centers and kindergartens, which are places where HFMD spreads. This research aimed to investigate the knowledge, attitude, and behavior regarding HFMD prevention among child caregivers in child development centers in Ubon Ratchathani Province. Method: A descriptive study was conducted between April and July 2017. The study instruments were questionnaires and in-depth interviews on practices for the prevention of HFMD and environmental management. The survey respondents were caregivers working in 160 child development centers in the 160 parishes of Ubon Ratchathani Province. The data were analyzed using percentages, means, standard deviations, and the Pearson product-moment correlation coefficient. Result: The results showed that the majority of respondents were female (96.3%), of average age 41 years (68.3%), married (85.7%), and educated to undergraduate level (75.2%). A total of 58.7% had worked as teachers in child development centers for 10 to 14 years, and 71.8% of them had been trained by health workers on HFMD control. Caregivers' knowledge of HFMD prevention was at a high level, with a mean score of 2.76 (S.D. = 0.114). Their attitude was at a moderate level, with a mean score of 2.28 (S.D. = 0.247). On the other hand, the level of environmental management to prevent HFMD was low, with a mean score of 1.34 (S.D. = 0.215). Personal characteristics such as gender, age, educational level, duration of work, and knowledge and attitude regarding HFMD prevention were associated with preventive behaviors at a statistically significant level (p < 0.05). Conclusion: These results should be used to develop knowledge and improve the preventive practices of child caregivers in child development centers through training. The preparation of educational media, HFMD surveillance, and health behavior promotion with community participation need to be supported continuously.

Keywords: preventive behavior, child development center, hand foot mouth disease, Thailand

Procedia PDF Downloads 182
767 Closed Mitral Valvotomy: A Safe and Promising Procedure

Authors: Sushil Kumar Singh, Kumar Rahul, Vivek Tewarson, Sarvesh Kumar, Shobhit Kumar

Abstract:

Objective: Rheumatic mitral stenosis continues to be a major public health problem in developing countries. Diastolic dysfunction occurs when the left atrium (LA) is unable to fill the left ventricle (LV) at normal LA pressures because of impaired relaxation and impaired compliance. The assessment of LV diastolic function and filling pressures is of clinical importance in identifying underlying cardiac disease, guiding its treatment, and assessing prognosis. 2D echocardiography can detect diastolic dysfunction with excellent sensitivity and minimal risk compared with the gold standard of invasive pressure-volume measurements. Material and Method: This was a one-year study of twenty-nine patients with isolated rheumatic severe mitral stenosis. Data were analyzed preoperatively and postoperatively (at one-month follow-up). The transthoracic 2D echocardiographic parameters of diastolic function are transmitral flow, pulmonary venous flow, mitral annular tissue Doppler, and color M-mode Doppler. In our study, mitral valve orifice area, ejection fraction, deceleration time, E/A ratio, E/E' ratio, the myocardial performance index of the left ventricle (Tei index), and mitral inflow propagation velocity were included in the echocardiographic evaluation. The statistical analysis was performed with SPSS Version 15.0 statistical software. Result: Twenty-nine patients underwent successful closed mitral commissurotomy for isolated mitral stenosis. The outcome measures were recorded preoperatively and at one-month follow-up. The majority of patients were in NYHA grade III (69.0%) in the preoperative period, which improved to NYHA grade I (48.3%) after closed mitral commissurotomy. After surgery, the mitral valve area increased from 0.77 ± 0.13 to 2.32 ± 0.26 cm², and the ejection fraction increased from 61.38 ± 4.61 to 64.79 ± 3.22. There was a decrease in deceleration time from 231.55 ± 49.31 to 168.28 ± 14.30 ms, in the E/A ratio from 1.70 ± 0.54 to 0.89 ± 0.39, and in the E/E' ratio from 14.59 ± 3.34 to 8.86 ± 3.03. In addition, there was an improvement in the Tei index from 0.50 ± 0.03 to 0.39 ± 0.06 and in mitral inflow propagation velocity from 47.28 ± 3.71 to 57.86 ± 3.19 cm/sec. During the perioperative period and at follow-up, there was no incidence of severe mitral regurgitation (MR), no thromboembolic event, and no mortality.

Keywords: closed mitral valvotomy, mitral stenosis, open mitral commissurotomy, balloon mitral valvotomy

Procedia PDF Downloads 74
766 Course Perceiving Differences among College Science Students from Various Cultures: A Case Study in the US

Authors: Yuanyuan Song

Abstract:

Background: As we all know, culture plays a pivotal role in the realm of education, influencing study perceptions and outcomes. Nevertheless, there remains a need to delve into how culture specifically impacts the perception of courses. Therefore, the impact of culture on students' perceptions and academic performance is explored in this study. Drawing from cultural constructionism and conflict theories, it is posited that when students hailing from diverse cultures and backgrounds converge in the same classroom, their perceptions of course content may diverge significantly. This study seeks to unravel the tangible disparities and ascertain how cultural nuances shape students' perceptions of classroom content when encountering diverse cultural contexts within the same learning environment. Methodology: Given the diverse cultural backgrounds of students within the US, this study draws upon data collected from a course offered by a US college. In pursuit of answers to these inquiries, a qualitative approach was employed, involving semi-structured interviews conducted in a college-level science class in the US during 2023. The interviews encompassed approximately nine questions, spanning demographic particulars, cultural backgrounds, science learning experiences, academic outcomes, and more. Participants were exclusively drawn from science-related majors, with each student originating from a distinct cultural context. All participants were undergraduates, and most of them were from eighteen to twenty-five years old, totaling six students who attended the class and willingly participated in the interviews. The duration of each interview was approximately twenty minutes. Results: The findings gleaned from the interview data underscore the notable impact of varying cultural contexts on students' perceptions. This study argues that female science students, for instance, are influenced by gender dynamics due to the predominant male presence in science majors, creating an environment where female students feel reticent about expressing themselves in public. Students of East Asian origin exhibit a stronger belief in the efficacy of personal efforts when contrasted with their North American counterparts. Minority students indicated that they grapple with integration into the predominantly white mainstream society, influencing their eagerness to engage in classroom activities that are conducted by white professors. All of them emphasized the importance of learning science.

Keywords: multicultural education, educational sociology, educational equality, STEM education

Procedia PDF Downloads 50
765 Applying the Quad Model to Estimate the Implicit Self-Esteem of Patients with Depressive Disorders: Comparing the Psychometric Properties with the Implicit Association Test Effect

Authors: Yi-Tung Lin

Abstract:

Researchers commonly assess implicit self-esteem with the Implicit Association Test (IAT). The IAT’s measure, often referred to as the IAT effect, indicates the strength of automatic preference for the self relative to others and is often considered an index of implicit self-esteem. However, based on dual-process theory, the IAT does not rely entirely on the automatic process; it is also influenced by a controlled process. The present study, therefore, analyzed the IAT data with the Quad model, separating four processes underlying IAT performance: the likelihood that the automatic association is activated by the stimulus in the trial (AC); that a correct response is discriminated in the trial (D); that the automatic bias is overcome in favor of a deliberate response (OB); and that, when the association is not activated and the individual fails to discriminate a correct answer, a guessing or response bias drives the response (G). The AC and G processes are automatic, while the D and OB processes are controlled. The AC parameter is considered the strength of the association activated by the stimulus, which reflects what implicit measures of social cognition aim to assess: the stronger the automatic association between self and positive valence, the more likely it is to be activated by a relevant stimulus. Therefore, the AC parameter was used as the index of implicit self-esteem in the present study. Meanwhile, the relationship between implicit self-esteem and depression has not been fully investigated. The cognitive theory of depression assumes that the negative self-schema is crucial in depression; from this point of view, implicit self-esteem would be negatively associated with depression. However, the results of empirical studies are inconsistent. The aims of the present study were to examine the psychometric properties of the AC parameter (i.e., test-retest reliability and its correlations with explicit self-esteem and depression) and to compare them with those of the IAT effect. In the present study, 105 patients with depressive disorders completed the Rosenberg Self-Esteem Scale, the Beck Depression Inventory-II, and the IAT at pretest. After at least 3 weeks, the participants completed the second IAT. The data were analyzed with the latent-trait multinomial processing tree model (latent-trait MPT) using the TreeBUGS package in R. The results showed that the latent-trait MPT had a satisfactory model fit. The test-retest reliabilities of the AC parameter and the IAT effect were medium (r = .43, p < .0001) and small (r = .29, p < .01), respectively. Only the AC parameter showed a significant correlation with explicit self-esteem (r = .19, p < .05). Neither of the two indexes was correlated with depression. Collectively, the AC parameter was a more satisfactory index of implicit self-esteem than the IAT effect. The present study also supports findings that implicit self-esteem is not correlated with depression.
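For readers unfamiliar with the model, the sketch below writes out the standard Quad-model equations for the probability of a correct IAT response implied by the four processes described above. It is a simplified illustration, not the hierarchical latent-trait MPT estimation performed with TreeBUGS in the study; the parameter values and the guessing convention (G as the probability of guessing the bias-consistent key) are assumptions for illustration.

```python
def quad_correct_probabilities(ac, d, ob, g):
    """Predicted probability of a correct response on compatible and
    incompatible IAT trials under the Quad model (sketch).

    ac: probability the automatic association is activated
    d : probability the correct response can be discriminated
    ob: probability the automatic bias is overcome when it conflicts
        with the discriminated response
    g : probability of guessing the bias-consistent key when neither
        AC nor D drives the response (assumed convention)
    """
    # Compatible trials: the activated association and the discriminated
    # response point to the same key, so AC- or D-driven responses are correct.
    p_compatible = ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # Incompatible trials: AC and D conflict; the response is correct only if
    # the bias is overcome (OB), if D alone drives it, or if the guess happens
    # to fall on the correct key.
    p_incompatible = ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * (1 - g)
    return p_compatible, p_incompatible

# Illustrative parameter values: a stronger self-positive association (higher AC)
# lowers accuracy on incompatible trials while compatible trials stay easy.
print(quad_correct_probabilities(ac=0.4, d=0.85, ob=0.5, g=0.5))
```

In the actual analysis, such trial-level probabilities are embedded in a multinomial likelihood and the parameters are estimated hierarchically across participants.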

Keywords: cognitive modeling, implicit association test, implicit self-esteem, quad model

Procedia PDF Downloads 115
764 Evaluation of Vitamin D Levels in Obese and Morbid Obese Children

Authors: Orkide Donma, Mustafa M. Donma

Abstract:

Obesity is a growing and serious health problem throughout the world. Vitamin D appears to play a role in cardiovascular and metabolic health, and vitamin D deficiency may add to derangements in human metabolic systems, particularly those of children. Childhood obesity is associated with an increased risk of chronic and complicated diseases. The aim of this study is to investigate associations as well as possible differences related to parameters affected by obesity and their relations with vitamin D status in obese (OB) and morbid obese (MO) children. This study included a total of 78 children, of whom 41 were OB and 37 were MO. WHO BMI-for-age percentiles were used for the classification of obesity: values above the 99th percentile were defined as MO, and those between the 95th and 99th percentiles were included in the OB group. Anthropometric measurements were recorded, and basal metabolic rates (BMRs) were measured. Vitamin D status was determined by the measurement of 25-hydroxycholecalciferol [25-hydroxyvitamin D3, 25(OH)D] using high-performance liquid chromatography. Vitamin D status was evaluated as deficient, insufficient or sufficient: values < 20.0 ng/ml, values between 20 and 30 ng/ml, and values > 30.0 ng/ml were defined as vitamin D deficient, insufficient and sufficient, respectively. The optimal 25(OH)D level was defined as ≥ 30 ng/ml. The SPSSx statistical package program was used for the evaluation of the data, and the statistical significance level was accepted as p < 0.05. Mean ages did not differ between the groups. Significantly increased body mass index (BMI), waist circumference (C) and neck C as well as significantly decreased fasting blood glucose (FBG) and vitamin D values were observed in the MO group (p < 0.05). In the OB group, 37.5% of the children were vitamin D deficient; in the MO group, the corresponding value was 53.6%. No difference between the groups was noted in terms of lipid profile, systolic blood pressure (SBP), diastolic blood pressure (DBP) or insulin values. There was a highly significant difference between the FBG values of the groups (p < 0.001). Important correlations between BMI, waist C, hip C, neck C and both SBP and DBP were found in the OB group; in the MO group, correlations were obtained only with SBP. In a similar manner, correlations were detected between SBP-BMR and DBP-BMR in the OB group, whereas in MO children BMR correlated only with SBP. The associations of vitamin D with anthropometric indices as well as some lipid parameters were defined: in the OB group, BMI, waist C, hip C and triglycerides (TRG) were negatively correlated with vitamin D concentrations, whereas none of these correlations were detected in the MO group. Vitamin D deficiency may contribute to the complications associated with childhood obesity. The loss of correlations between obesity indices and DBP and between vitamin D and TRG, as well as the relatively lower FBG values observed in the MO group, points out that the emergence of metabolic syndrome (MetS) components starts during the obesity state, just before the transition to morbid obesity. Aside from its deficiency state, associations of vitamin D with anthropometric measurements, blood pressures and TRG should also be evaluated before the development of morbid obesity.
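The study's vitamin D categories map directly onto 25(OH)D cut-offs, as in the minimal sketch below. The function name is ours, the thresholds are taken from the abstract, and the handling of exactly 30 ng/ml follows the "optimal ≥ 30 ng/ml" definition.

```python
def vitamin_d_status(level_25_oh_d_ng_ml: float) -> str:
    """Classify vitamin D status from serum 25(OH)D (ng/ml),
    using the cut-offs stated in the study."""
    if level_25_oh_d_ng_ml < 20.0:
        return "deficient"
    if level_25_oh_d_ng_ml < 30.0:
        return "insufficient"
    return "sufficient"  # the study defines the optimal level as >= 30 ng/ml

print(vitamin_d_status(17.5), vitamin_d_status(24.0), vitamin_d_status(33.2))
```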

Keywords: children, morbid obesity, obesity, vitamin D

Procedia PDF Downloads 128
763 Tangible Losses, Intangible Traumas: Re-envisioning Recovery Following the Lytton Creek Fire 2021 through Place Attachment Lens

Authors: Tugba Altin

Abstract:

In an era marked by pronounced climate change consequences, communities confront traumatic events that yield both tangible and intangible repercussions. Such events not only cause discernible damage to the landscape but also deeply affect intangible aspects, including emotional distress and disruptions to cultural landscapes. The Lytton Creek Fire of 2021 serves as a case in point. Beyond the visible destruction, the less overt but profoundly impactful disturbance to place attachment (PA) is scrutinized. PA, representing the emotional and cognitive bonds individuals establish with their environments, is crucial for understanding how such events impact cultural identity and connection to the land. The study underscores the significance of addressing both tangible and intangible traumas for holistic community recovery. As communities renegotiate their affiliations with altered environments, the cultural landscape emerges as instrumental in shaping place-based identities. This renewed understanding is pivotal for reshaping adaptation planning. The research advocates for adaptation strategies rooted in the lived experiences and testimonies of the affected populations. By incorporating both the tangible and intangible facets of trauma, planning efforts can be more culturally attuned and emotionally insightful, fostering true resonance with the affected communities. Through such a comprehensive lens, this study contributes to enriching the climate change discourse, emphasizing the intertwined nature of tangible recovery and the imperative of emotional and cultural healing after environmental disasters. Following the pronounced aftermath of the Lytton Creek Fire in 2021, the research aims to understand in depth its impact on place attachment (PA), encompassing the emotional and cognitive bonds individuals form with their environments. An interpretive phenomenological approach, enriched by a hermeneutic framework, is adopted, emphasizing the experiences of the Lytton community and co-researchers. Phenomenology informed the understanding of 'place' as the focal point of attachment, providing insights into its formation and evolution after traumatic events. Data collection departs from conventional methods: instead of traditional interviews, walking audio sessions and photo elicitation methods are utilized. These allow co-researchers to immerse themselves in the environment and to re-experience and articulate memories and feelings in real time. Walking audio facilitates reflections on spatial narratives post-trauma, while photo elicitation captures intangible emotions, enabling the visualization of place-based experiences. The analysis is collaborative, ensuring that the co-researchers' experiences and interpretations are central. Emphasizing their agency in knowledge production, the process is rigorous, facilitated by the blend of interpretive phenomenology and hermeneutic insights. The findings underscore the need for adaptation and recovery efforts to address emotional traumas alongside tangible damages. By exploring PA post-disaster, the research not only fills a significant gap but also advocates for an inclusive approach to community recovery. Furthermore, the participatory methodologies employed challenge traditional research paradigms, heralding potential shifts in qualitative research norms.

Keywords: wildfire recovery, place attachment, trauma recovery, cultural landscape, visual methodologies

Procedia PDF Downloads 68
762 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal

Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali

Abstract:

The Lesser Himalaya zone of Nepal consists of thrust and fold belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop models: the Frequency Ratio (FR) and Shannon Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated the significant controlling factors derived from fieldwork and literature review. Field data collection involved a spring inventory, soil analysis, lithology assessment, and hydro-geomorphology study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, the 114 inventoried springs were divided in a 70/30 ratio, together with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models' outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the Area Under the Curve (AUC). The success rate AUC values for the FR and SE methods were 78.73% and 77.09%, respectively, and the prediction rate AUC values were 76.31% and 74.08%. The results indicate that the FR model exhibits greater prediction capability than the SE model in this case study.
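As a concrete illustration of the FR weighting step, the sketch below computes the frequency ratio for the classes of a single thematic layer (slope, aspect, drainage density, and so on). It is a minimal sketch under our own naming assumptions, not the authors' GIS workflow; the class-label and spring-mask arrays stand in for rasters exported from GIS.

```python
import numpy as np

def frequency_ratio(class_labels, spring_mask):
    """Frequency Ratio (FR) per class of one thematic layer (sketch).

    class_labels : 2-D array of class IDs for every pixel (e.g. slope classes)
    spring_mask  : 2-D boolean array, True where a training spring falls

    FR for a class = (% of training springs in the class) / (% of area in the class).
    FR > 1 indicates a class favourable to groundwater occurrence.
    """
    fr = {}
    total_pixels = class_labels.size
    total_springs = spring_mask.sum()
    for c in np.unique(class_labels):
        in_class = class_labels == c
        pct_area = in_class.sum() / total_pixels
        pct_springs = (spring_mask & in_class).sum() / total_springs
        fr[int(c)] = pct_springs / pct_area if pct_area > 0 else 0.0
    return fr

# The groundwater potential index of each pixel is then the sum of the FR values
# of the classes it falls into, accumulated across all thematic layers, before
# the map is reclassified into the five potential levels.
```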

Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management

Procedia PDF Downloads 63
761 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction, and that the approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
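To make the pipeline concrete, the following sketch shows the final classification stage only: a scikit-learn classifier trained on fixed-length function embeddings assumed to come from a Code2Vec-style encoder over AST path contexts. The embedding matrix, labels, and dimensions are hypothetical placeholders, not the Devign or Juliet data, and the logistic-regression head is our substitution for whichever classifier the authors used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

# Hypothetical inputs: one fixed-length embedding per function, assumed to be
# produced by a Code2Vec-style encoder from its AST path contexts, plus a 0/1
# label marking whether the function is known to be vulnerable.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 384))   # placeholder snippet vectors
labels = rng.integers(0, 2, size=1000)      # placeholder vulnerability labels

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, stratify=labels, random_state=0)

clf = LogisticRegression(max_iter=1000, class_weight="balanced")
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("recall  :", recall_score(y_test, pred))  # missed vulnerabilities are the costlier error
```

In practice, the placeholder arrays would be replaced by the learned snippet vectors and ground-truth labels, and recall would be tracked alongside accuracy, mirroring the metrics reported above.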

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 95
760 A Geometric Based Hybrid Approach for Facial Feature Localization

Authors: Priya Saha, Sourav Dey Roy Jr., Debotosh Bhattacharjee, Mita Nasipuri, Barin Kumar De, Mrinal Kanti Bhowmik

Abstract:

Biometric face recognition technology (FRT) has gained a lot of attention due to its extensive variety of applications from both security and non-security perspectives. It has emerged as a secure solution for the identification and verification of a person's identity. Although other biometric methods such as fingerprint scans and iris scans are available, FRT is regarded as an efficient technology for its user-friendliness and contact-free operation. Accurate facial feature localization plays an important role in many facial analysis applications, including biometrics and emotion recognition. However, certain factors make facial feature localization a challenging task. On the human face, expressions arise from subtle movements of the facial muscles and are influenced by internal emotional states. These non-rigid facial movements cause noticeable alterations in the locations and usual shapes of facial landmarks and sometimes create occlusions in facial feature areas, making face recognition a difficult problem. The paper proposes a new hybrid technique for automatic landmark detection in both neutral and expressive frontal and near-frontal face images. The method uses the concepts of thresholding, sequential searching and other image processing techniques for locating the landmark points on the face. Also, a Graphical User Interface (GUI) based software is designed that can automatically detect 16 landmark points around the eyes, nose and mouth that are most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Cohn-Kanade databases, as well as on the DeitY-TU face database, which was created in the Biometrics Laboratory of Tripura University under a research project funded by the Department of Electronics & Information Technology, Govt. of India. The performance of the proposed method has been evaluated in terms of error measure and accuracy. The method has a detection rate of 98.82% on the JAFFE database, 91.27% on the Cohn-Kanade database and 93.05% on the DeitY-TU database. We have also carried out a comparative study of the proposed method with techniques developed by other researchers. Based on the located features, future work will focus on emotion-oriented systems through Action Unit (AU) detection.
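The abstract reports performance as an error measure and a detection rate without stating the exact formula; one common convention is to count a landmark as detected when its localization error, normalized by a reference face size such as the inter-ocular distance, falls below a tolerance. The sketch below follows that convention purely as an assumption; the normalization and the 0.1 tolerance are not taken from the paper.

```python
import numpy as np

def detection_rate(pred_pts, true_pts, inter_ocular, tol=0.1):
    """Fraction of landmarks localized within a tolerance (sketch).

    pred_pts, true_pts : (N, 16, 2) arrays of predicted / annotated landmark
                         coordinates for N face images (16 points per face)
    inter_ocular       : (N,) reference distances used to normalize the error
    tol                : assumed threshold below which a landmark counts as detected
    """
    err = np.linalg.norm(pred_pts - true_pts, axis=2)  # per-landmark pixel error
    norm_err = err / inter_ocular[:, None]             # scale-free error
    return float((norm_err < tol).mean())
```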

Keywords: biometrics, face recognition, facial landmarks, image processing

Procedia PDF Downloads 400
759 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops

Authors: Eric Bates

Abstract:

Higher education is recognised as being of critical importance in Ireland and has been linked as a vital factor to national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is a Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first-year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, some individuals can feel isolated as they take their first steps into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first-year undergraduates on a programme which has a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly develop a sense of group coherence. Two separate modules – a Research Module and a Communications Module – delivered by the researcher were linked across two semesters. Students were allocated into random groups, and each group was given a topic to be researched. There were six topics – essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab, and students also used the library. The output from this was a document that formed part of the submission for the Research Module. In the second semester, the groups then had to make a presentation of their findings, where each student spoke for a minimum amount of time. Presentation workshops formed part of that module, and students were given the opportunity to practice their presentation skills. These presentations were video recorded to enable feedback to be given. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.

Keywords: first year experience, student retention, group work, presentation workshops

Procedia PDF Downloads 221
758 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards

Authors: Nadezhda Kvatashidze, Elena Kharabadze

Abstract:

It is broadly known that leasing is a flexible means of funding enterprises. Leasing reduces the risks related to accessing and possessing assets, as well as to obtaining funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standard 17) make the concealment of liabilities possible. As a result, information users get inaccurate and incomplete information and have to resort to an additional assessment of the off-balance sheet lease liabilities. In order to address the problem, the International Accounting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to the users. Save for certain exemptions, the lessee is obliged to recognize all lease agreements in its financial statements. The approach was determined by the fact that under a lease agreement, rights and obligations arise in the form of assets and liabilities: immediately upon conclusion of the lease agreement, the lessee takes an asset at its disposal and assumes the obligation to make the lease-related payments, which meets the recognition criteria defined by the Conceptual Framework for Financial Reporting, and the corresponding amounts are to be entered into the financial statements. The new lease accounting standard secures the supply of quality and comparable information to the users of financial information. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15: ‘Revenue from Contracts with Customers’. The standard establishes detailed practical criteria for revenue recognition, such as the identification of the performance obligations in the contract, the determination of the transaction price and its components, especially variable consideration and other important components, as well as the passage of control over the asset to the customer. IFRS 15: ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements more specific and consistent than those of the standards previously in place. The new standard is going to change the recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
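To illustrate what on-balance-sheet lease recognition means in practice, the sketch below computes the initial lease liability as the present value of the future lease payments, which under IFRS 16 also forms the starting point for the right-of-use asset. It is a minimal sketch assuming equal annual payments in arrears and a single discount rate; the figures are illustrative only and the function name is ours.

```python
def lease_liability(payments, annual_rate):
    """Initial lease liability as the present value of the future lease payments,
    discounted at the rate implicit in the lease or the lessee's incremental
    borrowing rate (sketch; end-of-period annual payments assumed)."""
    return sum(p / (1 + annual_rate) ** t for t, p in enumerate(payments, start=1))

# Illustrative figures only: five annual payments of 10,000 discounted at 6%.
liability = lease_liability([10_000] * 5, 0.06)
right_of_use_asset = liability  # before initial direct costs and similar adjustments
print(round(liability, 2))      # ≈ 42123.64
```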

Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value

Procedia PDF Downloads 308
757 Clean Sky 2 – Project PALACE: Aeration’s Experimental Sound Velocity Investigations for High-Speed Gerotor Simulations

Authors: Benoît Mary, Thibaut Gras, Gaëtan Fagot, Yvon Goth, Ilyes Mnassri-Cetim

Abstract:

A Gerotor pump is composed of an external and an internal gear with conjugate cycloidal profiles. From the suction to the delivery port, the fluid is transported inside cavities formed by the teeth and driven by the shaft. From a geometric and conceptual standpoint, it is worth noting that the internal gear has one tooth fewer than the external one. Simcenter Amesim v.16 includes a new submodel (THCDGP0) for modelling the behavior of hydraulic Gerotor pumps. This submodel considers leakages between teeth tips using Poiseuille and Couette flow contributions. From the 3D CAD model of the studied pump, the “CAD import” tool extracts the main geometrical characteristics, and the THCDGP0 submodel computes the evolution of each cavity volume and its relative position with respect to the suction or delivery areas. This module, based on international publications, presents robust results up to 6 000 rpm for pressures greater than atmospheric. For higher rotational speeds or lower pressures, oil aeration and cavitation effects are significant and sharply degrade the pump’s performance. The liquid used in hydraulic systems always contains some gas, which is dissolved in the liquid at high pressure and tends to be released in free form (i.e. undissolved, as bubbles) when the pressure drops. In addition to gas release and dissolution, the liquid itself may vaporize due to cavitation. To model the relative density of the equivalent fluid, a modified Henry’s law is applied in Simcenter Amesim v.16 to predict the fraction of undissolved gas or vapor. Three parietal pressure sensors were set up upstream of the pump to estimate the sound speed in the oil. Analytical models were compared with the experimental sound speed to estimate the occluded gas content. The Simcenter Amesim v.16 model was fed with the results of these analyses, which successfully improved the simulation results up to 14 000 rpm. This work provides a sound foundation for designing the next generation of Gerotor pumps, reaching rotation speeds above 25 000 rpm. The results of the improved module will be compared with tests on this new pump demonstrator.
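The link between occluded gas content and measured sound speed can be illustrated with Wood's equation for a homogeneous bubbly liquid, shown in the sketch below. This is a generic textbook relation, not the Amesim THCDGP0 implementation or the authors' modified Henry's law model, and the oil and gas property values are assumptions for illustration.

```python
import math

def mixture_sound_speed(alpha, rho_l=850.0, c_l=1400.0, rho_g=1.2, c_g=340.0):
    """Speed of sound in an oil/air mixture from Wood's equation (sketch).

    alpha      : volume fraction of undissolved (free) gas, between 0 and 1
    rho_l, c_l : density [kg/m^3] and sound speed [m/s] of the oil (assumed values)
    rho_g, c_g : density [kg/m^3] and sound speed [m/s] of the gas (assumed values)
    """
    rho_m = alpha * rho_g + (1.0 - alpha) * rho_l
    inv_k = alpha / (rho_g * c_g**2) + (1.0 - alpha) / (rho_l * c_l**2)
    return 1.0 / math.sqrt(rho_m * inv_k)

# Even a small free-gas fraction collapses the mixture sound speed, which is why
# the measured sound speed can be inverted to estimate the occluded gas content.
for a in (0.0, 0.001, 0.01):
    print(a, round(mixture_sound_speed(a), 1))
```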

Keywords: gerotor pump, high speed, numerical simulations, aeronautic, aeration, cavitation

Procedia PDF Downloads 125
756 Disabled Graduate Students’ Experiences and Vision of Change for Higher Education: A Participatory Action Research Study

Authors: Emily Simone Doffing, Danielle Kohfeldt

Abstract:

Disabled students are underrepresented in graduate-level degree enrollment and completion. There is limited research on disabled students' progression during the pandemic. Disabled graduate students (DGS) face unique interpersonal and institutional barriers, yet limited research explores these barriers, buffering facilitators, and aids to academic persistence. This study adopts an asset-based, embodied disability approach using the critical pedagogy theoretical framework instead of the deficit research approach. The Participatory Action Research (PAR) paradigm, the critical pedagogy theoretical framework, and emancipatory disability research share the same purpose – creating a socially just world through reciprocal learning. This study is one of few, if not the first, to center solely on DGS' lived understanding using a PAR epistemology. With a PAR paradigm, participants and investigators work democratically as a research team at every stage of the research process. PAR has individual and systemic outcomes. PAR lessens the researcher-participant power gap and elevates a marginalized community's knowledge as expertise for local change. PAR and critical pedagogy work toward enriching everyone involved with empowerment, civic engagement, knowledge proliferation, socio-cultural reflection, skills development, and active meaning-making. The PAR process unveils the tensions between disability and graduate school in policy and practice during the pandemic. Likewise, institutional and ideological tensions influence the PAR process. This project is recruiting 10 DGS through September via purposive and snowball sampling. DGS will collectively practice praxis during four monthly focus groups in the fall 2023 semester. Participant researchers can attend a focus group or an interview, both with field notes. September will be our orientation and first monthly meeting. It will include access needs check-ins, ice breakers, consent form review, a group agreement, PAR introduction, research ethics discussion, research goals, and potential research topics. The October and November meetings will be devoted to dialogues about lived experiences as part of our collaborative data collection. Our sessions can be semi-structured with "framing questions," which would be revised together. Field notes include observations that cannot be captured through audio. December will focus on local social action planning and dissemination. Finally, in January, there will be a post-study focus group for students' reflections on their experiences of PAR. Iterative analysis methods include transcribed audio, reflexivity, memos, thematic coding, analytic triangulation, and member checking. This research follows qualitative rigor and quality criteria: credibility, transferability, confirmability, and psychopolitical validity. Results include potential tension points, social action, individual outcomes, and recommendations for conducting PAR. Tension points have three components: dubious practices, contestable knowledge, and conflict. The dissemination of PAR recommendations will aid and encourage researchers to conduct future PAR projects with the disabled community. Identified stakeholders will be informed of DGS' insider knowledge to drive social sustainability.

Keywords: participatory action research, graduate school, disability, higher education

Procedia PDF Downloads 49
755 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case

Authors: Lukas Reznak, Maria Reznakova

Abstract:

A recession of an economy has a profound negative effect on all involved stakeholders. It follows that the timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is not suitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify the findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle for a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating the correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions. As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic and financial market trends which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country – one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, while three contained a dynamic component.
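As a minimal illustration of the dynamic probit building block (the univariate case, without the bivariate error correlation estimated in the paper), the sketch below fits a probit of the recession indicator on its own lag and on lagged leading indicators using statsmodels. All variable names, lag lengths, and the generated data are hypothetical placeholders, not the Czech or German series.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical quarterly data: a 0/1 recession indicator, the yield-curve
# spread, and the quarterly stock-index return.
rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "recession": rng.integers(0, 2, n),
    "yield_spread": rng.normal(1.0, 1.0, n),
    "stock_return": rng.normal(0.01, 0.05, n),
})

# Dynamic component: the lagged recession state enters as a regressor, and the
# leading indicators are lagged to respect their availability in advance.
df["recession_lag"] = df["recession"].shift(1)
df["spread_lag"] = df["yield_spread"].shift(2)
df["return_lag"] = df["stock_return"].shift(2)
df = df.dropna()

X = sm.add_constant(df[["recession_lag", "spread_lag", "return_lag"]])
model = sm.Probit(df["recession"], X).fit(disp=False)
print(model.summary())

# One-step-ahead recession probability for the last available observation.
print(model.predict(X.iloc[[-1]]))
```

The bivariate extension in the paper additionally ties two such equations together through correlated error terms, estimated jointly by maximum likelihood, which is beyond this single-country sketch.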

Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany

Procedia PDF Downloads 238