Search results for: non-linear dynamic characteristics
1394 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building
Authors: Sreto Boljevic
Abstract:
Heating systems based on geothermal energy sources are becoming increasingly popular among commercial/community buildings as the management of these buildings looks for a more efficient and environmentally friendly way to run the heating system. The thermal energy supply of most European commercial/community buildings is at present provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring in the area of thermal energy supply is essential. At present, heating and cooling account for approximately 50% of the EU primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, contrary to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns for researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating for residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to the traditional electric and fossil fuel space heating equipment used to supply thermal energy for residential/community/commercial buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study for the implementation of GSHP technology in community buildings with existing fossil-fueled heating systems. The results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system regarding the technical, environmental, and economic impacts of its implementation, including the payback period. In addition, the algorithm is designed to be usable in feasibility studies for many different types of buildings.
The algorithm is tested on a building that was built in 1930 and is used as a church located in Cork city. The heating of the building is currently provided by a 105 kW gas boiler.
Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy
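The economic core of such a feasibility study can be illustrated with a simple payback estimate. The sketch below is not the paper's algorithm; it is a minimal illustration in which the capital cost, energy prices, full-load hours, boiler efficiency, and heat pump COP are all assumed placeholder values:

```python
def annual_heat_demand_kwh(boiler_kw, full_load_hours):
    """Annual useful heat delivered, estimated from boiler size and run hours."""
    return boiler_kw * full_load_hours

def simple_payback_years(capex_eur, boiler_kw, full_load_hours,
                         gas_price_eur_kwh, elec_price_eur_kwh,
                         boiler_efficiency=0.85, gshp_cop=4.0):
    """Simple payback of a GSHP replacing an existing gas boiler.

    Savings = cost of gas displaced minus cost of electricity to run the pump.
    """
    heat = annual_heat_demand_kwh(boiler_kw, full_load_hours)
    gas_cost = heat / boiler_efficiency * gas_price_eur_kwh
    elec_cost = heat / gshp_cop * elec_price_eur_kwh
    savings = gas_cost - elec_cost
    if savings <= 0:
        return float("inf")  # the GSHP never pays back at these prices
    return capex_eur / savings

# Illustrative figures only: a 105 kW boiler at 1500 full-load hours per year
payback = simple_payback_years(
    capex_eur=120_000, boiler_kw=105, full_load_hours=1500,
    gas_price_eur_kwh=0.08, elec_price_eur_kwh=0.25)
```

A real feasibility algorithm would add the environmental balance (displaced CO2) and sensitivity to energy price scenarios on top of this.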
Procedia PDF Downloads 218
1393 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach
Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar
Abstract:
The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure a high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addressed the early detection of leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. Next, the image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for the automatic early detection of sugarcane disease.
The proposed research directly supports an increase in crop yield.
Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group
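The performance parameters named above (accuracy, sensitivity, specificity, F1-score) all derive from the binary confusion matrix of the diseased/healthy classifier. A minimal sketch, with invented counts standing in for the model's actual test results:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and F1 from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall on the diseased class
    specificity = tn / (tn + fp)          # recall on the healthy class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

# Hypothetical counts: 90 diseased plants caught, 15 missed, 10 false alarms
m = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```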
Procedia PDF Downloads 114
1392 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of the metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes of the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer potential to extend the tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated. The models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts which behave more sensitively than the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
Keywords: automotive production, machine learning, process optimization, smart tolerancing
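One simple way to derive a local, data-driven tolerance range from historical measurements, in the spirit of the normality assumption of classical tolerance analysis, is a mean ± k·sigma band per measurement point. This is an illustrative sketch, not the authors' model; the deviation values and the coverage factor are invented:

```python
import statistics

def local_tolerance_range(deviations, coverage_k=3.0):
    """Data-driven tolerance range for one measurement point.

    `deviations` are historical deviations from nominal at that point;
    the range is centred on their mean and spans +/- k standard deviations.
    """
    mu = statistics.mean(deviations)
    sigma = statistics.stdev(deviations)
    return (mu - coverage_k * sigma, mu + coverage_k * sigma)

# Hypothetical deviations (mm) of one measuring point across many parts
devs = [0.02, -0.01, 0.00, 0.03, 0.01, -0.02, 0.02, 0.00]
lo, hi = local_tolerance_range(devs)
```

Points whose historical band is much narrower than the drawing tolerance are candidates for fewer measurements; points with wide or shifted bands are the "sensitive" areas the abstract mentions.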
Procedia PDF Downloads 114
1391 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs
Authors: Ignitia Motjolopane
Abstract:
Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative artificial intelligence in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature may lack comprehensive frameworks and guidelines for effectively integrating generative AI in start-up reiterative business model innovation paths. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the start-up-focused SME reiterative business model innovation path with generative AI, articulating how generative AI may be used to support SMEs in systematically and cyclically building the business model, covering most or all business model components, and analysing and testing the business model's viability throughout the process. As such, the paper explores generative AI usage in market exploration.
Moreover, market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines generative AI usage in choosing the right information technology, the funding process, revenue model determination, and stress testing business models. Stress testing business models validates strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelations between components. Thus, stress testing the business model may address these uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure. Generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make theoretical and practical contributions to theory and approaches in crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.
Keywords: business models, innovation, generative AI, small medium enterprises
Procedia PDF Downloads 70
1390 Predicting Mass-School-Shootings: Relevance of the FBI’s ‘Threat Assessment Perspective’ Two Decades Later
Authors: Frazer G. Thompson
Abstract:
The 1990s in America ended with a mass-school-shooting (at least four killed by gunfire, excluding the perpetrator(s)) at Columbine High School in Littleton, Colorado. Post-event, many demanded that government and civilian experts develop a ‘profile’ of the potential school shooter in order to identify and preempt likely future acts of violence. This grounded theory research study seeks to explore the validity of the original hypotheses proposed by the Federal Bureau of Investigation (FBI) in 2000, as they relate to the commonality of disclosure by perpetrators of mass-school-shootings, by evaluating fourteen mass-school-shooting events between 2000 and 2019 at locations around the United States. Methods: The strategy of inquiry investigates case files, public records, witness accounts, and available psychological profiles of the shooter. The research methodology includes one-on-one interviews with members of the FBI’s Critical Incident Response Group, seeking perspective on commonalities between individuals; specifically, disclosure of intent pre-event. Results: The research determined that school shooters do not ‘unfailingly’ notify others of their plans. However, in nine of the fourteen mass-school-shooting events analyzed, the perpetrator did inform a third party of their intent pre-event in some form of written, oral, or electronic communication. In the remaining five instances, the so-called ‘red-flag’ indicators of the potential for an event to occur were profound and, unto themselves, might be interpreted as notification to others of an imminent deadly threat. Conclusion: The data indicate that the conclusions drawn in the FBI’s threat assessment perspective published in 2000 remain relevant and current.
There is evidence that, despite potential ‘red-flag’ indicators, which may or may not include a variety of other characteristics, perpetrators of mass-school-shooting events are likely to share their intentions with others through some form of direct or indirect communication. More significantly, the implications of this research might suggest that society is often informed of potential danger pre-event but lacks any equitable means by which to disseminate, prevent, intervene, or otherwise act in a meaningful way considering said revelation.
Keywords: columbine, FBI profiling, guns, mass shooting, mental health, school violence
Procedia PDF Downloads 118
1389 Yield and Physiological Evaluation of Coffee (Coffea arabica L.) in Response to Biochar Applications
Authors: Alefsi D. Sanchez-Reinoso, Leonardo Lombardini, Hermann Restrepo
Abstract:
Colombian coffee is recognized worldwide for its mild flavor and aroma. Its cultivation generates a large amount of waste, such as fresh pulp, which leads to environmental, health, and economic problems. Obtaining biochar (BC) by pyrolysis of coffee pulp and incorporating it into the soil can complement the crop's mineral nutrition. The objective was to evaluate the effect of the application of BC obtained from coffee pulp on the physiology and agronomic performance of the Castillo variety coffee crop (Coffea arabica L.). The research was conducted as a field experiment on a three-year-old commercial coffee crop in Tolima. Four doses of BC (0, 4, 8 and 16 t ha-1) and four levels of chemical fertilization (CF) (0%, 33%, 66% and 100% of the nutritional requirements) were evaluated. Three groups of variables were recorded during the experiment: i) physiological parameters such as gas exchange, the maximum quantum yield of PSII (Fv/Fm), biomass, and water status; ii) physical and chemical characteristics of the soil in a commercial coffee crop; and iii) physicochemical and sensorial parameters of roasted beans and coffee beverages. The results indicated a positive effect in plants with 8 t ha-1 BC and fertilization levels of 66% and 100%, as well as in coffee trees treated with 8 t ha-1 BC and 100% CF. In addition, the application of 16 t ha-1 BC increased the soil pH and microbial respiration and reduced the apparent density and state of aggregation of the soil compared to 0 t ha-1 BC. Applications of 8 and 16 t ha-1 BC with 66%-100% chemical fertilization registered greater sensitivity to the aromatic compounds of roasted coffee beans in the electronic nose. Amendments of BC between 8 and 16 t ha-1 and CF between 66% and 100% increased the content of total soluble solids (TSS), reduced the pH, and increased the titratable acidity in beverages of roasted coffee beans.
In conclusion, 8 t ha-1 BC from coffee pulp can be an alternative to supplement the nutrition of coffee seedlings and trees. Applications between 8 and 16 t ha-1 BC support coffee soil management strategies and promote the use of solid waste. BC as a complement to chemical fertilization showed a positive effect on the aromatic profile of roasted coffee beans and on cup quality attributes.
Keywords: crop yield, cup quality, mineral nutrition, pyrolysis, soil amendment
Procedia PDF Downloads 108
1388 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables the securing of product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of the same production conditions within certain time periods can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing.
In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
Keywords: classification, machine learning, predictive quality, feature selection
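A stable-feature screen across time windows, loosely in the spirit of the concept-drift check and stable-feature selection described above, can be sketched as follows. This is not the authors' method; the feature names, values, and threshold are hypothetical:

```python
import statistics

def stable_features(window_a, window_b, max_shift=1.0):
    """Keep features whose standardised mean shift between two production
    time windows stays below `max_shift`.

    `window_a` and `window_b` map feature name -> list of observed values.
    A large shift suggests the feature's distribution drifted between the
    two periods, making it an unstable input for a classifier.
    """
    keep = []
    for name in window_a:
        a, b = window_a[name], window_b[name]
        pooled_sd = statistics.stdev(a + b)
        shift = abs(statistics.mean(a) - statistics.mean(b)) / pooled_sd
        if shift < max_shift:
            keep.append(name)
    return keep

# Hypothetical features: a gauge measurement that stays stable and a
# pressure reading that drifts between the two periods
window_a = {"gauge_1": [1.0, 1.1, 0.9, 1.0], "pressure": [5.0, 5.1, 4.9, 5.0]}
window_b = {"gauge_1": [1.0, 0.9, 1.1, 1.0], "pressure": [9.0, 9.2, 8.8, 9.1]}
selected = stable_features(window_a, window_b)
```

The surviving features would then feed a boosting classifier such as AdaBoost for the leakage prediction.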
Procedia PDF Downloads 161
1387 Determination of the Walkability Comfort for Urban Green Space Using Geographical Information System
Authors: Muge Unal, Cengiz Uslu, Mehmet Faruk Altunkasa
Abstract:
Walkability relates to the ability of places to connect people with varied destinations within a reasonable amount of time and effort and to offer visual interest in journeys throughout the network. Thus, the quality of the physical environment and the arrangement of walkways and sidewalks appear to be crucial in influencing pedestrian route choice. Proximity, connectivity, and accessibility are also significant factors for walkability in terms of equal opportunity for using public spaces. As a result, there are two important points for walkability: first, the place should have a well-planned, accessible street network; second, it should meet pedestrians' need for comfort. In this respect, this study aims to examine both the physical and bioclimatic comfort levels of the current condition of pedestrian routes, with reference to the design criteria of a street, for access to urban green spaces. These aspects have been identified as the main indicators of walkable streets: continuity, materials, slope, bioclimatic condition, walkway width, greenery, and surface. Additionally, the aim was to identify the factors that need to be considered in future guidelines and policies for planning and design in urban spaces, especially streets. Adana city was chosen as the study area. Adana is a province of Turkey located in south-central Anatolia. The study workflow can be summarized in four stages: (1) environmental and physical data were collected from the literature and used in a weighted criteria method to determine their importance levels, (2) environmental characteristics of pedestrian routes obtained from survey studies were evaluated to rank these criteria, (3) each pedestrian route was then assigned a score reflecting how comfortably it provides access to the park, and (4) finally, the comfortable routes to the park were mapped using GIS.
It is hoped that this study will provide insight into future development planning and design to create a friendlier and more comfortable street environment for users.
Keywords: comfort level, geographical information system (GIS), walkability, weighted criteria method
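A weighted criteria method of the kind described can be sketched as a weighted sum over normalised indicator scores per route. The indicator names follow the abstract; the weights and the route's scores below are purely illustrative, not the study's values:

```python
def route_comfort_score(criteria, weights):
    """Weighted-sum comfort score for one pedestrian route.

    `criteria` maps indicator name -> normalised score in [0, 1];
    `weights` maps the same names -> importance weights summing to 1.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(criteria[k] * weights[k] for k in weights)

# Hypothetical importance weights and one route's indicator scores
weights = {"continuity": 0.20, "materials": 0.10, "slope": 0.15,
           "bioclimatic": 0.25, "walkway_width": 0.15,
           "greenery": 0.10, "surface": 0.05}
route = {"continuity": 0.8, "materials": 0.6, "slope": 0.9,
         "bioclimatic": 0.5, "walkway_width": 0.7,
         "greenery": 0.4, "surface": 1.0}
score = route_comfort_score(route, weights)
```

Scoring every surveyed route this way yields the attribute that the GIS layer can then classify and map.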
Procedia PDF Downloads 309
1386 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in Sri Lankan Population
Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya
Abstract:
Male infertility is a major problem in the world, and it is a neglected and sensitive health issue in Sri Lanka. It can be assessed by analyzing human semen samples. Sperm motility is one of many factors that can evaluate a male's fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and operator-dependent, and their reliability depends on the expert. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but existing automatic methods are unreliable: they tend to produce false positives and missed detections. Current automatic methods rely on different techniques, and some of them are very expensive. Due to the geographical variance in spermatozoa characteristics, current automatic methods are not reliable for motility analysis in Sri Lanka. The suggested system, DeepMotile, explores a method to analyze the motility of human spermatozoa automatically and presents it to andrology laboratories to overcome current issues. DeepMotile is a novel deep learning method for analyzing spermatozoa motility parameters in the Sri Lankan population. To implement the current approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique to analyze semen samples. The problem was framed as microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, achieving 94% mAP (mean average precision), 86% precision, and 90% recall on the gathered dataset. StrongSORT was used as the object tracker and was validated with andrology experts due to the unavailability of annotated ground truth data.
Furthermore, this research has identified many potential ways for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa
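Once a tracker such as StrongSORT yields per-frame positions, standard CASA-style motility parameters can be computed from each trajectory. The sketch below shows three common ones (curvilinear velocity, straight-line velocity, linearity); the paper does not state that these exact parameters are used, and the track coordinates are invented:

```python
import math

def motility_parameters(track, fps):
    """VCL, VSL and LIN from one tracked spermatozoon trajectory.

    `track` is a list of (x, y) centroid positions (micrometres), one per
    frame; `fps` is the video frame rate. VCL is the velocity along the
    full zig-zag path, VSL the velocity along the straight first-to-last
    segment, and LIN = VSL / VCL their ratio.
    """
    dt = 1.0 / fps
    path_len = sum(math.dist(a, b) for a, b in zip(track, track[1:]))
    straight = math.dist(track[0], track[-1])
    duration = (len(track) - 1) * dt
    vcl = path_len / duration
    vsl = straight / duration
    lin = vsl / vcl if vcl else 0.0
    return vcl, vsl, lin

# Hypothetical zig-zag track sampled at 30 fps
track = [(0, 0), (2, 1), (4, -1), (6, 1), (8, 0)]
vcl, vsl, lin = motility_parameters(track, fps=30)
```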
Procedia PDF Downloads 105
1385 The Role of User Participation on Social Sustainability: A Case Study on Four Residential Areas
Authors: Hasan Taştan, Ayşen Ciravoğlu
Abstract:
The rapid growth of the human population and the environmental degradation associated with increased consumption of resources raise concerns about sustainability. Social sustainability constitutes one of the three dimensions of sustainability, together with the environmental and economic dimensions. Even though there is no agreement on what social sustainability consists of, it is a well-known fact that it necessitates user participation. Therefore, this study aims to observe and analyze the role of user participation in social sustainability. In this paper, the links between user participation and indicators of social sustainability have been examined. In order to achieve this, first of all, a literature review on social sustainability was conducted; accordingly, the information obtained from research was used in the evaluation of projects conducted in developing countries with user participation. These examples are taken as role models, with pros and cons, for the development of the checklist used to evaluate the case studies. Furthermore, a case study on post-earthquake residential settlements in Turkey has been conducted. The case study projects were selected considering different building scales (differing numbers of residential units), the scale of the problem (post-earthquake settlements, rehabilitation of shanty dwellings), and the variety of users (differing socio-economic dimensions). The decision-making, design, building, and usage processes of the selected projects, and the actors of these processes, have been investigated in the context of social sustainability. The cases include: New Gourna Village by Hassan Fathy, the Quinta Monroy dwelling units in Chile by Alejandro Aravena, and the Beyköy and Beriköy projects in Turkey, which aimed to solve the housing problem that appeared after the 1999 earthquake.
The results of the study reveal possible links between social sustainability indicators and user participation, and between user participation and the peculiarities of place. The results are compared and discussed in order to find possible solutions for fostering social sustainability through user participation. They show that social sustainability issues depend on communities' characteristics, socio-economic conditions, and user profile, but user participation has positive effects on some social sustainability indicators, like user satisfaction, a sense of belonging, and social stability.
Keywords: housing projects, residential areas, social sustainability, user participation
Procedia PDF Downloads 389
1384 Nurturing Scientific Minds: Enhancing Scientific Thinking in Children (Ages 5-9) through Experiential Learning in Kids Science Labs (STEM)
Authors: Aliya K. Salahova
Abstract:
Scientific thinking, characterized by purposeful knowledge-seeking and the harmonization of theory and facts, holds a crucial role in preparing young minds for an increasingly complex and technologically advanced world. This abstract presents a research study aimed at fostering scientific thinking in early childhood, focusing on children aged 5 to 9 years, through experiential learning in Kids Science Labs (STEM). The study utilized a longitudinal exploration design, spanning 240 weeks from September 2018 to April 2023, to evaluate the effectiveness of the Kids Science Labs program in developing scientific thinking skills. Participants in the research comprised 72 children drawn from local schools and community organizations. Through a formative psychology-pedagogical experiment, the experimental group engaged in weekly STEM activities carefully designed to stimulate scientific thinking, while the control group participated in daily art classes for comparison. To assess the scientific thinking abilities of the participants, a registration table with evaluation criteria was developed. This table included indicators such as depth of questioning, resource utilization in research, logical reasoning in hypotheses, procedural accuracy in experiments, and reflection on research processes. The data analysis revealed dynamic fluctuations in the number of children at different levels of scientific thinking proficiency. While development was not uniform across all participants, a clear leading trend emerged: the Kids Science Labs program and the formative experiment exerted a positive impact on scientific thinking skills in children within this age range. The study's findings support the hypothesis that systematic implementation of STEM activities effectively promotes and nurtures scientific thinking in children aged 5-9 years.
Enriching education with a specially planned STEM program, tailoring scientific activities to children's psychological development, and implementing well-planned diagnostic and corrective measures emerged as essential pedagogical conditions for enhancing scientific thinking abilities in this age group. The results highlight the significant and positive impact of the systematic-activity approach in developing scientific thinking, leading to notable progress and growth in children's scientific thinking abilities over time. These findings have promising implications for educators and researchers, emphasizing the importance of incorporating STEM activities into educational curricula to foster scientific thinking from an early age. This study contributes valuable insights to the field of science education and underscores the potential of STEM-based interventions in shaping the future scientific minds of young children.
Keywords: scientific thinking, education, STEM, intervention, psychology, pedagogy, collaborative learning, longitudinal study
Procedia PDF Downloads 61
1383 Web Map Service for Fragmentary Rockfall Inventory
Authors: M. Amparo Nunez-Andres, Nieves Lantada
Abstract:
One of the most harmful geological risks is rockfalls. They cause both economic losses, through damage to buildings and infrastructure, and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Spanish peninsula and on the island of Mallorca. These inventories hold general information about the events and, importantly, detailed information about fragmentation. Specifically, the IBSD (In-situ Block Size Distribution) is obtained by photogrammetry from a drone or TLS (Terrestrial Laser Scanner), and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit, measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We have used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer. A preliminary phase was the creation of several views in PostGIS to show the information at different scales of visualization and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided.
At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer greater detail about the movement. Eventually, the third level shows all elements: deposit, source, and blocks, in their real size, where possible, and at their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels, using JavaScript libraries such as OpenLayers.
Keywords: geological risk, web mapping, WMS, rockfalls
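A WMS published this way is queried by clients with standard GetMap requests. The helper below builds such a request; the endpoint and layer name are placeholders for illustration, not the real service's identifiers:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=800, height=600,
                   srs="EPSG:4326", fmt="image/png"):
    """Build a WMS 1.1.1 GetMap request URL for one inventory layer.

    `bbox` is (min_x, min_y, max_x, max_y) in the units of `srs`.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical GeoServer endpoint and layer name covering NE Spain
url = wms_getmap_url("https://example.org/geoserver/wms",
                     "rockfalls:events", (0.5, 40.0, 3.5, 43.0))
```

An OpenLayers client issues exactly this kind of request for each map tile, which is how the three visualization levels reach the browser.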
Procedia PDF Downloads 159
1382 Role of Internal and External Factors in Preventing Risky Sexual Behavior, Drug and Alcohol Abuse
Authors: Veronika Sharok
Abstract:
Research on the psychological determinants of risky behaviors is relevant because of the high prevalence of such behaviors, particularly among youth. Risky sexual behavior, including unprotected and casual sex, frequent change of sexual partners, and drug and alcohol use lead to negative social consequences and contribute to the spread of HIV infection and other sexually transmitted diseases. Data were obtained from 302 respondents aged 15-35, who were divided into 3 empirical groups: persons prone to risky sexual behavior, drug users, and alcohol users; and 3 control groups: individuals who are not prone to risky sexual behavior, persons who do not use drugs, and respondents who do not use alcohol. For processing, we used the following methods: a qualitative method for nominative data (chi-squared test) and quantitative methods for metric data (Student's t-test, Fisher's F-test, Pearson's r correlation test). Statistical processing was performed using Statistica 6.0 software. The study identifies two groups of factors that prevent risky behaviors. Internal factors include moral and value attitudes; the significance of existential values: love, life, self-actualization, and the search for the meaning of life; an understanding of independence as responsibility for one's freedom and the ability to become attached to someone or something up to the point where the relationship starts restricting that freedom and becomes vital; awareness of risky behaviors as dangerous for the person and for others; and self-acknowledgement. External factors (which prevent risky behaviors only in the absence of the internal ones) include the absence of risky behaviors among friends and relatives; socio-demographic characteristics (middle class, marital status); awareness of the negative consequences of risky behaviors; and inaccessibility of psychoactive substances. These factors are common to proneness to each type of risky behavior, because it is usually caused by the same reasons.
It should be noted that prevention of risky behavior based only on the elimination of external factors is not as effective as it could be if more attention were paid to internal factors. The results obtained in the study can be used to develop training programs and activities for the prevention of risky behaviors, drawing on the values that prevent such behaviors and promoting a healthy lifestyle.
Keywords: existential values, prevention, psychological features, risky behavior
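The chi-squared test used above for the nominative data compares observed group frequencies with those expected under independence. A minimal sketch of the statistic in pure Python; the contingency table below is invented for illustration, not taken from the study:

```python
def chi_squared(table):
    """Pearson's chi-squared statistic for a 2D contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative 2x2 table: risky-behavior group vs. control group,
# with/without a given internal protective factor (made-up counts).
stat = chi_squared([[30, 70], [55, 45]])
```

The statistic is then compared against the chi-squared distribution with (rows-1)*(cols-1) degrees of freedom to obtain a p-value.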
Procedia PDF Downloads 254
1381 Pediatric Hearing Aid Use: A Study Based on Data Logging Information
Authors: Mina Salamatmanesh, Elizabeth Fitzpatrick, Tim Ramsay, Josee Lagacé, Lindsey Sikora, JoAnne Whittingham
Abstract:
Introduction: Hearing loss (HL) is one of the most common disorders that presents at birth and in early childhood. Universal newborn hearing screening (UNHS) has been adopted based on the assumption that, with early identification of HL, children will have access to optimal amplification and intervention at younger ages, therefore taking advantage of the brain's maximal plasticity. One particular challenge for parents in the early years is achieving consistent hearing aid (HA) use, which is critical to the child's development and constitutes the first step in the rehabilitation process. This study examined the consistency of hearing aid use in young children based on data logging information documented during audiology sessions in the first three years after hearing aid fitting. Methodology: The first 100 children who were diagnosed with bilateral HL before 72 months of age between 2003 and 2015 in a pediatric audiology clinic, and who had at least two hearing aid follow-up sessions with available data logging information, were included in the study. Data from each audiology session (age of the child at the session, average hours of use per day for each ear in the first three years after HA fitting) were collected. Clinical characteristics (degree of hearing loss, age at HA fitting) were also documented to further the understanding of factors that impact HA use. Results: Preliminary analysis of the results of the first 20 children shows that all of them (100%) have at least one data logging session recorded in the clinical audiology system (Noah). Of the 20 children, 17 (85%) have three data logging events recorded in the first three years after HA fitting. Based on the statistical analysis of the first 20 cases, the median hours of use in the first follow-up session after the hearing aid fitting is 3.9 hours for the right ear, with an interquartile range (IQR) of 10.2 h. For the left ear, the median is 4.4 hours and the IQR is 9.7 h.
In the first session, 47% of the children used their hearing aids ≤5 hours a day, 12% used them between 5 and 10 hours, and 22% used them ≥10 hours. However, these children showed increased use by the third follow-up session, with a median (IQR) of 9.1 (2.5) hours for the right ear and 8.2 (5.6) hours for the left ear. By the third follow-up session, 14% of children used their hearing aids ≤5 hours, while 38% used them ≥10 hours. Based on these preliminary results, factors like age and degree of HL significantly impact the hours of use. Conclusion: The use of data logging information to assess the actual hours of HA use provides an opportunity to examine: a) the challenges faced by families of young children with HAs, and b) the factors that impact use in very young children. Data logging, when used collaboratively with parents, can be a powerful tool to identify problems and to encourage and assist families in maximizing their child's hearing potential.
Keywords: hearing loss, hearing aid, data logging, hours of use
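The median and interquartile range reported above can be reproduced with the Python standard library; the hours-of-use values below are invented for illustration, not the study's data:

```python
import statistics

def median_iqr(hours):
    """Median and interquartile range of daily hours of hearing aid use."""
    q1, q2, q3 = statistics.quantiles(hours, n=4, method="inclusive")
    return q2, q3 - q1

# Hypothetical data-logging readings (hours/day) for one follow-up session.
med, iqr = median_iqr([0.5, 2.0, 3.9, 6.5, 11.0, 12.4])
```

With skewed usage data like these, the median/IQR pair is more robust than mean/standard deviation, which is presumably why the study reports it.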
Procedia PDF Downloads 229
1380 The Context of Teaching and Learning Primary Science to Gifted Students: An Analysis of Australian Curriculum and New South Wales Science Syllabus
Authors: Rashedul Islam
Abstract:
A firmly validated aim of teaching science is to support student enthusiasm for science learning, with a broad interest in scientific issues in later life. This is in keeping with recent Gifted and Talented Education policy statements, which note that gifted students have a keen interest and natural aptitude in science. Yet the practice of science teaching leaves many students with the feeling that science is difficult, and, compared to other school subjects, students' interest in science declines in the final years of primary school. A curriculum guides the teaching-learning activities in school, so significant consequences may result from the context set by the curricula and syllabi, which are a major feature of the educational jurisdiction of NSW, Australia. The purpose of this study was to explore how the curriculum sets the context in which science education is practiced in primary schools in Sydney, Australia. This phenomenon was explored through a review of two publicly available documents, namely the NSW Science Syllabus K-6 and the Australian Curriculum: Foundation - 10 Science. To analyse the data, this qualitative study applied themed content analysis at three different levels, i.e., first-cycle coding, second-cycle coding (pattern codes), and thematic analysis. Preliminary analysis revealed teaching-learning practices drawn from eight themes under three phenomena, aligned with teachers' practices and gifted students' learning characteristics based on Gagné's Differentiated Model of Giftedness and Talent (DMGT). From the results, it appears that, overall, the two documents are relatively well placed in terms of identifying the context of teaching and learning primary science to gifted students. However, educators need to make themselves aware of the ways in which the curriculum needs to be adapted to meet gifted students' learning needs in science.
The study explores the important phenomena of the teaching-learning context needed to provide gifted students with optimal educational practices, including inquiry-based learning, problem-solving, open-ended tasks, creativity in science, higher-order thinking, integration, and challenge. The significance of such a study lies in its potential benefit to schools and to further research in the field of gifted education.
Keywords: teaching primary science, gifted student learning, curriculum context, science syllabi, Australia
Procedia PDF Downloads 421
1379 Supplementing Aerial-Roving Surveys with Autonomous Optical Cameras: A High Temporal Resolution Approach to Monitoring and Estimating Effort within a Recreational Salmon Fishery in British Columbia, Canada
Authors: Ben Morrow, Patrick O'Hara, Natalie Ban, Tunai Marques, Molly Fraser, Christopher Bone
Abstract:
Relative to commercial fisheries, recreational fisheries are often poorly understood and pose various challenges for monitoring frameworks. In British Columbia (BC), Canada, Pacific salmon are heavily targeted by recreational fishers while also being a key source of nutrient flow and crucial prey for a variety of marine and terrestrial fauna, including endangered Southern Resident killer whales (Orcinus orca). Although commercial fisheries were historically responsible for the majority of salmon retention, recreational fishing now comprises both greater effort and retention. The current monitoring scheme for recreational salmon fisheries involves aerial-roving creel surveys. However, this method has been identified as costly and having low predictive power as it is often limited to sampling fragments of fluid and temporally dynamic fisheries. This study used imagery from two shore-based autonomous cameras in a highly active recreational fishery around Sooke, BC, and evaluated their efficacy in supplementing existing aerial-roving surveys for monitoring a recreational salmon fishery. This study involved continuous monitoring and high temporal resolution (over one million images analyzed in a single fishing season), using a deep learning-based vessel detection algorithm and a custom image annotation tool to efficiently thin datasets. This allowed for the quantification of peak-season effort from a busy harbour, species-specific retention estimates, high levels of detected fishing events at a nearby popular fishing location, as well as the proportion of the fishery management area represented by cameras. Then, this study demonstrated how it could substantially enhance the temporal resolution of a fishery through diel activity pattern analyses, scaled monthly to visualize clusters of activity. This work also highlighted considerable off-season fishing detection, currently unaccounted for in the existing monitoring framework. 
These results demonstrate several distinct applications of autonomous cameras for providing enhanced detail unavailable in the existing monitoring framework, each of which has important implications for the managerial allocation of resources. Further, the approach and methodology can benefit other studies that apply shore-based camera monitoring, supplement aerial-roving creel surveys to improve fine-scale temporal understanding, inform the optimal timing of creel surveys, and improve the predictive power of recreational stock assessments to preserve important and endangered fish species.
Keywords: cameras, monitoring, recreational fishing, stock assessment
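One common way to thin a million-image dataset before running a vessel detector, as the workflow above requires, is to drop frames that barely differ from the last kept frame. The sketch below (pure Python, grayscale frames as flat pixel lists, threshold value an assumption, not the study's tool) illustrates the idea:

```python
def thin_frames(frames, threshold=5.0):
    """Keep a frame only if its mean absolute pixel difference
    from the last kept frame exceeds the threshold."""
    kept = []
    last = None
    for frame in frames:
        if last is None:
            kept.append(frame)
            last = frame
            continue
        mad = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if mad > threshold:
            kept.append(frame)
            last = frame
    return kept

# Three tiny 4-pixel "frames": the second is nearly identical to the first
# (e.g. an empty harbour), so it is discarded; the third changes sharply.
frames = [[10, 10, 10, 10], [11, 10, 10, 10], [90, 90, 90, 90]]
kept = thin_frames(frames)
```

Thinning like this trades a small risk of missed events for a large reduction in the number of frames the detection model must process.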
Procedia PDF Downloads 122
1378 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy
Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi
Abstract:
The performance demanded of material surfaces in wear and corrosion environments cannot be achieved by conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limit their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade-five titanium alloy produced using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous-wave ytterbium laser system (YLS) attached to a KUKA robot, which controls the movement of the cladding process, was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for hardness, corrosion and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28 °C, and studied using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as width, height and the Heat Affected Zone (HAZ) of each sample, remarkably increased with increasing laser power due to the laser-material interaction. It was observed that higher amounts of aluminum and titanium were present in the formation of the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through in situ metallurgical reactions during the LMD process.
The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform, crack-free surface due to the optimized laser process parameters, which led to the refinement of the coatings.
Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance
Procedia PDF Downloads 177
1377 Characterization of Kevlar 29 for Multifunction Applications
Authors: Doaa H. Elgohary, Dina M. Hamoda, S. Yahia
Abstract:
Technical textiles refer to textile materials that are engineered and designed to have specific functionalities and performance characteristics beyond their traditional use as apparel or upholstery fabrics. These textiles are usually developed for their unique properties such as strength, durability, flame retardancy, chemical resistance, waterproofing, insulation and other special properties. The development and use of technical textiles are constantly evolving, driven by advances in materials science, manufacturing technologies and the demand for innovative solutions in various industries. Kevlar 29 is a type of aramid fiber developed by DuPont. It is a high-performance material known for its exceptional strength and resistance to impact, abrasion, and heat. Kevlar 29 belongs to the Kevlar family, which includes different types of aramid fibers. Kevlar 29 is primarily used in applications that require strength and durability, such as ballistic protection and body armor for military and law enforcement personnel. It is also used in the aerospace and automotive industries to reinforce composite materials, as well as in various industrial applications. Two different Kevlar samples coated with copper lithium silicate (CLS) were used; ten different mechanical and physical properties (weight, thickness, tensile strength, elongation, stiffness, air permeability, puncture resistance, thermal conductivity, stiffness, and spray test) were measured to assess their functional performance. The influence on the different mechanical properties was statistically analyzed using an independent t-test with a significance level of P = 0.05. A radar plot was calculated and evaluated to determine the best-performing sample. The results of the independent t-test showed that all variables were significantly affected by yarn count except water permeability, for which no significant effect was found.
All properties were evaluated for samples 1 and 2, and a radar chart was used to determine the best-performing sample. The radar chart area was calculated, which shows that sample 1 recorded the best performance, followed by sample 2. The surface morphology of all samples and the coating materials was determined using a scanning electron microscope (SEM), and Fourier transform infrared (FT-IR) spectroscopy measurements were also performed for the two samples.
Keywords: copper lithium silicate, independent t-test, Kevlar, technical textiles
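The radar chart area used above to rank the samples has a simple closed form: with n axes at equal angles, the enclosed polygon is a sum of triangles between adjacent axes, each with area 0.5·r_i·r_{i+1}·sin(2π/n). A minimal sketch (the normalized scores below are invented, not the study's measured values):

```python
import math

def radar_area(scores):
    """Area of the polygon traced by normalized scores on a radar chart
    with equally spaced axes."""
    n = len(scores)
    angle = 2 * math.pi / n
    return 0.5 * math.sin(angle) * sum(
        scores[i] * scores[(i + 1) % n] for i in range(n)
    )

# Hypothetical normalized scores for two samples over five properties.
sample1 = [0.9, 0.8, 0.7, 0.9, 0.85]
sample2 = [0.6, 0.7, 0.65, 0.8, 0.7]
best = "sample 1" if radar_area(sample1) > radar_area(sample2) else "sample 2"
```

Because the area depends on products of adjacent scores, the ordering of properties around the chart can affect the ranking, which is worth fixing in advance.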
Procedia PDF Downloads 78
1376 Bio-Remediation of Lead-Contaminated Water Using Adsorbent Derived from Papaya Peel
Authors: Sahar Abbaszadeh, Sharifah Rafidah Wan Alwi, Colin Webb, Nahid Ghasemi, Ida Idayu Muhamad
Abstract:
Toxic heavy metal discharges into the environment due to rapid industrialization are a serious pollution problem that has drawn global attention to their adverse impacts on both the structure of ecological systems and human health. Lead, a toxic element that bio-accumulates through the food chain, regularly enters water bodies from the discharges of industries such as plating, mining, battery manufacture, paint manufacture, etc. The application of conventional methods to decrease and remove Pb(II) ions from wastewater is often restricted by technical and economic constraints. Therefore, the use of various agro-wastes as low-cost bioadsorbents is attractive, since they are abundantly available and cheap. In this study, activated carbon of papaya peel (AC-PP), a locally available agricultural waste, was employed to evaluate its Pb(II) uptake capacity from single-solute solutions in sets of batch-mode experiments. To assess the surface characteristics of the adsorbents, scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis and Fourier transform infrared (FT-IR) spectroscopy were utilized. The amount of Pb(II) removed was determined by atomic absorption spectrometry (AAS). The effects of pH, contact time, the initial concentration of Pb(II), and adsorbent dosage were investigated. A pH value of 5 was observed as the optimum solution pH. The optimum initial concentration of Pb(II) in the solution for AC-PP was found to be 200 mg/l, where the amount of Pb(II) removed was 36.42 mg/g. At an agitation time of 2 h, the adsorption process using a 100 mg dosage of AC-PP reached equilibrium. The experimental results exhibit the high capability and metal affinity of modified papaya peel waste, with a removal efficiency of 93.22%. The evaluation results show that the equilibrium adsorption of Pb(II) was best described by the Freundlich isotherm model (R2 > 0.93).
The experimental results confirmed that AC-PP can potentially be employed as an alternative adsorbent for Pb(II) uptake from industrial wastewater in the design of an environmentally friendly yet economical wastewater treatment process.
Keywords: activated carbon, bioadsorption, lead removal, papaya peel, wastewater treatment
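The Freundlich fit reported above linearizes q_e = K_F · C_e^(1/n) as log q_e = log K_F + (1/n) log C_e, so K_F and n follow from an ordinary least-squares line through the log-transformed data. A sketch with synthetic equilibrium data (not the study's measurements):

```python
import math

def fit_freundlich(ce, qe):
    """Fit log(qe) = log(Kf) + (1/n) * log(ce) by least squares.
    ce: equilibrium concentrations, qe: equilibrium uptakes.
    Returns (Kf, n)."""
    xs = [math.log10(c) for c in ce]
    ys = [math.log10(q) for q in qe]
    m = len(xs)
    mean_x = sum(xs) / m
    mean_y = sum(ys) / m
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return 10 ** intercept, 1 / slope

# Synthetic data generated from Kf = 2, n = 2 (i.e., qe = 2 * ce**0.5).
ce = [10, 50, 100, 200]
qe = [2 * c ** 0.5 for c in ce]
kf, n = fit_freundlich(ce, qe)
```

On real data the fit is not exact, and the R² of the regression line (here > 0.93 in the study) indicates how well the Freundlich model describes the adsorption.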
Procedia PDF Downloads 284
1375 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models that show the most similar or dissimilar well oil production rates (WOPR) relative to the true values (10% for each). The other 80% of the models are then classified by the trained SVM, and we select the models on the side with low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models that have a geological trend similar to that of the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties.
We propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models whose channel trend is similar to the reference in the lower-dimensional space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
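The first step of the scheme, extracting the leading principal component, can be sketched in miniature: for two-dimensional points the leading component is the unit eigenvector of the 2x2 sample covariance matrix belonging to the larger eigenvalue, which has a closed form. The points below are toy values, not reservoir permeability fields, and this pure-Python sketch stands in for the full high-dimensional PCA of the paper:

```python
import math

def first_pc_2d(points):
    """Leading principal component (unit eigenvector of the larger
    eigenvalue) of the 2x2 sample covariance of 2D points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # Larger eigenvalue of [[sxx, sxy], [sxy, syy]] in closed form.
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))
    # (lam - syy, sxy) is an (unnormalized) eigenvector for lam.
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Toy points stretched along the y = x direction, so the leading
# component should point roughly along (0.707, 0.707).
pc = first_pc_2d([(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)])
```

In the full scheme, projecting each permeability field onto a few such components compresses the model ensemble before the MDS and SVM steps.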
Procedia PDF Downloads 158
1374 Preliminary Report on the Assessment of the Impact of the Kinesiology Taping Application versus Placebo Taping on the Knee Joint Position Sense
Authors: Anna Hadamus, Patryk Wasowski, Anna Mosiolek, Zbigniew Wronski, Sebastian Wojtowicz, Dariusz Bialoszewski
Abstract:
Introduction: Kinesiology Taping is a very popular physiotherapy method, often used on healthy people, especially athletes, in order to stimulate the muscles and improve their performance. The aim of this study was to determine the effect of the muscle application of Kinesiology Taping on joint position sense in active motion. Material and Methods: The study involved 50 healthy people - 30 men and 20 women; the mean age was 23.2 years (range 18-30 years). The exclusion criteria were injuries and operations of the knee, which could affect the test results. The participants were divided randomly into two equal groups. The first group consisted of individuals who received the Kinesiology Taping muscle application (KT group), whereas the remaining individuals received a placebo application of red adhesive tape (placebo group). Both applications were intended to enhance the activity of the quadriceps muscle. Joint position sense (JPS) was evaluated in this study. The Error of Active Reproduction of the Joint Position (EARJP) of the knee was measured at 45° of flexion. The test was performed prior to applying the tape, with the application in place, 24 hours after wearing it, and after removing the tape. The interval between trials was not less than 30 minutes. Statistical analysis was performed using Statistica 12.0. We calculated distribution characteristics and used the Wilcoxon test, Friedman's ANOVA, and the Mann-Whitney U test. Results: In the KT group and the placebo group, the average JPS scores before applying the KT application were 3.48° and 5.16°, respectively; after its application they were 4.84° and 4.88°; 24 hours into the experiment, JPS was 5.12° and 4.96°; and after removal of the application we measured 3.84° and 5.12°, respectively. Differences over time in either group were not statistically significant. There were also no significant differences between the groups. Conclusions: 1.
Applying Kinesiology Taping to the quadriceps muscle had no significant effect on knee joint proprioception; its use in order to improve sensorimotor skills therefore seems unreasonable. 2. The lack of differences between the KT and placebo applications indicates that the clinical effect of stretch tape is minimal or absent. 3. These results are the basis for the continuation of prospective, randomized trials with larger study groups.
Keywords: joint position sense, kinesiology taping, kinesiotaping, knee
Procedia PDF Downloads 335
1373 In vitro Study of Laser Diode Radiation Effect on the Photo-Damage of MCF-7 and MCF-10A Cell Clusters
Authors: A. Dashti, M. Eskandari, L. Farahmand, P. Parvin, A. Jafargholi
Abstract:
Breast cancer is one of the most significant diseases in the United States and other countries and is the second leading cause of death in women. Common breast cancer treatments lead to adverse side effects such as loss of hair, nausea, and weakness. These complications arise because the treatments damage some healthy cells while eliminating the cancer cells. In an effort to address these complications, laser radiation was utilized and tested as a targeted cancer treatment for breast cancer. In this regard, tissue engineering approaches were employed, using an electrospun scaffold in order to facilitate the growth of breast cancer cells. Polycaprolactone (PCL) was used as the scaffold material because of its biocompatibility, biodegradability, and support of cell growth. The breast cancer cells have the ability to create three-dimensional cell clusters due to the spontaneous accumulation of cells in the porosity of the scaffold under specific conditions; therefore, a higher porosity and larger pore size are desirable. The fibers showed a uniform diameter distribution, and the final scaffold had optimum characteristics with approximately 40% porosity. Images were taken by SEM, and the density and size of the pores were determined with the Image software. After preparation, the scaffold was cross-linked with glutaraldehyde and then washed with glycine and phosphate-buffered saline (PBS) in order to neutralize the residual glutaraldehyde. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay results represented approximately 91.13% viability of the cancer cells on the scaffolds. In order to create clusters, Michigan Cancer Foundation-7 (MCF-7, breast cancer cell line) and Michigan Cancer Foundation-10A (MCF-10A, human mammary epithelial cell line) cells were cultured on the scaffold in a 24-well plate for five days.
Then, we exposed the clusters to 808 nm laser diode radiation to investigate the effect of the laser on the tumor at different powers and exposure times. Under the same conditions, the cancer cells lost their viability more than the healthy ones. In conclusion, laser therapy is a viable method to destroy the target cells while having a minimal effect on healthy tissues and cells, and it can address the limitations of other cancer treatment methods.
Keywords: breast cancer, electrospun scaffold, polycaprolactone, laser diode, cancer treatment
Procedia PDF Downloads 140
1372 Structural Analysis of Phase Transformation and Particle Formation in Metastable Metallic Thin Films Grown by Plasma-Enhanced Atomic Layer Deposition
Authors: Pouyan Motamedi, Ken Bosnick, Ken Cadien, James Hogan
Abstract:
Growth of conformal ultrathin metal films has attracted a considerable amount of attention recently. Plasma-enhanced atomic layer deposition (PEALD) is a method capable of growing conformal thin films at low temperatures, with exemplary control over thickness. The authors have recently reported on the growth of metastable epitaxial nickel thin films via PEALD, along with a comprehensive characterization of the films and a study of the relationship between the growth parameters and the film characteristics. The goal of the current study is to use these films as a case study to investigate temperature-activated phase transformation and agglomeration in ultrathin metallic films. For this purpose, metastable hexagonal nickel thin films were annealed using a controlled heating/cooling apparatus. The transformations in the crystal structure were observed via in-situ synchrotron X-ray diffraction. The samples were annealed to various temperatures in the range of 400-1100 °C. The onset and progression of particle formation were studied in-situ via laser measurements. In addition, a four-point probe measurement tool was used to record the changes in the resistivity of the films, which is affected by phase transformation as well as by roughening and agglomeration. Thin films annealed at various temperature steps were then studied via atomic force microscopy, scanning electron microscopy, and high-resolution transmission electron microscopy, in order to get a better understanding of the correlated mechanisms through which phase transformation and particle formation occur. The results indicate that the onset of the hcp-to-bcc transformation is at 400 °C, while particle formation commences at 590 °C. If the annealed films are quenched after transformation, but prior to agglomeration, they show a noticeable drop in resistivity.
This can be attributed to the fact that the hcp films are grown epitaxially and are under severe tensile strain, and annealing leads to relaxation of the mismatch strain. In general, the results shed light on the nature of structural transformation in nickel thin films and in metallic thin films more generally.
Keywords: atomic layer deposition, metastable, nickel, phase transformation, thin film
Procedia PDF Downloads 327
1371 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence
Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács
Abstract:
The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both the SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility
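The training samples for the network above come from simulated fOU paths. As a simplified stand-in, the sketch below simulates an ordinary Ornstein-Uhlenbeck process by Euler-Maruyama using a standard Wiener increment; note this is an assumption-laden simplification, since the fractional version requires correlated (fractional Gaussian) noise and does not reproduce the rough paths the paper targets:

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, seed=0):
    """Euler-Maruyama discretization of dX = theta*(mu - X) dt + sigma dW."""
    rng = random.Random(seed)
    path = [x0]
    x = x0
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_steps):
        # Mean-reverting drift plus an independent Gaussian increment.
        x += theta * (mu - x) * dt + sigma * sqrt_dt * rng.gauss(0, 1)
        path.append(x)
    return path

# A mean-reverting path pulled toward mu = 0.5, e.g. a correlation level.
path = simulate_ou(theta=2.0, mu=0.5, sigma=0.1, x0=0.0, dt=0.01, n_steps=1000)
```

Generating many such paths over a grid of (theta, mu, sigma) values, and replacing the Wiener increments with fractional Gaussian noise, yields the kind of labeled training set the paper's neural estimator consumes.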
Procedia PDF Downloads 117
1370 Investigation of Leishmaniasis, Babesiosis, Ehrlichiosis, Dirofilariasis, and Hepatozoonosis in Referred Dogs to Veterinary Hospitals in Tehran, 2022
Authors: Mohamad Bolandmartabe, Nafiseh Hassani, Saeed Abdi Darake, Maryam Asghari
Abstract:
Dogs are highly susceptible to diseases, nutritional problems, toxins, and parasites, and parasitic infections are among the most common causes of hardship in their lives. Important internal parasites include worms (such as roundworms and tapeworms) and protozoa, which can lead to anemia in dogs. Important bloodborne parasites in dogs include microfilariae and adult forms of Dirofilaria immitis, Dipetalonema reconditum, Babesia, Trypanosoma, Hepatozoon, Leishmania, Ehrlichia, and Hemobartonella. Babesia and Hemobartonella reside inside red blood cells and cause regenerative anemia by directly destroying them. Hepatozoon, Leishmania, and Ehrlichia reside within white blood cells and can infiltrate other tissues, such as the liver and lymph nodes. Since intermediate hosts are more commonly found in the open environment, the prevalence of parasites is higher in stray and free-roaming dogs than in pet dogs; pet dogs, with better care, hygiene, and a predominantly indoor life, are less exposed to internal and external parasites and thus less likely to be affected by them. Among these parasites, Leishmania is of particular importance because it is shared between dogs and humans, causing the dangerous disease known as visceral leishmaniasis or kala-azar, as well as cutaneous leishmaniasis. Moreover, dogs can act as reservoirs and spread the disease agent within human communities, so timely and accurate diagnosis of these diseases in dogs can be highly beneficial in preventing their occurrence in humans. In this article, we employed the Giemsa staining technique under a light microscope for the identification of bloodborne parasites in dogs. However, considering the negative impact of these parasites on the natural life of dogs, the development of chronic diseases, and the gradual loss of the animal's well-being, rapid and timely diagnosis is essential.
Serological methods and PCR are available for the diagnosis of certain parasites; these methods offer high sensitivity and desirable diagnostic characteristics. Therefore, this research aims to investigate the molecular aspects of bloodborne parasites in dogs referred to veterinary hospitals in Tehran.
Keywords: leishmaniasis, babesiosis, ehrlichiosis, dirofilariasis, hepatozoonosis
Procedia PDF Downloads 98
1369 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City
Authors: Sultan Ahmad Azizi, Gaurang J. Joshi
Abstract:
Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems such as organized bus services, most metropolitan cities have an unsustainably low share of public transport. Indian metropolitan cities have failed to maintain a balanced mode share across travel modes in the absence of a timely introduction of mass transit of the required capacity and quality. As a result, personalized travel modes such as two-wheelers have become the principal modes of travel, causing significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. The challenge for transit planning authorities, however, is to design a transit system that may attract people to switch from their existing, rather convenient modes of travel, given household socio-economic characteristics and the prevailing travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. Deterioration of the city's public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads. The inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS.
Estimation of the shift to bus transit indicates that, on average, 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved. Car users, however, are not expected to shift to the bus transit system.
Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport
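The multinomial logit machinery used above can be sketched in a few lines. The paper's SPSS-calibrated coefficients are not reported in this abstract, so every constant, coefficient, and trip attribute below is hypothetical; mode-choice probabilities follow from a softmax over the systematic utilities:

```python
import numpy as np

# Hypothetical specification (NOT the paper's calibrated values):
# systematic utility V = ASC + b_time * travel_time + b_cost * travel_cost
modes = ["two_wheeler", "car", "auto_rickshaw", "bus"]
asc   = np.array([0.0, -0.8, -0.5, -1.2])    # alternative-specific constants
b_time, b_cost = -0.05, -0.02                # per minute, per rupee

time = np.array([20.0, 25.0, 30.0, 40.0])    # illustrative trip attributes
cost = np.array([15.0, 60.0, 40.0, 10.0])

v = asc + b_time * time + b_cost * cost      # systematic utilities
p = np.exp(v - v.max())                      # numerically stable softmax
p /= p.sum()                                 # MNL choice probabilities

for m, share in zip(modes, p):
    print(f"{m:14s} {share:.3f}")
```

In a transit-shift study, the calibrated model is re-evaluated with improved bus attributes (lower in-vehicle time, higher service-quality constant) and the change in the bus probability, aggregated over sampled households, gives the estimated shift.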
Procedia PDF Downloads 260
1368 Design and Integration of an Energy Harvesting Vibration Absorber for Rotating System
Authors: F. Infante, W. Kaal, S. Perfetto, S. Herold
Abstract:
In the last decade, the demand for wireless sensors and low-power electric devices for condition monitoring of mechanical structures has increased strongly. Networks of wireless sensors can potentially be applied in a huge variety of applications. Due to the reduction of both size and power consumption of electric components and the increasing complexity of mechanical systems, interest in creating dense sensor-node networks has grown markedly. Nevertheless, with the development of large sensor networks with numerous nodes, the critical problem of powering them is drawing more and more attention. Batteries are not a valid option given their limited lifetime, their size, and the effort involved in replacing them. Among possible durable power sources usable in mechanical components, vibration is a suitable source for the amount of power required to feed a wireless sensor network. For this purpose, energy harvesting from structural vibrations has received much attention in the past few years. Suitable vibrations can be found in numerous mechanical environments, including moving automotive structures and household appliances, but also civil engineering structures such as buildings and bridges. Similarly, the dynamic vibration absorber (DVA) is one of the most widely used devices to mitigate unwanted vibration of structures. This device transfers the primary structural vibration to an auxiliary system, so that the related energy is effectively localized in the secondary, less sensitive structure. The additional benefit of harvesting part of that energy can then be obtained by implementing dedicated components. This paper describes the design process of an energy harvesting tuned vibration absorber (EHTVA) for rotating systems using piezoelectric elements: the energy of the vibration is converted into electricity rather than dissipated.
The proposed device is designed to mitigate torsional vibrations, as with a conventional rotational TVA, while harvesting energy as a power source for immediate use or storage. The resulting rotational multi-degree-of-freedom (MDOF) system is first reduced to an equivalent single-degree-of-freedom (SDOF) system. Den Hartog's theory is used to evaluate the optimal mechanical parameters of the initial DVA for the SDOF system thus defined. The performance of the TVA is assessed operationally, and the vibration reduction at the original resonance frequency is measured. The design is then modified for the integration of active piezoelectric patches without detuning the TVA. In order to estimate the real power generated, a complex storage circuit is implemented: a DC-DC step-down converter is connected to the device through a rectifier to return a fixed output voltage. Introducing a large capacitor, the energy stored is measured at different frequencies. Finally, the electromechanical prototype is tested and validated, achieving the reduction and harvesting functions simultaneously.
Keywords: energy harvesting, piezoelectricity, torsional vibration, vibration absorber
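For the SDOF reduction mentioned above, Den Hartog's classical result for an undamped primary structure gives the absorber optima in closed form. A minimal sketch (the mass ratio is chosen purely for illustration, not taken from the paper):

```python
import math

def den_hartog_optimal(mu):
    """Den Hartog's classical optimum for a TVA on an undamped primary
    structure, given mass ratio mu = m_absorber / m_primary:
      tuning ratio   f    = 1 / (1 + mu)
      damping ratio  zeta = sqrt(3 mu / (8 (1 + mu)^3))"""
    f_opt = 1.0 / (1.0 + mu)
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return f_opt, zeta_opt

# Example: absorber inertia equal to 5% of the reduced primary inertia
mu = 0.05
f_opt, zeta_opt = den_hartog_optimal(mu)
print(f"f_opt = {f_opt:.4f}, zeta_opt = {zeta_opt:.4f}")
# → f_opt = 0.9524, zeta_opt = 0.1273
```

With the primary (torsional) natural frequency known, the optimal absorber stiffness and damping follow directly from f_opt and zeta_opt; the piezoelectric patches are then added without shifting this tuning.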
Procedia PDF Downloads 146
1367 Dialectical Behavior Therapy in Managing Emotional Dysregulation, Depression, and Suicidality in Autism Spectrum Disorder Patients: A Systematic Review
Authors: Alvin Saputra, Felix Wijovi
Abstract:
Background: Adults with Autism Spectrum Disorder (ASD) often experience emotional dysregulation and heightened suicidality. Dialectical Behavior Therapy (DBT) and Radically Open DBT (RO-DBT) have shown promise in addressing these challenges, though research on their effectiveness in ASD populations remains limited. This systematic review aims to evaluate the impact of DBT and RO-DBT on emotional regulation, depression, and suicidality in adults with ASD. Methods: A systematic review was conducted by searching databases such as PubMed, PsycINFO, and Scopus for studies published on DBT and RO-DBT interventions in adults with Autism Spectrum Disorder (ASD). Inclusion criteria were peer-reviewed studies that reported on emotional regulation, suicidality, or depression outcomes. Data extraction focused on sample characteristics, intervention details, and outcome measures. Quality assessment was performed using standard systematic review criteria to ensure reliability and relevance of findings. Results: Four studies comprising a total of 343 participants were included in this review. DBT and RO-DBT interventions demonstrated a medium effect size (Cohen's d = 0.53) in improving emotional regulation for adults with ASD, with ASD participants achieving significantly better outcomes than non-ASD individuals. RO-DBT was particularly effective in reducing maladaptive overcontrol, though high attrition and a predominantly White British sample limited generalizability. At end-of-treatment, DBT significantly reduced suicidal ideation (z = −2.24; p = 0.025) and suicide attempts (z = −3.15; p = 0.002) compared to treatment as usual (TAU), although this effect did not persist at 12 months. Depression severity decreased with DBT (z = −1.99; p = 0.046), maintaining significance at follow-up (z = −2.46; p = 0.014). No significant effects were observed for social anxiety, and two suicides occurred in the TAU group.
Conclusions: DBT and RO-DBT show potential efficacy in reducing emotional dysregulation, suicidality, and depression in adults with ASD, though the effects on suicidality may diminish over time. High dropout rates and limited sample diversity suggest further research is needed to confirm long-term benefits and improve applicability across broader populations.
Keywords: dialectical behaviour therapy, emotional dysregulation, autism spectrum disorder, suicidality
Procedia PDF Downloads 4
1366 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets
Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu
Abstract:
Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low temporal resolution of low-orbit SAR and the need for SAR data of high temporal resolution, geosynchronous orbit (GEO) SAR is receiving more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so for moving marine vessels the utility and efficacy of GEO SAR remain uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The simulator is a geometry-based radar imaging simulator that focuses on geometric fidelity rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Reading the 3D model, including the ship's rotations (pitch, yaw, and roll) and velocity (speed and direction), and extracting the small primitives (triangles) visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since the simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling.
Since the usual ‘stop and go’ model is not valid for GEO SAR, the range model must be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range Doppler method. Numerical simulations of a fishing boat and a cargo ship are given: GEO SAR images for different ship attitudes, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
Keywords: GEO SAR, radar, simulation, ship
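Why the ‘stop and go’ approximation breaks down at GEO altitude can be seen from a back-of-the-envelope sketch. The geometry below is deliberately simplified (nadir-looking satellite over a flat scene, illustrative ship speed — none of it taken from the paper): the two-way propagation delay alone is about a quarter of a second, during which both the pulse and the target keep moving.

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
R_GEO = 42_164_000.0     # geostationary orbital radius, m
R_E = 6_371_000.0        # mean Earth radius, m

def slant_range(t, v_ship=10.0):
    """Slant range from a satellite fixed above the scene to a ship
    moving on the surface at v_ship m/s along x (simplified geometry)."""
    sat = np.array([0.0, 0.0, R_GEO - R_E])   # satellite above the scene
    ship = np.array([v_ship * t, 0.0, 0.0])   # ship position at time t
    return np.linalg.norm(sat - ship)

t = np.linspace(0.0, 100.0, 5)                # 100 s of a long GEO aperture
r = np.array([slant_range(ti) for ti in t])   # range history, m
delay = 2.0 * r / C                           # two-way propagation delay, s
print(delay[0])  # ~0.24 s: far from instantaneous, so 'stop and go' fails
```

At 10 m/s the ship moves over two meters while a single pulse is in flight, and over the hundreds of seconds of a GEO aperture the range history must therefore be modeled continuously rather than frozen per pulse.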
Procedia PDF Downloads 175
1365 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis
Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik
Abstract:
Prostate cancer is annually the most common newly diagnosed cancer among men. An extensive body of evidence suggests that the traditional serum prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity, resulting in vast over-diagnosis and overtreatment. Thus, early-stage detection of prostate cancer (PCa) undisputedly plays a critical role in successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycan differences enable a distinction to be made between physiological and pathological states and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology making use of lectin-based glyco-profiling is one of the most promising strategies for fast and efficient analysis of glycoproteins. Proof-of-concept experiments were performed based on a sandwich assay employing an anti-PSA antibody and an aptamer as capture molecules, followed by lectin glycoprofiling. We present a lectin-based biosensing assay for glycoprofiling of the serum biomarker PSA using different biosensor and biochip platforms, such as label-free surface plasmon resonance (SPR) and a fluorescently labeled microarray. The results suggest significant differences in the interaction of particular lectins with PSA. Antibody-based assays are frequently associated with sensitivity, reproducibility, and cross-reactivity issues; aptamers provide remarkable advantages over antibodies due to their nucleic acid origin, their stability, and the absence of glycosylation.
All these data are a further step toward the construction of highly selective, sensitive, and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays for other glycosylated disease biomarkers.
Keywords: biomarker, glycosylation, lectin, prostate cancer
Procedia PDF Downloads 404