Search results for: housing characteristics

1328 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea

Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim

Abstract:

Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. These blooms have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict algae concentration in the ocean using bio-optical algorithms applied to satellite ocean color images. However, accurate estimation of algal blooms remains challenging because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the seas around Korea. The method employed GOCI images of the water-leaving radiances centered at 443 nm, 490 nm, and 660 nm, as well as observed weather data (i.e., humidity, temperature, and atmospheric pressure), as the database used to capture the optical characteristics of algae and train the deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. The deep learning model was trained with a backpropagation learning strategy. The established method was tested and compared with the performance of the GOCI Data Processing System (GDPS), which is based on standard image processing and optical algorithms. The model estimated algae concentration better than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing. Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
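As a minimal sketch of the pipeline outlined above (the abstract does not specify its architecture, so the patch size, layer widths, and variable names below are illustrative assumptions), a CNN feature extractor on the three radiance bands combined with an ANN regressor that also takes the weather inputs could look like this:

```python
# Hedged sketch of the CNN-feature-extractor + ANN-regressor pipeline described above.
# Band count, patch size and layer widths are illustrative assumptions, not values from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_algae_model(patch_size=32, n_bands=3, n_weather=3):
    # CNN branch: extracts spatial features from GOCI radiance patches (443, 490, 660 nm)
    img_in = layers.Input(shape=(patch_size, patch_size, n_bands))
    x = layers.Conv2D(16, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Weather branch: humidity, temperature, atmospheric pressure
    met_in = layers.Input(shape=(n_weather,))

    # ANN regressor on the concatenated features
    h = layers.Concatenate()([x, met_in])
    h = layers.Dense(64, activation="relu")(h)
    out = layers.Dense(1, activation="linear")(h)  # algae concentration (mg/m^3)

    model = models.Model([img_in, met_in], out)
    model.compile(optimizer="adam", loss="mse")  # trained by backpropagation
    return model

model = build_algae_model()
model.summary()
```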

Keywords: deep learning, algae concentration, remote sensing, satellite

Procedia PDF Downloads 184
1327 LTE Modelling of a DC Arc Ignition on Cold Electrodes

Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov

Abstract:

The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magneto-hydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. The importance of these near-electrode regions is non-trivial, since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach that accounts for secondary electron emission. The interaction between the arc and the anode, on the other hand, is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is on the order of a few hundred volts. Next, a spot with high temperature forms at the cathode tip, followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also presents an important increase of the temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts, and both the electrode and plasma temperatures reach the steady solution. The results after some seconds are similar to those presented for thermionic cathodes.

Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes

Procedia PDF Downloads 125
1326 Feasibility Study for Implementation of Geothermal Energy Technology as a Means of Thermal Energy Supply for Medium Size Community Building

Authors: Sreto Boljevic

Abstract:

Heating systems based on geothermal energy sources are becoming increasingly popular among commercial/community buildings as the management of these buildings looks for a more efficient and environmentally friendly way to run the heating system. The thermal energy supply of most European commercial/community buildings at present is provided mainly by energy extracted from natural gas. In order to reduce greenhouse gas emissions and achieve the climate change targets set by the EU, restructuring in the area of thermal energy supply is essential. At present, heating and cooling account for approximately 50% of the EU primary energy supply. Due to its physical characteristics, thermal energy cannot be distributed or exchanged over long distances, contrary to the electricity and gas energy carriers. Compared to the electricity and gas sectors, heating remains largely a black box, with large unknowns for researchers and policymakers. In the literature, a number of documents address policies for promoting renewable energy technology to facilitate heating for residential/community/commercial buildings and assess the balance between heat supply and heat savings. Ground source heat pump (GSHP) technology has been an extremely attractive alternative to the traditional electric and fossil fuel space heating equipment used to supply thermal energy for residential/community/commercial buildings. The main purpose of this paper is to create an algorithm, using an analytical approach, that enables a feasibility study regarding the implementation of GSHP technology in a community building with an existing fossil-fueled heating system. The main results obtained by the algorithm will enable building management and GSHP system designers to define the optimal size of the system with regard to the technical, environmental, and economic impacts of the system implementation, including the payback period. In addition, the algorithm is created so that it can be used for feasibility studies of many different types of buildings. The algorithm is tested on a building that was built in 1930 and is used as a church located in Cork city. The heating of the building is currently provided by a 105 kW gas boiler.
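A highly simplified sketch of the kind of payback calculation such a feasibility algorithm might perform is shown below; the heat demand, fuel prices, efficiencies, and capital cost are illustrative placeholders, not figures from the study.

```python
# Illustrative payback-period estimate for replacing a gas boiler with a GSHP.
# All numbers (heat demand, prices, COP, boiler efficiency, capital cost) are
# placeholder assumptions for demonstration only.
def simple_payback(annual_heat_kwh, gas_price, elec_price,
                   boiler_efficiency, gshp_cop, gshp_capital_cost):
    gas_cost = annual_heat_kwh / boiler_efficiency * gas_price   # current fuel bill
    gshp_cost = annual_heat_kwh / gshp_cop * elec_price          # electricity bill with GSHP
    annual_saving = gas_cost - gshp_cost
    return gshp_capital_cost / annual_saving if annual_saving > 0 else float("inf")

years = simple_payback(annual_heat_kwh=80_000, gas_price=0.08, elec_price=0.25,
                       boiler_efficiency=0.85, gshp_cop=4.0, gshp_capital_cost=45_000)
print(f"Estimated payback period: {years:.1f} years")
```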

Keywords: GSHP, greenhouse gas emission, low-enthalpy, renewable energy

Procedia PDF Downloads 221
1325 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure a high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases when they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease at an early stage from its symptoms are not readily available in remote regions. Therefore, this study specifically addressed early detection of leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. The image dataset was then pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
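A minimal sketch of a VGG-based classifier of the kind described above is shown below; the class labels, image size, and use of ImageNet transfer learning are assumptions, since the abstract does not state the exact configuration.

```python
# Sketch of a VGG-based classifier for sugarcane leaf images, as outlined above.
# Class names, image size and training settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 4  # e.g. healthy, leaf scald, red rot, eyespot (assumed labels)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # use VGG as a fixed feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training/testing data would come from tf.keras.utils.image_dataset_from_directory
# pointed at the web-scraped training set and the field-collected test set.
```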

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 116
1324 Predicting Mass-School-Shootings: Relevance of the FBI’s ‘Threat Assessment Perspective’ Two Decades Later

Authors: Frazer G. Thompson

Abstract:

The 1990s in America ended with a mass-school-shooting (at least four killed by gunfire, excluding the perpetrator(s)) at Columbine High School in Littleton, Colorado. Post-event, many demanded that government and civilian experts develop a 'profile' of the potential school shooter in order to identify and preempt likely future acts of violence. This grounded theory research study seeks to explore the validity of the original hypotheses proposed by the Federal Bureau of Investigation (FBI) in 2000, as they relate to the commonality of disclosure by perpetrators of mass-school-shootings, by evaluating fourteen mass-school-shooting events between 2000 and 2019 at locations around the United States. Methods: The strategy of inquiry investigates case files, public records, witness accounts, and available psychological profiles of the shooters. The research methodology includes one-on-one interviews with members of the FBI's Critical Incident Response Group, seeking perspective on commonalities between individuals, specifically disclosure of intent pre-event. Results: The research determined that school shooters do not 'unfailingly' notify others of their plans. However, in nine of the fourteen mass-school-shooting events analyzed, the perpetrator did inform a third party of their intent pre-event in some form of written, oral, or electronic communication. In the remaining five instances, the so-called 'red-flag' indicators of the potential for an event to occur were profound and, in themselves, might be interpreted as notification to others of an imminent deadly threat. Conclusion: The data indicate that the conclusions drawn in the FBI's threat assessment perspective published in 2000 are relevant and current. There is evidence that, despite potential 'red-flag' indicators which may or may not include a variety of other characteristics, perpetrators of mass-school-shooting events are likely to share their intentions with others through some form of direct or indirect communication. More significantly, the implications of this research suggest that society is often informed of potential danger pre-event but lacks any equitable means by which to disseminate, prevent, intervene, or otherwise act in a meaningful way on said revelation.

Keywords: columbine, FBI profiling, guns, mass shooting, mental health, school violence

Procedia PDF Downloads 120
1323 Yield and Physiological Evaluation of Coffee (Coffea arabica L.) in Response to Biochar Applications

Authors: Alefsi D. Sanchez-Reinoso, Leonardo Lombardini, Hermann Restrepo

Abstract:

Colombian coffee is recognized worldwide for its mild flavor and aroma. Its cultivation generates a large amount of waste, such as fresh pulp, which leads to environmental, health, and economic problems. Obtaining biochar (BC) by pyrolysis of coffee pulp and incorporating it into the soil can be a complement to the crop's mineral nutrition. The objective was to evaluate the effect of applying BC obtained from coffee pulp on the physiology and agronomic performance of a Castillo variety coffee crop (Coffea arabica L.). The research was developed as a field experiment in Tolima, using a three-year-old commercial coffee crop. Four doses of BC (0, 4, 8 and 16 t ha-1) and four levels of chemical fertilization (CF) (0%, 33%, 66% and 100% of the nutritional requirements) were evaluated. Three groups of variables were recorded during the experiment: i) physiological parameters such as gas exchange, the maximum quantum yield of PSII (Fv/Fm), biomass, and water status; ii) physical and chemical characteristics of the soil in a commercial coffee crop; and iii) physicochemical and sensorial parameters of roasted beans and coffee beverages. The results indicated a positive effect in plants with 8 t ha-1 BC and fertilization levels of 66% and 100%; a positive effect was likewise observed in coffee trees treated with 8 t ha-1 BC and 100% CF. In addition, the application of 16 t ha-1 BC increased the soil pH and microbial respiration and reduced the apparent density and the state of aggregation of the soil compared to 0 t ha-1 BC. Applications of 8 and 16 t ha-1 BC with 66%-100% chemical fertilization registered greater sensitivity to the aromatic compounds of roasted coffee beans in the electronic nose. Amendments of BC between 8 and 16 t ha-1 and CF between 66% and 100% increased the content of total soluble solids (TSS), reduced the pH, and increased the titratable acidity in beverages of roasted coffee beans. In conclusion, 8 t ha-1 BC from coffee pulp can be an alternative to supplement the nutrition of coffee seedlings and trees. Applications between 8 and 16 t ha-1 BC support coffee soil management strategies and help the use of solid waste. BC as a complement to chemical fertilization showed a positive effect on the aromatic profile obtained for roasted coffee beans and on cup quality attributes.

Keywords: crop yield, cup quality, mineral nutrition, pyrolysis, soil amendment

Procedia PDF Downloads 111
1322 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions in the manufacture of hydraulic valves within certain time periods can be identified by applying the concept drift method. Furthermore, a classification model is developed to evaluate the feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
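The sketch below illustrates the general idea of training an AdaBoost classifier and then retraining on only the most important features, in the spirit of the workflow described above; the synthetic data, feature counts, and threshold are assumptions, not the Bosch data or the study's settings.

```python
# Minimal sketch of AdaBoost-based leakage classification with feature selection.
# Feature names, counts and the "top-10" cut-off are illustrative assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))   # stand-in for gauge, assembly and end-of-line features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # leak / no leak

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Keep only the most important (stable) features and retrain
keep = np.argsort(clf.feature_importances_)[-10:]
clf_small = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr[:, keep], y_tr)

print("all features :", accuracy_score(y_te, clf.predict(X_te)))
print("top-10 only  :", accuracy_score(y_te, clf_small.predict(X_te[:, keep])))
```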

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
1321 Determination of the Walkability Comfort for Urban Green Space Using Geographical Information System

Authors: Muge Unal, Cengiz Uslu, Mehmet Faruk Altunkasa

Abstract:

Walkability relates to the ability of places to connect people with varied destinations within a reasonable amount of time and effort, and to offer visual interest in journeys throughout the network. Thus, the quality of the physical environment and the arrangement of walkways and sidewalks appear to be crucial in influencing pedestrian route choice. Also, proximity, connectivity, and accessibility are significant factors for walkability in terms of an equal opportunity to use public spaces. As a result, there are two important points for walkability: firstly, the place should have a well-planned street network for accessibility, and secondly, it should meet pedestrians' need for comfort. In this respect, this study aims to examine both the physical and bioclimatic comfort levels of the current condition of pedestrian routes with reference to the design criteria of a street for accessing urban green spaces. These aspects have been identified as the main indicators for walkable streets: continuity, materials, slope, bioclimatic condition, walkway width, greenery, and surface. Additionally, the aim was to identify the factors that need to be considered in future guidelines and policies for planning and design in urban spaces, especially streets. The city of Adana, a province of Turkey located in south-central Anatolia, was chosen as the study area. The study workflow can be summarized in four stages: (1) environmental and physical data were collected with reference to the literature and used in a weighted criteria method to determine the importance level of each criterion; (2) environmental characteristics of pedestrian routes obtained from survey studies were evaluated to rank these criteria; (3) each pedestrian route was then given a score reflecting how comfortably it provides access to the park; and (4) finally, the comfortable routes to the park were mapped using GIS. It is hoped that this study will provide an insight into future development planning and design to create a friendlier and more comfortable street environment for users.
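A toy sketch of the weighted-criteria scoring step (stage 3 above) is shown below; the weights and the 1-5 ratings are invented for illustration, not the values derived in the study.

```python
# Sketch of a weighted-criteria comfort score for one pedestrian route, using the
# indicators listed above; the weights and the 1-5 ratings are illustrative assumptions.
weights = {          # relative importance, assumed to come from the literature review
    "continuity": 0.20, "materials": 0.10, "slope": 0.15, "bioclimatic": 0.20,
    "walkway_width": 0.15, "greenery": 0.10, "surface": 0.10,
}

route_ratings = {    # field-survey ratings for one route, on an assumed 1-5 scale
    "continuity": 4, "materials": 3, "slope": 5, "bioclimatic": 2,
    "walkway_width": 4, "greenery": 3, "surface": 4,
}

comfort_score = sum(weights[c] * route_ratings[c] for c in weights)
print(f"Route comfort score: {comfort_score:.2f} / 5")
# Scores computed this way for every route can then be joined to the street
# network layer in GIS to map the most comfortable access routes to the park.
```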

Keywords: comfort level, geographical information system (GIS), walkability, weighted criteria method

Procedia PDF Downloads 313
1320 DEEPMOTILE: Motility Analysis of Human Spermatozoa Using Deep Learning in Sri Lankan Population

Authors: Chamika Chiran Perera, Dananjaya Perera, Chirath Dasanayake, Banuka Athuraliya

Abstract:

Male infertility is a major problem in the world, and it is a neglected and sensitive health issue in Sri Lanka. It can be assessed by analyzing human semen samples. Sperm motility is one of many factors that can be used to evaluate a male's fertility potential. In Sri Lanka, this analysis is performed manually. Manual methods are time-consuming and person-dependent; they can be reliable, but their reliability depends on the expert. Machine learning and deep learning technologies are currently being investigated to automate spermatozoa motility analysis, but existing automatic methods are unreliable, tending to produce false positive results and false detections. Current automatic methods support different techniques, and some of them are very expensive. Due to the geographical variance in spermatozoa characteristics, current automatic methods are not reliable for motility analysis in Sri Lanka. The suggested system, DeepMotile, explores a method to analyze the motility of human spermatozoa automatically and to present it to andrology laboratories in order to overcome the current issues. DeepMotile is a novel deep learning method for analyzing spermatozoa motility parameters in the Sri Lankan population. To implement the current approach, Sri Lankan patient data were collected anonymously as a dataset, and glass slides were used as a low-cost technique to analyze the semen samples. The core problem was identified as microscopic object detection and tracking. YOLOv5 was customized and used as the object detector, and it achieved 94% mAP (mean average precision), 86% precision, and 90% recall on the gathered dataset. StrongSORT was used as the object tracker, and it was validated with andrology experts due to the unavailability of annotated ground truth data. Furthermore, this research has identified many potential directions for further investigation, and andrology experts can use this system to analyze motility parameters with realistic accuracy.
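The sketch below shows the detection step and how motility parameters can be computed from a resulting track, under stated assumptions; the frame rate, pixel scale, and the example track are placeholders, and the StrongSORT association step is omitted (the stock YOLOv5 hub model stands in for the customized detector).

```python
# Sketch of the detection step and of motility-parameter computation from a track.
# Acquisition settings and the track itself are illustrative; tracking is omitted.
import numpy as np
import torch

# 1. Detection: a customized YOLOv5 model (here the stock hub model as a stand-in)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
# detections = model(frame)  # run per video frame; boxes would feed the StrongSORT tracker

# 2. Motility parameters from one tracked sperm head (centroid per frame, in pixels)
track = np.array([[10, 10], [12, 11], [15, 13], [17, 16], [20, 18]], dtype=float)
fps, um_per_px = 25.0, 0.5   # assumed acquisition settings

step_lengths = np.linalg.norm(np.diff(track, axis=0), axis=1) * um_per_px
duration = (len(track) - 1) / fps
vcl = step_lengths.sum() / duration                                   # curvilinear velocity
vsl = np.linalg.norm(track[-1] - track[0]) * um_per_px / duration     # straight-line velocity

print(f"VCL = {vcl:.1f} um/s, VSL = {vsl:.1f} um/s, LIN = {vsl / vcl:.2f}")
```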

Keywords: computer vision, deep learning, convolutional neural networks, multi-target tracking, microscopic object detection and tracking, male infertility detection, motility analysis of human spermatozoa

Procedia PDF Downloads 107
1319 Web Map Service for Fragmentary Rockfall Inventory

Authors: M. Amparo Nunez-Andres, Nieves Lantada

Abstract:

Rockfalls are one of the most harmful geological risks. They cause both economic losses, through damage to buildings and infrastructure, and personal ones. Therefore, in order to estimate the risk to the exposed elements, it is necessary to know the mechanism of this kind of event, from the characteristics of the rock walls to the propagation of the fragments generated by the initially detached rock mass. In the framework of the RockModels research project, several inventories of rockfalls were carried out along the northeast of the Iberian Peninsula and on the island of Mallorca. These inventories have general information about the events, but the important fact is that they contain detailed information about fragmentation. Specifically, the IBSD (In Situ Block Size Distribution) is obtained by photogrammetry from drone or TLS (Terrestrial Laser Scanner) surveys, and the RBSD (Rock Block Size Distribution) from the volumes of the fragments in the deposit measured by hand. In order to share all this information with other scientists, engineers, members of civil protection, and stakeholders, a platform accessible from the internet and following interoperability standards is necessary. Throughout the process, open-source software has been used: PostGIS 2.1, GeoServer, and the OpenLayers library. In the first step, a spatial database was implemented to manage all the information. We used the INSPIRE data specifications for natural risks, adding specific and detailed data about the fragmentation distribution. The next step was to develop a WMS with GeoServer; a previous phase was the creation of several views in PostGIS to show the information at different scales of visualization and with different degrees of detail. In the first view, the sites are identified with a point, and basic information about the rockfall event is provided. At the next zoom level, at medium scale, the convex hull of the rockfall appears with its real shape, and the source of the event and the fragments are represented by symbols. The queries at this level offer greater detail about the movement. Finally, the third level shows all elements (deposit, source, and blocks) at their real size, where possible, and in their real locations. The last task was the publication of all the information on a web mapping site (www.rockdb.upc.edu), with data classified by levels using JavaScript libraries such as OpenLayers.
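As a rough illustration of the first PostGIS step described above, a scale-dependent view could be created as sketched below; the database, table, and column names are assumptions, not the actual RockModels schema.

```python
# Sketch of creating one of the scale-dependent PostGIS views described above.
# Database, table and column names are assumptions, not the project's real schema.
import psycopg2

conn = psycopg2.connect(dbname="rockdb", user="gis", password="***", host="localhost")
with conn, conn.cursor() as cur:
    # Small-scale view: one point per rockfall site with basic event information
    cur.execute("""
        CREATE OR REPLACE VIEW rockfall_sites AS
        SELECT event_id, event_date, total_volume_m3,
               ST_Centroid(source_geom) AS geom
        FROM rockfall_events;
    """)
# Such views are then published as GeoServer layers and requested by the OpenLayers
# client through standard WMS GetMap calls, one layer per zoom level.
```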

Keywords: geological risk, web mapping, WMS, rockfalls

Procedia PDF Downloads 160
1318 Role of Internal and External Factors in Preventing Risky Sexual Behavior, Drug and Alcohol Abuse

Authors: Veronika Sharok

Abstract:

The relevance of research on the psychological determinants of risky behaviors stems from the high prevalence of such behaviors, particularly among youth. Risky sexual behavior, including unprotected and casual sex and frequent change of sexual partners, as well as drug and alcohol use, leads to negative social consequences and contributes to the spread of HIV infection and other sexually transmitted diseases. Data were obtained from 302 respondents aged 15-35, who were divided into 3 empirical groups: persons prone to risky sexual behavior, drug users, and alcohol users; and 3 control groups: individuals who are not prone to risky sexual behavior, persons who do not use drugs, and respondents who do not use alcohol. For processing, we used a qualitative method for nominal data (chi-squared test) and quantitative methods for metric data (Student's t-test, Fisher's F-test, and Pearson's r correlation test). Statistical processing was performed using Statistica 6.0 software. The study identifies two groups of factors that prevent risky behaviors. Internal factors include moral and value attitudes; the significance of existential values: love, life, self-actualization, and the search for the meaning of life; understanding independence as responsibility for one's freedom and the ability to become attached to someone or something up to the point when this relationship starts restricting that freedom and becomes vital; awareness of risky behaviors as dangerous for the person and for others; and self-acknowledgement. External factors (which prevent risky behaviors in the absence of the internal ones) include: the absence of risky behaviors among friends and relatives; socio-demographic characteristics (middle class, marital status); awareness of the negative consequences of risky behaviors; and inaccessibility of psychoactive substances. These factors are common to proneness to each type of risky behavior, because such proneness is usually caused by the same reasons. It should be noted that prevention of risky behavior based only on the elimination of external factors is not as effective as it could be if more attention were paid to internal factors. The results obtained in the study can be used to develop training programs and activities for the prevention of risky behaviors, drawing on the values that prevent such behaviors and promoting a healthy lifestyle.

Keywords: existential values, prevention, psychological features, risky behavior

Procedia PDF Downloads 256
1317 Pediatric Hearing Aid Use: A Study Based on Data Logging Information

Authors: Mina Salamatmanesh, Elizabeth Fitzpatrick, Tim Ramsay, Josee Lagacé, Lindsey Sikora, JoAnne Whittingham

Abstract:

Introduction: Hearing loss (HL) is one of the most common disorders that presents at birth and in early childhood. Universal newborn hearing screening (UNHS) has been adopted based on the assumption that, with early identification of HL, children will have access to optimal amplification and intervention at younger ages, therefore taking advantage of the brain's maximal plasticity. One particular challenge for parents in the early years is achieving consistent hearing aid (HA) use, which is critical to the child's development and constitutes the first step in the rehabilitation process. This study examined the consistency of hearing aid use in young children based on data logging information documented during audiology sessions in the first three years after hearing aid fitting. Methodology: The first 100 children who were diagnosed with bilateral HL before 72 months of age from 2003 to 2015 in a pediatric audiology clinic and who had at least two hearing aid follow-up sessions with available data logging information were included in the study. Data from each audiology session (age of the child at the session, average hours of use per day for each ear in the first three years after HA fitting) were collected. Clinical characteristics (degree of hearing loss, age at HA fitting) were also documented to further the understanding of factors that impact HA use. Results: Preliminary analysis of the results of the first 20 children shows that all of them (100%) have at least one data logging session recorded in the clinical audiology system (Noah). Of the 20 children, 17 (85%) have three data logging events recorded in the first three years after HA fitting. Based on the statistical analysis of the first 20 cases, the median hours of use in the first follow-up session after hearing aid fitting is 3.9 hours for the right ear, with an interquartile range (IQR) of 10.2 h, and 4.4 hours for the left ear, with an IQR of 9.7 h. In the first session, 47% of the children used their hearing aids ≤5 hours a day, 12% used them between 5 and 10 hours, and 22% used them ≥10 hours. However, these children showed increased use by the third follow-up session, with a median (IQR) of 9.1 (2.5) hours for the right ear and 8.2 (5.6) hours for the left ear. By the third follow-up session, 14% of children used hearing aids ≤5 hours, while 38% of children used them ≥10 hours. Based on the preliminary results, factors like age and level of HL significantly impact the hours of use. Conclusion: The use of data logging information to assess the actual hours of HA use provides an opportunity to examine: a) the challenges faced by families of young children with HAs, and b) the factors that impact use in very young children. Data logging, when used collaboratively with parents, can be a powerful tool to identify problems and to encourage and assist families in maximizing their child's hearing potential.
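For readers unfamiliar with how such per-session summaries are derived, the sketch below shows the median and IQR computation on a tiny, made-up stand-in for the clinic's data-logging records; the numbers and column names are illustrative only.

```python
# Minimal sketch of summarising data-logging hours per follow-up session.
# The values and column names are made-up placeholders, not clinic data.
import pandas as pd

def iqr(s):
    return s.quantile(0.75) - s.quantile(0.25)

log = pd.DataFrame({
    "child_id":    [1, 1, 1, 2, 2, 2],
    "session":     [1, 2, 3, 1, 2, 3],
    "hours_right": [3.5, 6.0, 9.2, 4.2, 7.5, 9.0],
    "hours_left":  [4.0, 6.5, 8.1, 4.8, 7.0, 8.4],
})

summary = log.groupby("session")[["hours_right", "hours_left"]].agg(["median", iqr])
print(summary)
```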

Keywords: hearing loss, hearing aid, data logging, hours of use

Procedia PDF Downloads 230
1316 The Context of Teaching and Learning Primary Science to Gifted Students: An Analysis of Australian Curriculum and New South Wales Science Syllabus

Authors: Rashedul Islam

Abstract:

A firmly validated aim of teaching science is to support student enthusiasm for science learning, with a broad interest in scientific issues in later life. This is in keeping with recent developments in Gifted and Talented Education statements, which indicate that gifted students have a renewed interest and natural aptitude in science. Yet the practice of science teaching leaves many students with the feeling that science is difficult and, compared to other school subjects, students' interest in science declines in the final years of primary school. As curricula guide the teaching-learning activities in schools, significant consequences may result from the context of the curricula and syllabi, which are a major feature of educational jurisdictions such as NSW, Australia. The purpose of this study was to explore how the curriculum sets the context for science education as it is practiced in primary schools in Sydney, Australia. This phenomenon was explored through a document review of two publicly available documents, namely the NSW Science Syllabus K-6 and the Australian Curriculum: Foundation - 10 Science. To analyse the data, this qualitative study applied thematic content analysis at three different levels, i.e., first cycle coding, second cycle coding (pattern codes), and thematic analysis. Preliminary analysis revealed teaching-learning practices drawn from eight themes under three phenomena, aligned with teachers' practices and gifted students' learning characteristics based on Gagné's Differentiated Model of Giftedness and Talent (DMGT). From the results, it appears that, overall, the two documents are relatively well placed in terms of identifying the context of teaching and learning primary science for gifted students. However, educators need to make themselves aware of the ways in which the curriculum needs to be adapted to meet gifted students' learning needs in science. The study explores the important phenomena of the teaching-learning context in order to provide gifted students with optimal educational practices, including inquiry-based learning, problem-solving, open-ended tasks, creativity in science, higher order thinking, integration, and challenges. The significance of such a study lies in its potential benefits for schools and for further research in the field of gifted education.

Keywords: teaching primary science, gifted student learning, curriculum context, science syllabi, Australia

Procedia PDF Downloads 422
1315 Assessment of Hypersaline Outfalls via Computational Fluid Dynamics Simulations: A Case Study of the Gold Coast Desalination Plant Offshore Multiport Brine Diffuser

Authors: Mitchell J. Baum, Badin Gibbes, Greg Collecutt

Abstract:

This study details a three-dimensional field-scale numerical investigation conducted for the Gold Coast Desalination Plant (GCDP) offshore multiport brine diffuser. Quantitative assessment of diffuser performance with regard to trajectory, dilution, and mapping of seafloor concentration distributions was conducted for 100% plant operation. The quasi-steady Computational Fluid Dynamics (CFD) simulations were performed using the Reynolds-averaged Navier-Stokes equations with a k-ω shear stress transport turbulence closure scheme. The study complements a field investigation, which measured brine plume characteristics under similar conditions. CFD models used an iterative mesh in a domain with dimensions 400 m long, 200 m wide and an average depth of 24.2 m. Acoustic Doppler current profiler measurements conducted in the companion field study exhibited considerable variability over the water column. The effect of this vertical variability on simulated discharge outcomes was examined. Seafloor slope was also accommodated in the model. Ambient currents varied predominantly in the longshore direction, perpendicular to the diffuser structure. Under these conditions, the alternating port orientation of the GCDP diffuser resulted in simultaneous subjection to co-propagating and counter-propagating ambient regimes. Results from quiescent ambient simulations suggest broad agreement with the empirical scaling arguments traditionally employed in design and regulatory assessments. Simulated dynamic ambient regimes showed that the influence of ambient crossflow upon jet trajectory, dilution, and seafloor concentration is significant. The effect of ambient flow structure and its subsequent influence on jet dynamics is discussed, along with the implications of using these different simulation approaches to inform regulatory decisions.

Keywords: computational fluid dynamics, desalination, field-scale simulation, multiport brine diffuser, negatively buoyant jet

Procedia PDF Downloads 215
1314 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy

Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi

Abstract:

The performance required of material surfaces in wear and corrosion environments cannot be achieved by conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limits their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade five titanium alloy using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit an unusual atomic structure and useful physical and chemical properties. A 3 kW continuous wave ytterbium laser system (YLS), attached to a KUKA robot which controls the movement of the cladding process, was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for hardness, corrosion, and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28°C, using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as the width, height, and Heat Affected Zone (HAZ) of each sample, increased remarkably with increasing laser power due to the laser-material interaction. It was also observed that higher amounts of aluminum and titanium were present in the formation of the composite. The indentation testing reveals that, for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance, and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through in situ metallurgical reactions during the LMD process. The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform, crack-free surface due to the optimized laser process parameters, which led to the refinement of the coatings.

Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance

Procedia PDF Downloads 178
1313 Characterization of Kevlar 29 for Multifunction Applications

Authors: Doaa H. Elgohary, Dina M. Hamoda, S. Yahia

Abstract:

Technical textiles refer to textile materials that are engineered and designed to have specific functionalities and performance characteristics beyond their traditional use as apparel or upholstery fabrics. These textiles are usually developed for their unique properties, such as strength, durability, flame retardancy, chemical resistance, waterproofing, insulation, and other special properties. The development and use of technical textiles are constantly evolving, driven by advances in materials science, manufacturing technologies, and the demand for innovative solutions in various industries. Kevlar 29 is a type of aramid fiber developed by DuPont. It is a high-performance material known for its exceptional strength and resistance to impact, abrasion, and heat. Kevlar 29 belongs to the Kevlar family, which includes different types of aramid fibers. Kevlar 29 is primarily used in applications that require strength and durability, such as ballistic protection and body armor for military and law enforcement personnel. It is also used in the aerospace and automotive industries to reinforce composite materials, as well as in various industrial applications. Two different Kevlar samples coated with copper lithium silicate (CLS) were used; ten different mechanical and physical properties (weight, thickness, tensile strength, elongation, stiffness, air permeability, water permeability, puncture resistance, thermal conductivity, and spray test) were measured to assess their functional performance efficiency. The influence on the different mechanical properties was statistically analyzed using an independent t-test with a significance level of P = 0.05. A radar plot was calculated and evaluated to determine the best-performing sample. The results of the independent t-test showed that all variables were significantly affected by yarn count except water permeability, which showed no significant effect. All properties were evaluated for samples 1 and 2, and a radar chart was used to determine the better-performing sample. The radar chart area was calculated, showing that sample 1 recorded the best performance, followed by sample 2. The surface morphology of the samples and the coating material was determined using a scanning electron microscope (SEM), and Fourier transform infrared spectroscopy measurements were also carried out for the two samples.
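The radar-chart-area comparison mentioned above reduces to summing the triangles between consecutive axes; the sketch below shows that calculation with invented, normalised property scores standing in for the measured values.

```python
# Sketch of the radar-chart area comparison used to rank the two samples.
# The normalised property scores below are invented placeholders, not measured values.
import numpy as np

def radar_area(scores):
    """Area of the polygon spanned by equally spaced radar-chart axes."""
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    angle = 2 * np.pi / n
    # Sum of triangle areas between consecutive axes: 1/2 * r_i * r_{i+1} * sin(angle)
    return 0.5 * np.sin(angle) * np.sum(scores * np.roll(scores, -1))

sample_1 = [0.9, 0.8, 0.85, 0.7, 0.95, 0.8, 0.75, 0.9, 0.85, 0.8]
sample_2 = [0.7, 0.75, 0.8, 0.65, 0.85, 0.7, 0.7, 0.8, 0.75, 0.7]

print("Sample 1 area:", round(radar_area(sample_1), 3))
print("Sample 2 area:", round(radar_area(sample_2), 3))
```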

Keywords: copper lithium silicate, independent t-test, kevlar, technical textiles

Procedia PDF Downloads 81
1312 Bio-Remediation of Lead-Contaminated Water Using Adsorbent Derived from Papaya Peel

Authors: Sahar Abbaszadeh, Sharifah Rafidah Wan Alwi, Colin Webb, Nahid Ghasemi, Ida Idayu Muhamad

Abstract:

The discharge of toxic heavy metals into the environment due to rapid industrialization is a serious pollution problem that has drawn global attention to their adverse impacts on both the structure of ecological systems and human health. Lead, a toxic element that bio-accumulates through the food chain, regularly enters water bodies from the discharges of industries such as plating, mining, battery manufacture, and paint manufacture. The application of conventional methods to decrease and remove Pb(II) ions from wastewater is often restricted by technical and economic constraints. Therefore, the use of various agro-wastes as low-cost bioadsorbents is attractive since they are abundantly available and cheap. In this study, activated carbon from papaya peel (AC-PP), a locally available agricultural waste, was employed to evaluate its Pb(II) uptake capacity from single-solute solutions in sets of batch-mode experiments. To assess the surface characteristics of the adsorbent, scanning electron microscopy (SEM) coupled with energy-dispersive X-ray (EDX) analysis and Fourier transform infrared spectroscopy (FT-IR) were utilized. The amount of Pb(II) removed was determined by atomic absorption spectrometry (AAS). The effects of pH, contact time, initial Pb(II) concentration, and adsorbent dosage were investigated. A pH value of 5 was observed to be the optimum solution pH. The optimum initial Pb(II) concentration for AC-PP was found to be 200 mg/l, where the amount of Pb(II) removed was 36.42 mg/g. At an agitation time of 2 h, the adsorption process using a 100 mg dosage of AC-PP reached equilibrium. The experimental results exhibit the high capability and metal affinity of the modified papaya peel waste, with a removal efficiency of 93.22%. The evaluation results show that the equilibrium adsorption of Pb(II) was best described by the Freundlich isotherm model (R² > 0.93). The experimental results confirmed that AC-PP can potentially be employed as an alternative adsorbent for Pb(II) uptake from industrial wastewater in the design of an environmentally friendly yet economical wastewater treatment process.
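The Freundlich model mentioned above is q_e = K_f * C_e^(1/n); the sketch below shows how such a fit could be performed on batch data, with the (C_e, q_e) pairs being illustrative placeholders rather than the study's measurements.

```python
# Sketch of fitting the Freundlich isotherm q_e = K_f * C_e^(1/n) to batch data;
# the (C_e, q_e) pairs below are illustrative, not the measured values.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

ce = np.array([10, 25, 50, 100, 150, 200], dtype=float)   # equilibrium conc. (mg/L)
qe = np.array([8, 14, 20, 28, 33, 36], dtype=float)       # uptake (mg/g)

(kf, n), _ = curve_fit(freundlich, ce, qe, p0=(1.0, 2.0))
residuals = qe - freundlich(ce, kf, n)
r2 = 1 - np.sum(residuals**2) / np.sum((qe - qe.mean())**2)
print(f"K_f = {kf:.2f}, n = {n:.2f}, R^2 = {r2:.3f}")
```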

Keywords: activated carbon, bioadsorption, lead removal, papaya peel, wastewater treatment

Procedia PDF Downloads 287
1311 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis

Authors: H. Jung, N. Kim, B. Kang, J. Choe

Abstract:

History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainties of initial reservoir models. Therefore, it is important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to identify the main geological characteristics of the models. Through this procedure, the permeability values of each model are transformed into new parameters by the principal components, which have eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models which show the most similar or dissimilar well oil production rates (WOPR) with respect to the true values (10% for each). The other 80% of models are then classified by the trained SVM, and we select the models on the side of low WOPR errors. One hundred channel reservoir models are initially generated by single normal equation simulation. By repeating the classification process, we can select models which have a similar geological trend to the true reservoir model. The average field of the selected models is utilized as a probability map for regeneration. Newly generated models can preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model. However, history matching with the regenerated ensemble offers reliable characterization results by identifying the proper channel trend. Furthermore, it gives dependable predictions of future performance with reduced uncertainties. We propose a novel classification scheme which integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models which have a channel trend similar to the reference in the lowered dimension space.
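A compact sketch of the PCA, MDS, and SVM model-selection workflow described above is given below; random permeability fields, placeholder labels, and the grid size stand in for the actual channel reservoir ensemble and WOPR-based labelling.

```python
# Sketch of the PCA -> MDS -> SVM model-selection workflow described above, with
# random permeability fields standing in for the channel reservoir ensemble.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(1)
perm = rng.lognormal(mean=3.0, sigma=1.0, size=(100, 2500))  # 100 models, 50x50 grid

# 1. PCA keeps the principal components with the largest eigenvalues
scores = PCA(n_components=10).fit_transform(np.log(perm))

# 2. Project to 2-D with metric MDS on Euclidean distances
xy = MDS(n_components=2, random_state=1).fit_transform(scores)

# 3. Train an SVM on the 20% of models labelled most similar (1) / dissimilar (0)
#    to the true WOPR response; here the labels are random placeholders.
train_idx = rng.choice(100, size=20, replace=False)
labels = rng.integers(0, 2, size=20)
svm = SVC(kernel="rbf").fit(xy[train_idx], labels)

# 4. Classify the remaining 80% and keep the models predicted as "similar"
rest = np.setdiff1d(np.arange(100), train_idx)
selected = rest[svm.predict(xy[rest]) == 1]
print(f"{len(selected)} models selected for regeneration")
```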

Keywords: history matching, principal component analysis, reservoir modelling, support vector machine

Procedia PDF Downloads 160
1310 Preliminary Report on the Assessment of the Impact of the Kinesiology Taping Application versus Placebo Taping on the Knee Joint Position Sense

Authors: Anna Hadamus, Patryk Wasowski, Anna Mosiolek, Zbigniew Wronski, Sebastian Wojtowicz, Dariusz Bialoszewski

Abstract:

Introduction: Kinesiology Taping is a very popular physiotherapy method, often used on healthy people, especially athletes, in order to stimulate the muscles and improve their performance. The aim of this study was to determine the effect of a muscle application of Kinesiology Taping on joint position sense in active motion. Material and Methods: The study involved 50 healthy people (30 men and 20 women) with a mean age of 23.2 years (range 18-30 years). The exclusion criteria were injuries and operations of the knee, which could affect the test results. The participants were divided randomly into two equal groups. The first group consisted of individuals who received the Kinesiology Taping muscle application (KT group), whereas the rest of the individuals received a placebo application of red adhesive tape (placebo group). Both applications were intended to enhance the effects of quadriceps muscle activity. Joint position sense (JPS) was evaluated in this study. The Error of Active Reproduction of the Joint Position (EARJP) of the knee was measured at 45° flexion. The test was performed prior to applying the tape, with the application in place, 24 hours after wearing, and after removing the tape. The interval between trials was not less than 30 minutes. Statistical analysis was performed using Statistica 12.0. We calculated distribution characteristics and used the Wilcoxon test, Friedman's ANOVA, and the Mann-Whitney U test. Results: In the KT group and the placebo group, the average JPS test scores before applying the tape were 3.48° and 5.16°, respectively; after application, they were 4.84° and 4.88°; 24 hours into the experiment, 5.12° and 4.96°; and after removal of the tape, 3.84° and 5.12°. Differences over time in either group were not statistically significant, and there were also no significant differences between the groups. Conclusions: 1. Applying Kinesiology Taping to the quadriceps muscle had no significant effect on knee joint proprioception; its use to improve sensorimotor skills therefore seems unreasonable. 2. The absence of differences between the KT and placebo applications indicates that the clinical effect of the stretch tape is minimal or absent. 3. These results provide the basis for the continuation of prospective, randomized trials in numerous study groups.

Keywords: joint position sense, kinesiology taping, kinesiotaping, knee

Procedia PDF Downloads 340
1309 In vitro Study of Laser Diode Radiation Effect on the Photo-Damage of MCF-7 and MCF-10A Cell Clusters

Authors: A. Dashti, M. Eskandari, L. Farahmand, P. Parvin, A. Jafargholi

Abstract:

Breast cancer is one of the most significant diseases in the United States and other countries and is the second leading cause of death in women. Common breast cancer treatments lead to adverse side effects such as loss of hair, nausea, and weakness. These complications arise because these treatments damage some healthy cells while eliminating the cancer cells. In an effort to address these complications, laser radiation was utilized and tested as a targeted treatment for breast cancer. In this regard, tissue engineering approaches were employed, using an electrospun scaffold in order to facilitate the growth of breast cancer cells. Polycaprolactone (PCL) was used as the scaffold material because of its biocompatibility, biodegradability, and support of cell growth. The specific breast cancer cells are able to create a three-dimensional cell cluster due to the spontaneous accumulation of cells in the porosity of the scaffold under specific conditions. Therefore, a higher porosity density and a larger pore size were sought. The fibers showed a uniform diameter distribution, and the final scaffold had optimum characteristics with approximately 40% porosity. Images were taken by SEM, and the density and size of the pores were determined from them. After preparation, the scaffold was cross-linked with glutaraldehyde and then washed with glycine and phosphate-buffered saline (PBS) in order to neutralize the residual glutaraldehyde. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay results showed approximately 91.13% viability of the cancer cells on the scaffolds. In order to create a cluster, Michigan Cancer Foundation-7 (MCF-7, breast cancer cell line) and Michigan Cancer Foundation-10A (MCF-10A, human mammary epithelial cell line) cells were cultured on the scaffold in a 24-well plate for five days. The clusters were then exposed to 808 nm laser diode radiation to investigate the effect of the laser on the tumor at different powers and exposure times. Under the same conditions, the cancer cells lost their viability more than the healthy ones. In conclusion, laser therapy is a viable method to destroy the target cells with a minimal effect on healthy tissues and cells, and it can help overcome the limitations of other cancer treatment methods.

Keywords: breast cancer, electrospun scaffold, polycaprolactone, laser diode, cancer treatment

Procedia PDF Downloads 144
1308 Structural Analysis of Phase Transformation and Particle Formation in Metastable Metallic Thin Films Grown by Plasma-Enhanced Atomic Layer Deposition

Authors: Pouyan Motamedi, Ken Bosnick, Ken Cadien, James Hogan

Abstract:

Growth of conformal ultrathin metal films has attracted a considerable amount of attention recently. Plasma-enhanced atomic layer deposition (PEALD) is a method capable of growing conformal thin films at low temperatures, with exemplary control over thickness. The authors have recently reported on the growth of metastable epitaxial nickel thin films via PEALD, along with a comprehensive characterization of the films and a study of the relationship between the growth parameters and the film characteristics. The goal of the current study is to use these films as a case study to investigate temperature-activated phase transformation and agglomeration in ultrathin metallic films. For this purpose, metastable hexagonal nickel thin films were annealed using a controlled heating/cooling apparatus. The transformations in the crystal structure were observed via in-situ synchrotron X-ray diffraction. The samples were annealed to various temperatures in the range of 400-1100°C. The onset and progression of particle formation were studied in-situ via laser measurements. In addition, a four-point probe measurement tool was used to record the changes in the resistivity of the films, which is affected by phase transformation as well as by roughening and agglomeration. Thin films annealed at various temperature steps were then studied via atomic force microscopy, scanning electron microscopy, and high-resolution transmission electron microscopy, in order to get a better understanding of the correlated mechanisms through which phase transformation and particle formation occur. The results indicate that the onset of the hcp-to-bcc transformation is at 400°C, while particle formation commences at 590°C. If the annealed films are quenched after transformation, but prior to agglomeration, they show a noticeable drop in resistivity. This can be attributed to the fact that the hcp films are grown epitaxially and are under severe tensile strain, and annealing leads to relaxation of the mismatch strain. In general, the results shed light on the nature of structural transformation in nickel thin films and in metallic thin films more broadly.

Keywords: atomic layer deposition, metastable, nickel, phase transformation, thin film

Procedia PDF Downloads 329
1307 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.
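A simplified sketch of the simulate-then-learn idea described above is given below: fOU paths are generated (here with a simple exact-covariance generator rather than a fast algorithm) and a network is trained to recover the Hurst parameter; the dynamics parameters, sample sizes, and network size are illustrative assumptions, not the paper's settings.

```python
# Sketch of the simulation-then-learn approach: generate fractional Ornstein-Uhlenbeck
# (fOU) paths and train a network to recover the Hurst parameter. The Cholesky-based
# generator below is simple rather than fast, and all hyperparameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fgn(n, hurst, dt, rng):
    """Fractional Gaussian noise increments via the exact covariance (O(n^2))."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst)) * dt ** (2 * hurst)
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov + 1e-12 * np.eye(n)) @ rng.standard_normal(n)

def fou_path(n, hurst, theta, mu, sigma, dt, rng):
    """Euler scheme for dX = theta*(mu - X)dt + sigma*dB^H."""
    db = fgn(n, hurst, dt, rng)
    x = np.zeros(n + 1)
    for i in range(n):
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * db[i]
    return x

rng = np.random.default_rng(0)
n_paths, n_steps = 300, 200
hursts = rng.uniform(0.05, 0.45, size=n_paths)          # "rough" regime
paths = np.array([fou_path(n_steps, h, theta=1.0, mu=0.0, sigma=1.0,
                           dt=1 / n_steps, rng=rng) for h in hursts])

net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
net.fit(paths, hursts)
test = fou_path(n_steps, 0.2, 1.0, 0.0, 1.0, 1 / n_steps, rng).reshape(1, -1)
print("estimated H:", net.predict(test)[0])
```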

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 120
1306 Investigation of Leishmaniasis, Babesiosis, Ehrlichiosis, Dirofilariasis, and Hepatozoonosis in Referred Dogs to Veterinary Hospitals in Tehran, 2022

Authors: Mohamad Bolandmartabe, Nafiseh Hassani, Saeed Abdi Darake, Maryam Asghari

Abstract:

Dogs are highly susceptible to diseases, nutritional problems, toxins, and parasites, with parasitic infections being common and causing hardship in their lives. Some important internal parasites include worms (such as roundworms and tapeworms) and protozoa, which can lead to anemia in dogs. Important bloodborne parasites in dogs include the microfilariae and adult forms of Dirofilaria immitis and Dipetalonema reconditum, as well as Babesia, Trypanosoma, Hepatozoon, Leishmania, Ehrlichia, and Hemobartonella. Babesia and Hemobartonella are parasites that reside inside red blood cells and cause regenerative anemia by directly destroying the red blood cells. Hepatozoon, Leishmania, and Ehrlichia are parasites that reside within white blood cells and can infiltrate other tissues, such as the liver and lymph nodes. Since intermediate hosts are more commonly found in the open environment, the prevalence of parasites in stray and free-roaming dogs is higher than in pet dogs. Furthermore, pet dogs are less exposed to internal and external parasites due to better care and hygiene and because they are predominantly kept indoors; therefore, they are less likely to be affected by them. Among these parasites, Leishmania carries significant importance as it is shared between dogs and humans, causing dangerous diseases known as visceral leishmaniasis (kala-azar) and cutaneous leishmaniasis. Furthermore, dogs can act as reservoirs and spread the disease agent within human communities. Therefore, timely and accurate diagnosis of these diseases in dogs can be highly beneficial in preventing their occurrence in humans. In this article, we employed the Giemsa staining technique under a light microscope for the identification of bloodborne parasites in dogs. However, considering the negative impact of these parasites on the natural life of dogs, the development of chronic diseases, and the gradual loss of the animal's well-being, rapid and timely diagnosis is essential. Serological methods and PCR, which have high sensitivity and desirable characteristics, are available for the diagnosis of certain parasites. Therefore, this research aims to investigate the molecular aspects of bloodborne parasites in dogs referred to veterinary hospitals in Tehran.

Keywords: leishmaniasis, babesiosis, ehrlichiosis, dirofilariasis, hepatozoonosis

Procedia PDF Downloads 102
1305 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. India's economic growth has driven a significant rise in private vehicle ownership in cities, whereas public transport systems in metropolitan cities have largely been neglected. Even though there is latent demand for public transport such as organized bus services, most metropolitan cities have an unsustainably low public transport share. Indian metropolitan cities have failed to maintain a balanced mode share in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized modes such as two-wheelers have become the principal modes of travel, posing significant environmental, safety, and health hazards to citizens. Policy makers have recently recognized the need to improve public transport in metropolitan cities to sustain development. The challenge for transit planning authorities, however, is to design a transit system that can attract people to switch from their existing, and rather convenient, travel modes, given household socio-economic characteristics and prevailing travel patterns. In this context, the fast-growing industrial city of Surat is taken up as a case study of the likely shift to bus transit. The deterioration of the city's bus service after 1998 has led to tremendous growth in two-wheeler traffic on city roads, and the inadequate coverage and poor service quality of the present bus transit have failed to attract riders and correct the modal balance in the city. Disaggregate travel behavior for trip generation and travel mode choice has been studied for the West Adajan residential sector of the city. Mode-specific utility functions are calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS. The estimated shift to bus transit indicates that, on average, 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved, whereas car users are not expected to shift to the bus transit system.
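
For readers unfamiliar with the multinomial logit framework mentioned above, the following sketch shows how mode-choice probabilities follow from calibrated utility functions. The alternative-specific constants, time and cost coefficients, and trip attributes are purely hypothetical placeholders, not the SPSS estimates from the study.

```python
import numpy as np

# Hypothetical utilities for one commuter: V_mode = ASC + beta_time*time + beta_cost*cost
modes = ["two-wheeler", "car", "auto-rickshaw", "bus transit"]
asc   = np.array([0.0, -0.8, -0.4, -1.2])        # alternative-specific constants (illustrative)
beta_time, beta_cost = -0.05, -0.02               # per minute, per rupee (illustrative)
time  = np.array([20.0, 25.0, 30.0, 40.0])        # in-vehicle time (min)
cost  = np.array([15.0, 60.0, 40.0, 10.0])        # trip cost (INR)

v = asc + beta_time * time + beta_cost * cost     # systematic utilities
p = np.exp(v) / np.exp(v).sum()                   # multinomial logit choice probabilities
for m, prob in zip(modes, p):
    print(f"{m:>13s}: {prob:.2%}")
```

Improving bus service quality (lower waiting and in-vehicle time) raises the bus utility and, through the same formula, its predicted share at the expense of the other modes.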

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 262
1304 Dialectical Behavior Therapy in Managing Emotional Dysregulation, Depression, and Suicidality in Autism Spectrum Disorder Patients: A Systematic Review

Authors: Alvin Saputra, Felix Wijovi

Abstract:

Background: Adults with Autism Spectrum Disorder (ASD) often experience emotional dysregulation and heightened suicidality. Dialectical Behavior Therapy (DBT) and Radically Open DBT (RO-DBT) have shown promise in addressing these challenges, though research on their effectiveness in ASD populations remains limited. This systematic review aims to evaluate the impact of DBT and RO-DBT on emotional regulation, depression, and suicidality in adults with ASD. Methods: A systematic review was conducted by searching databases such as PubMed, PsycINFO, and Scopus for studies published on DBT and RO-DBT interventions in adults with Autism Spectrum Disorder (ASD). Inclusion criteria were peer-reviewed studies that reported on emotional regulation, suicidality, or depression outcomes. Data extraction focused on sample characteristics, intervention details, and outcome measures. Quality assessment was performed using standard systematic review criteria to ensure reliability and relevance of findings. Results: Four studies comprising a total of 343 participants were included in this review. DBT and RO-DBT interventions demonstrated a medium effect size (Cohen's d = 0.53) in improving emotional regulation for adults with ASD, with ASD participants achieving significantly better outcomes than non-ASD individuals. RO-DBT was particularly effective in reducing maladaptive overcontrol, though high attrition and a predominantly White British sample limited generalizability. At end-of-treatment, DBT significantly reduced suicidal ideation (z = −2.24; p = 0.025) and suicide attempts (z = −3.15; p = 0.002) compared to treatment as usual (TAU), although this effect was not sustained at 12 months. Depression severity decreased with DBT (z = −1.99; p = 0.046), maintaining significance at follow-up (z = −2.46; p = 0.014). No significant effects were observed for social anxiety, and two suicides occurred in the TAU group. Conclusions: DBT and RO-DBT show potential efficacy in reducing emotional dysregulation, suicidality, and depression in adults with ASD, though the effects on suicidality may diminish over time. High dropout rates and limited sample diversity suggest further research is needed to confirm long-term benefits and improve applicability across broader populations.

Keywords: dialectical behaviour therapy, emotional dysregulation, autism spectrum disorder, suicidality

Procedia PDF Downloads 10
1303 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low temporal resolution of low-orbit SAR and the need for SAR data with high temporal resolution, geosynchronous orbit (GEO) SAR is attracting more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so the utility and efficacy of GEO SAR for moving marine vessels remain uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator that focuses on geometric fidelity rather than high radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3D Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation proceeds in four steps. (1) Reading the 3D model, including the ship's rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the small primitives (triangles) visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation for each primitive is carried out separately. Since the simulator focuses only on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the usual 'stop-and-go' model does not hold for GEO SAR, the range model has to be reconsidered. (4) Finally, generating the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are presented, and GEO SAR images for different ship attitudes, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.
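
The core geometric quantity behind steps (3) and (4) is the slant-range history of the moving ship over the long GEO integration time, which drives the phase of the raw signal. The sketch below illustrates that idea only; the orbit, wavelength, integration time, and ship motion are assumptions for the example and are not the authors' simulator configuration.

```python
import numpy as np

c = 299792458.0                 # speed of light (m/s)
wavelength = 0.24               # assumed L-band wavelength (m)
r_geo = 42164e3                 # geostationary orbital radius (m)
r_earth = 6371e3                # mean Earth radius (m)

t = np.linspace(0.0, 300.0, 3001)          # long integration time typical of GEO SAR (s)
sat = np.array([r_geo, 0.0, 0.0])          # simplification: satellite fixed in an Earth-fixed frame

# Ship on the surface, moving at ~10 knots along an assumed heading
v_ship = 5.14                               # m/s
heading = np.deg2rad(45.0)
ship0 = np.array([r_earth, 0.0, 0.0])
vel = v_ship * np.array([0.0, np.cos(heading), np.sin(heading)])   # tangent-plane motion
ship = ship0[None, :] + t[:, None] * vel[None, :]

r = np.linalg.norm(ship - sat[None, :], axis=1)   # slant-range history R(t)
phase = -4.0 * np.pi * r / wavelength             # two-way phase history feeding the raw signal
print(f"range varies by {r.max() - r.min():.1f} m over the aperture")
```

Because R(t) changes by many wavelengths over such a long aperture, the simple 'stop-and-go' approximation breaks down, which is exactly why the abstract reconsiders the range model in step (3).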

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 178
1302 Glyco-Biosensing as a Novel Tool for Prostate Cancer Early-Stage Diagnosis

Authors: Pavel Damborsky, Martina Zamorova, Jaroslav Katrlik

Abstract:

Prostate cancer is annually the most common newly diagnosed cancer among men. An extensive number of evidence suggests that traditional serum Prostate-specific antigen (PSA) assay still suffers from a lack of sufficient specificity and sensitivity resulting in vast over-diagnosis and overtreatment. Thus, the early-stage detection of prostate cancer (PCa) plays undisputedly a critical role for successful treatment and improved quality of life. Over the last decade, particular altered glycans have been described that are associated with a range of chronic diseases, including cancer and inflammation. These glycans differences enable a distinction to be made between physiological and pathological state and suggest a valuable biosensing tool for diagnosis and follow-up purposes. Aberrant glycosylation is one of the major characteristics of disease progression. Consequently, the aim of this study was to develop a more reliable tool for early-stage PCa diagnosis employing lectins as glyco-recognition elements. Biosensor and biochip technology putting to use lectin-based glyco-profiling is one of the most promising strategies aimed at providing fast and efficient analysis of glycoproteins. The proof-of-concept experiments based on sandwich assay employing anti-PSA antibody and an aptamer as a capture molecules followed by lectin glycoprofiling were performed. We present a lectin-based biosensing assay for glycoprofiling of serum biomarker PSA using different biosensor and biochip platforms such as label-free surface plasmon resonance (SPR) and microarray with fluorescent label. The results suggest significant differences in interaction of particular lectins with PSA. The antibody-based assay is frequently associated with the sensitivity, reproducibility, and cross-reactivity issues. Aptamers provide remarkable advantages over antibodies due to the nucleic acid origin, stability and no glycosylation. All these data are further step for construction of highly selective, sensitive and reliable sensors for early-stage diagnosis. The experimental set-up also holds promise for the development of comparable assays with other glycosylated disease biomarkers.

Keywords: biomarker, glycosylation, lectin, prostate cancer

Procedia PDF Downloads 407
1301 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker Identification (SI) is the task of establishing the identity of an individual based on his or her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process computes speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM, whose underlying parameters are then estimated in the EM step, improved the convergence rate and system performance. The system also uses a relative index as a confidence measure when the GMM and VQ identifications contradict each other. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
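
The enrolment-and-scoring idea described above can be sketched as follows, assuming the librosa and scikit-learn libraries. This is a minimal MFCC + GMM illustration only: the file paths are placeholders, the crude energy threshold merely stands in for the paper's hybrid STE/noise-model VAD, and the MCS combination with VQ, LBG initialisation, and the relative-index confidence measure is not reproduced.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, n_mfcc=13):
    """MFCC frames with a crude energy-based frame selection (stand-in for a proper VAD)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, n_mfcc)
    rms = librosa.feature.rms(y=y).flatten()
    return mfcc[rms > 0.5 * rms.mean()]

def train_speaker_models(enrol_files, n_components=16):
    """Fit one diagonal-covariance GMM per speaker via EM."""
    models = {}
    for speaker, path in enrol_files.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                              max_iter=200, random_state=0)
        gmm.fit(mfcc_features(path))
        models[speaker] = gmm
    return models

def identify(models, test_file):
    """Closed-set decision: pick the speaker model with the highest average log-likelihood."""
    feats = mfcc_features(test_file)
    scores = {spk: gmm.score(feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get), scores

# Hypothetical usage:
# models = train_speaker_models({"alice": "alice_enrol.wav", "bob": "bob_enrol.wav"})
# best, scores = identify(models, "unknown.wav")
```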

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 310
1300 Characterising Performative Technological Innovation: Developing a Strategic Framework That Incorporates the Social Mechanisms That Promote Change within a Technological Environment

Authors: Joan Edwards, J. Lawlor

Abstract:

Technological innovation is frequently defined in terms of bringing a new invention to market through a relatively straightforward process of diffusion. In reality, this process is complex and non-linear in nature, and includes social and cognitive factors that influence the development of an emerging technology and its related market or environment. As recent studies contend that technological trajectories are part of technological paradigms, which arise from the expectations and desires of industry agents and result in co-evolution, it becomes clear that social factors play a major role in the development of a technology. It is conjectured that collective social behaviour is fuelled by individual motivations and expectations, which inform the possibilities and uses for a new technology. The individual outlook highlights the issues present at the micro level of developing a technology. Zooming out, it becomes apparent how these embedded social structures influence activities and expectations at the macro level and can ultimately, strategically, shape the development and use of a technology. These social factors rely on communication to foster the innovation process. As innovation may be defined as the implementation of inventions, technological change results from the complex interactions and feedback occurring within an extended environment. The framework presented in this paper recognises that social mechanisms provide the basis for an iterative dialogue between an innovator, a new technology, and an environment, within which social and cognitive 'identity-shaping' elements of the innovation process occur. Identity-shaping characteristics indicate that an emerging technology has a performative nature that transforms, alters, and ultimately configures the environment it joins. This identity-shaping quality is termed 'performative'. This paper examines how technologies evolve within a socio-technological sphere and how 'performativity' facilitates the process. A framework is proposed that incorporates the performative elements, identified as feedback, iteration, routine, expectations, and motivations. Additionally, the concept of affordances is employed to determine how the roles of the innovator and the technology change over time, creating a more conducive environment for successful innovation.

Keywords: affordances, framework, performativity, strategic innovation

Procedia PDF Downloads 207
1299 A Study on Adsorption Ability of MnO2 Nanoparticles to Remove Methyl Violet Dye from Aqueous Solution

Authors: Zh. Saffari, A. Naeimi, M. S. Ekrami-Kakhki, Kh. Khandan-Barani

Abstract:

The textile industry has become a major source of environmental contamination because an alarming amount of dye pollutants is generated during dyeing processes. Organic dyes are among the most abundant pollutants released into wastewater from textile and other industrial processes and have severe impacts on human physiology. Nanostructured compounds have gained importance in this area owing to their high surface area and abundant reactive sites, and in recent years several novel adsorbents have been reported to possess great adsorption potential due to their enhanced adsorptive capacity. Nano-MnO2 has great potential in the environmental protection field because it exhibits a wide variety of structures with large surface areas. The diverse structures and chemical properties of manganese oxides are exploited in applications such as adsorption, sensing, and catalysis, including the catalytic degradation of dyes. In this study, the adsorption of Methyl Violet (MV) dye from aqueous solutions onto MnO2 nanoparticles (MNP) has been investigated. The nanoparticles were characterized by particle size analysis, scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy, and X-ray diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature, and contact time on the adsorption capacity were evaluated, with pH found to be the most influential parameter. The equilibrium data were analyzed using the Langmuir and Freundlich isotherm models, while the pseudo-first-order and pseudo-second-order models and the Elovich equation were used to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm and the pseudo-second-order kinetic model. The thermodynamic parameters, namely the free energy of adsorption (ΔG°), enthalpy change (ΔH°), and entropy change (ΔS°), were also determined and evaluated.
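
To make the isotherm analysis concrete, the sketch below fits the standard Langmuir and Freundlich forms to equilibrium data with scipy and evaluates the textbook relation ΔG° = −RT ln K. The numerical data and the equilibrium constant are invented for the illustration and are not the measurements reported in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data (Ce in mg/L, qe in mg/g), for illustration only
ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.5, 17.0, 26.0, 34.0, 39.0, 42.0])

def langmuir(c, qm, kl):      # qe = qm*KL*Ce / (1 + KL*Ce)
    return qm * kl * c / (1.0 + kl * c)

def freundlich(c, kf, n):     # qe = KF * Ce**(1/n)
    return kf * c ** (1.0 / n)

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[40.0, 0.1])
(kf, n), _ = curve_fit(freundlich, ce, qe, p0=[5.0, 2.0])
print(f"Langmuir:   qm = {qm:.1f} mg/g, KL = {kl:.3f} L/mg")
print(f"Freundlich: KF = {kf:.2f}, n = {n:.2f}")

# Standard thermodynamic relation dG = -R*T*ln(K), with a hypothetical dimensionless K
R, T, K = 8.314, 298.15, 120.0
print(f"dG at 25 C: {-R * T * np.log(K) / 1000:.1f} kJ/mol")
```

Comparing the goodness of fit of the two isotherms (and, analogously, of the pseudo-first- and pseudo-second-order kinetic models against time-series data) is how the paper's conclusion that the Langmuir and pseudo-second-order models describe the system best would typically be reached.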

Keywords: MnO2 nanoparticles, adsorption, methyl violet, isotherm models, kinetic models, surface chemistry

Procedia PDF Downloads 258