Search results for: sound propagation models
5087 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations
Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik
Abstract:
The article is concerned with the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor on a 3D cross-section model. The authors consider a work simulation of the motor (applied to a Fiat Panda car) under selected driving conditions such as driving on a road with a slope of 20%, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h. The demand for drive power, taking into account the resistance to movement, was determined for the selected driving conditions. The parameters of the motor operation and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected car driving parameters. The calculated power losses are used in 3D models for thermal calculations using the CFD method. The detailed construction of the thermal models, with material data, boundary conditions, and losses calculated using the 2D FEM method, is presented in the article. The article presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body, and cooling system components. The losses generated in individual motor components and their impact on the limitation of its operating parameters are described by the authors. Attention is paid to the losses generated in the permanent magnets, which are a source of heat whose removal from inside the motor is difficult. The presented calculation results show how the individual motor power losses, generated under different load conditions while driving, affect the motor's thermal state.
Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor
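The drive power demand described in the abstract (rolling, aerodynamic, and grade resistance at a given speed) can be sketched as below. The vehicle mass, rolling resistance coefficient, drag coefficient, and frontal area are illustrative assumptions for a small city car, not values taken from the article:

```python
import math

def drive_power(mass_kg, speed_ms, slope_pct=0.0, c_rr=0.012,
                cd=0.33, frontal_area_m2=2.0, rho_air=1.225, g=9.81):
    """Tractive power demand [W] from rolling, aerodynamic and grade resistance."""
    theta = math.atan(slope_pct / 100.0)              # road grade angle
    f_roll = c_rr * mass_kg * g * math.cos(theta)     # rolling resistance force
    f_aero = 0.5 * rho_air * cd * frontal_area_m2 * speed_ms ** 2  # drag force
    f_grade = mass_kg * g * math.sin(theta)           # climbing force on the slope
    return (f_roll + f_aero + f_grade) * speed_ms

# e.g. an ~1100 kg car at 30 km/h on a 20% slope
p_slope = drive_power(1100, 30 / 3.6, slope_pct=20.0)
```

The grade term dominates on steep slopes, which is why hill climbing is one of the sizing cases for the hub motor.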
Procedia PDF Downloads 175
5086 Enhancing of Flame Retardancy and Hydrophobicity of Cotton by Coating a Phosphorous, Silica, Nitrogen Containing Bio-Flame Retardant Liquid for Upholstery Application
Authors: Li Maksym, Prabhakar M. N., Jung-Il Song
Abstract:
In this study, a flame retardant and hydrophobic cotton textile was prepared by utilizing a renewable halogen-free bio-based solution based on chitosan, urea, and phytic acid, named bio-flame retardant liquid (BFL), through a facile dip-coating technology. Deposition of BFL on the surface of the cotton was confirmed by Fourier-transform infrared spectroscopy and scanning electron microscopy coupled with an energy-dispersive X-ray spectrometer. The thermal and flame retardant properties of the cottons were studied with thermogravimetric analysis, differential scanning calorimetry, the vertical flame test, and the cone calorimeter test. With only 8.8% dry weight gain, the treated cotton showed self-extinguishing properties during the fire test. The cone calorimeter test revealed a reduction of the peak heat release rate from 203.2 to 21 kW/m2 and of the total heat release from 20.1 to 2.8 MJ/m2. Incidentally, BFL remarkably improved the thermal stability of the flame retardant cotton, expressed in an enhanced amount of char at 700 °C (6.7 vs. 33.5%). BFL initiates the formation of a phosphorus- and silica-containing char layer, which restrains the propagation of heat and oxygen to the unburned material, strengthened by the liberation of non-combustible gases, which reduce the concentration of flammable volatiles and oxygen, hence reducing the flammability of cotton. In addition, hydrophobicity and a specific ignition test for upholstery application were performed. In conjunction, the proposed flame retardant cotton is potentially translatable to be utilized as an upholstery material in public transport.
Keywords: cotton fabric, flame retardancy, surface coating, intumescent mechanism
Procedia PDF Downloads 92
5085 An Experimental (Wind Tunnel) and Numerical (CFD) Study on the Flow over Hills
Authors: Tanit Daniel Jodar Vecina, Adriane Prisco Petry
Abstract:
The shape of the wind velocity profile changes according to local features of terrain shape and roughness, which are parameters responsible for defining the Atmospheric Boundary Layer (ABL) profile. Air flow characteristics over and around landforms, such as hills, are of considerable importance for applications related to Wind Farm and Turbine Engineering. The air flow is accelerated on top of hills, which can represent a decisive factor for Wind Turbine placement choices. The present work focuses on the study of ABL behavior as a function of the slope and surface roughness of hill-shaped landforms, using Computational Fluid Dynamics (CFD) to build wind velocity and turbulent intensity profiles. Reynolds-Averaged Navier-Stokes (RANS) equations are closed using the SST k-ω turbulence model; numerical results are compared to experimental data measured in a wind tunnel over scale models of the hills under consideration. Eight hill models with slopes varying from 25° to 68° were tested for two types of terrain categories in 2D and 3D, and two analytical codes are used to represent the inlet velocity profiles. Numerical results for the velocity profiles show differences under 4% when compared to their respective experimental data. Turbulent intensity profiles show maximum differences around 7% when compared to experimental data; this can be explained by the impossibility of inserting inlet turbulent intensity profiles in the simulations. Alternatively, constant values based on the averages of the turbulent intensity at the wind tunnel inlet were used.
Keywords: Atmospheric Boundary Layer, Computational Fluid Dynamic (CFD), Numerical Modeling, Wind Tunnel
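One common analytical form for the kind of ABL inlet velocity profile the abstract mentions is the logarithmic law. The sketch below is a minimal illustration with an assumed friction velocity and roughness length; the two analytical codes actually used in the study are not specified, so this is not their implementation:

```python
import math

def log_law_velocity(z, u_star, z0, kappa=0.41):
    """Mean ABL wind speed [m/s] at height z [m] from the logarithmic law,
    given friction velocity u_star [m/s] and aerodynamic roughness length z0 [m]."""
    return (u_star / kappa) * math.log(z / z0)

# illustrative profile for open terrain (z0 ~ 0.03 m is an assumption)
profile = [(z, log_law_velocity(z, u_star=0.4, z0=0.03)) for z in (10, 50, 100)]
```

Higher surface roughness (larger `z0`) lowers the speed at a given height, which is the terrain-category effect the wind tunnel models reproduce.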
Procedia PDF Downloads 380
5084 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing
Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi
Abstract:
This document presents the approach of using compressed sensing for signal encoding and information transfer within a guided wave sensor network comprised of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes, and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows for focusing wave energy in a certain direction, according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). After the signal is encoded into binary characters, the information is transmitted between the nodes in the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of detected damage using reconstructed signals. The study demonstrates that the special steerable capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
Keywords: data compression, ultrasonic communication, guided waves, FEM analysis
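The greedy sparse-coding step behind matching pursuit can be sketched as follows. This is a generic, minimal matching pursuit over a unit-norm dictionary, not the study's Compressed Sensing Matching Pursuit encoder; the toy dictionary in the test is purely illustrative:

```python
def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy sparse coding: at each step, pick the atom most correlated with
    the residual, subtract its contribution, and repeat.
    Atoms in `dictionary` are assumed to be unit-norm lists of floats."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        # inner product of the residual with every atom; keep the strongest
        best_k, best_c = max(
            ((k, sum(r * a for r, a in zip(residual, atom)))
             for k, atom in enumerate(dictionary)),
            key=lambda kc: abs(kc[1]))
        coeffs[best_k] = coeffs.get(best_k, 0.0) + best_c
        residual = [r - best_c * a for r, a in zip(residual, dictionary[best_k])]
    return coeffs, residual
```

The sparse coefficient dictionary is what would then be quantized and mapped to QAM symbols for transmission, rather than the raw waveform.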
Procedia PDF Downloads 124
5083 Between Efficacy and Danger: Narratives of Female University Students about Emergency Contraception Methods
Authors: Anthony Idowu Ajayi, Ezebunwa Ethelbert Nwokocha, Wilson Akpan, Oladele Vincent Adeniyi
Abstract:
Studies on emergency contraception (EC) mostly utilise quantitative methods and focus on medically approved drugs for the prevention of unwanted pregnancies. This methodological bias necessarily obscures insider perspectives on sexual behaviour, particularly on why specific methods are utilized by women who seek to prevent unplanned pregnancies. In order to privilege this perspective, with a view to further enriching the discourse and policy on the prevention and management of unplanned pregnancies, this paper brings together the findings from several focus groups and in-depth interviews conducted amongst unmarried female undergraduate students in two Nigerian universities. The study found that while the research participants had good knowledge of the consequences of unprotected sexual intercourse - with abstinence and condoms widely used - participants’ willingness to rely only on medically sound measures to prevent unwanted pregnancies was not always mediated by such knowledge. Some of the methods favored by participants appeared to be those commonly associated with people of low socio-economic status in the society where the study was conducted. Medically unsafe concoctions, some outright dangerous, were widely believed to be efficacious in preventing unwanted pregnancy. Furthermore, respondents’ narratives about their sexual behaviour revealed that inadequate sex education, socio-economic pressures, and misconceptions about the efficacy of “crude” emergency contraception methods were all interrelated. The paper therefore suggests that these different facets of the unplanned pregnancy problem should be the focus of intervention.
Keywords: unplanned pregnancy, unsafe abortion, emergency contraception, concoctions
Procedia PDF Downloads 424
5082 A Tool for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation of an institutional risk profile for endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to set up risk factors with just the values that are most important for a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level for file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of a digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and automatically aggregated file format metadata from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: digital information management, file format, endangerment analysis, fuzzy models
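The idea of aggregating a multidimensional risk vector into an endangerment level can be sketched as a weighted score mapped to coarse linguistic categories. This is a minimal illustration; the weights, factor names, and category thresholds are hypothetical, and the paper's actual fuzzy models are more elaborate:

```python
def endangerment_level(risk_vector, weights, thresholds=(0.33, 0.66)):
    """Aggregate normalised risk factors (values in [0, 1]) into a weighted
    score and a coarse endangerment category ('low' / 'medium' / 'high')."""
    total_w = sum(weights.values())
    score = sum(risk_vector[k] * w for k, w in weights.items()) / total_w
    low, high = thresholds
    if score < low:
        level = "low"
    elif score < high:
        level = "medium"
    else:
        level = "high"
    return score, level

# hypothetical factors for one file format at one institution
score, level = endangerment_level(
    {"software_support": 0.9, "community_usage": 0.8},
    {"software_support": 1.0, "community_usage": 1.0})
```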
Procedia PDF Downloads 404
5081 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
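Of the techniques the review surveys, K-nearest neighbour is the simplest to sketch. The toy feature vectors and crime labels below are purely illustrative, not data from any reviewed study:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """K-nearest-neighbour vote. `train` is a list of (feature_vector, label)
    pairs; the query is assigned the majority label of its k closest points."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# hypothetical 2D features (e.g. time-of-day, location cluster) with labels
train = [((0.0, 0.0), "theft"), ((0.0, 1.0), "theft"),
         ((5.0, 5.0), "fraud"), ((5.0, 6.0), "fraud")]
predicted = knn_classify(train, (0.0, 0.5), k=3)
```

Naïve Bayes and K-means follow the same pattern of mapping engineered features to categories or clusters, with different statistical machinery.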
Procedia PDF Downloads 66
5080 StockTwits Sentiment Analysis on Stock Price Prediction
Authors: Min Chen, Rubi Gupta
Abstract:
Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share their discussions and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bags of words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify the tweets' sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson’s correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. The experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a duration of nine months demonstrate the effectiveness of our study in improving the prediction accuracy.
Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing
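The sentiment-versus-price correlation step uses Pearson's coefficient, which can be sketched directly; the sample series in the test are invented for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length series, e.g. aggregated
    daily sentiment scores and daily stock price movements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A value near +1 would indicate bullish sentiment days coinciding with price rises; values near 0 indicate no linear relationship.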
Procedia PDF Downloads 156
5079 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning
Authors: Mayowa A. Sofowora, Seraphin D. Eyono Obono
Abstract:
The use of information and communication technologies such as computers, mobile phones and the internet is becoming prevalent in today’s world, and it is facilitating access to a vast amount of data, services, and applications for the improvement of people’s lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when damaged or lost; this problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives unfolding this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objectives is achieved using content analysis in an extensive literature survey, and the second type of objectives is achieved through a survey of high school teachers from the ILembe and Umgungudlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey is analysed in SPSS using descriptive statistics and Pearson correlations after checking the reliability and validity of the questionnaire. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on one hand, and their perceptions on the reliability of cellphones on the other hand, as suggested by existing literature; except that attribution identities are considered in this study under three angles: intention, knowledge and ability, and action.
The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners’ cellphone usage intentions and actual use.
Keywords: attribution, cellphones, e-learning, reliability
Procedia PDF Downloads 402
5078 Findings on Modelling Carbon Dioxide Concentration Scenarios in the Nairobi Metropolitan Region before and during COVID-19
Authors: John Okanda Okwaro
Abstract:
Carbon (IV) oxide (CO₂) is emitted mainly from fossil fuel combustion and industrial production. The sources of interest of carbon (IV) oxide in the study area are mining activities, transport systems, and industrial processes. This study is aimed at building models that will help in monitoring the emissions within the study area. Three scenarios were discussed, namely: the pessimistic scenario, the business-as-usual scenario, and the optimistic scenario. The result showed that there was a reduction in carbon dioxide concentration of approximately 50.5 ppm between March 2020 and January 2021 inclusive. This is majorly due to reduced human activities that led to decreased consumption of energy. Also, the CO₂ concentration trend follows the business-as-usual (BAU) scenario path. From the models, the pessimistic, business-as-usual, and optimistic scenarios give CO₂ concentrations of about 545.9 ppm, 408.1 ppm, and 360.1 ppm, respectively, on December 31st, 2021. This research helps paint a picture for policymakers of the relationship between energy sources and CO₂ emissions. Since the reduction in CO₂ emissions was due to decreased use of fossil fuel as there was a decrease in economic activities, if Kenya relies more on green energy than on fossil fuel in the post-COVID-19 period, there will be more CO₂ emission reduction. That is, the CO₂ concentration trend is likely to follow the optimistic scenario path, hence a reduction in CO₂ concentration of about 48 ppm by the end of the year 2021. This research recommends investment in solar energy by energy-intensive companies, mine machinery and equipment maintenance, investment in electric vehicles, and doubling tree planting efforts to achieve the 10% cover.
Keywords: forecasting, greenhouse gas, green energy, hierarchical data format
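The three-scenario structure can be sketched as trend extrapolation under scenario-specific scaling of an observed monthly trend. The baseline, trend, and scaling factors below are illustrative assumptions, not the study's fitted model parameters:

```python
# hypothetical scaling of the observed monthly trend under each scenario
SCENARIO_FACTOR = {"pessimistic": 1.5, "business_as_usual": 1.0, "optimistic": 0.5}

def project_co2(baseline_ppm, monthly_trend_ppm, months, scenario):
    """Extrapolate a CO2 concentration [ppm] by scaling an observed monthly
    trend according to the chosen scenario and projecting it forward."""
    return baseline_ppm + SCENARIO_FACTOR[scenario] * monthly_trend_ppm * months

# e.g. an assumed 400 ppm baseline with a +0.2 ppm/month trend over one year
bau = project_co2(400.0, 0.2, 12, "business_as_usual")
```

The spread between the pessimistic and optimistic projections widens linearly with the horizon, which mirrors the growing gap among the three December 2021 figures.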
Procedia PDF Downloads 168
5077 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns with unprecedented hydrological phenomena and extremes. The distribution and circulation of the waters of the Earth become increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper on Climate Change and Water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for using statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
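The simplest form of the predictor-predictand relationship that statistical downscaling derives is an ordinary least-squares regression. The sketch below fits a single-predictor linear transfer function; the toy numbers stand in for a GCM grid-cell variable and a station observation:

```python
def fit_downscaling(predictor, predictand):
    """Ordinary least squares y = a + b*x linking a GCM-simulated predictor
    series x to a station-scale predictand series y."""
    n = len(predictor)
    mx = sum(predictor) / n
    my = sum(predictand) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(predictor, predictand))
         / sum((x - mx) ** 2 for x in predictor))
    a = my - b * mx
    return a, b

# illustrative calibration data (GCM value -> station value)
a, b = fit_downscaling([10.0, 20.0, 30.0], [1.0, 2.0, 3.0])
downscaled = a + b * 25.0   # station-scale estimate for a new GCM value
```

Real downscaling schemes use many predictors and often nonlinear or weather-typing methods, but the calibrate-then-project structure is the same.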
Procedia PDF Downloads 406
5076 Investigating Effect of Geometrical Proportions in Islamic Architecture and Music
Authors: Amir Hossein Allahdadi
Abstract:
The mystical and intuitive outlook of Islamic artists, inspired by Koranic and mystical principles and also based on geometry and mathematics, has left unique works whose range extends across the borders of Islam. The relationship between Islamic art and music in traditional art is one of the concepts whose components can be traced in the other arts. One of the links is the art of painting, whose subtleties are applicable to both architecture and music. So, architecture and music links can be traced in other arts with a traditional foundation in order to evaluate the equivalents of traditional arts. What is the relationship between the physical space of architecture and the nonphysical space of music? What is musical architecture? What is the music that tends towards architecture? These questions are very small samples of the questions that arise in this category, and these questions and concerns remain as long as music is played and architecture is made. Efforts have been made in this area, references compiled and plans drawn. As an example, we can refer to the views of ‘Mansour Falamaki’ in the book Architecture and Music, as well as the book Transition from Mud to Heart by ‘Hesamodin Seraj’. The method is such that a certain melody is given to an architect and it is tried to design a specified architecture using a certain theme. This study does not follow the architecture of a particular type of music or the formation of a volume based on a sound. In this opportunity, it is tried to briefly review the relationship between music and architecture in the original and traditional Iranian arts, using the basic definitions of the arts. The musician plays, the architect designs, the actor forms his desired space, and the painter displays his multi-dimensional world in two dimensions. The language of expression is different, but all of them can be gathered in a form, a form which has no clear boundaries. In fact, in any original art, the artist applies his art as a tool to express his insights, which are nothing but achieving the world beyond this place and time.
Keywords: architecture, music, geometric proportions, mathematical proportions
Procedia PDF Downloads 244
5075 User Authentication Using Graphical Password with Sound Signature
Authors: Devi Srinivas, K. Sindhuja
Abstract:
This paper presents an architecture to improve surveillance applications based on the usage of the service oriented paradigm, with smart phones as user terminals, allowing dynamic application composition and increasing the flexibility of the system. According to the results of moving object detection research on video sequences, the movement of people is tracked using video surveillance. The moving object is identified using the image subtraction method: the background image is subtracted from the foreground image, and from that the moving object is derived. The background subtraction algorithm and a calculated threshold value are used to identify the moving frame; by the threshold value, the movement of the frame is identified and tracked. Hence, the movement of the object is identified accurately. This paper deals with a low-cost intelligent mobile phone-based wireless video surveillance solution using moving object recognition technology. The proposed solution can be useful in various security systems and environmental surveillance. The fundamental rule of moving object detection is given in the paper; then, a self-adaptive background representation that can update automatically and timely adapt to the slow and slight changes of normal surroundings is detailed. When the subtraction of the presently captured image and the background reaches a certain threshold, a moving object is judged to be in the current view, and the mobile phone will automatically notify the central control unit or the user through SMS (Short Message System). The main advantage of this system is that when an unknown image is captured, the system will alert the user automatically by sending an SMS to the user’s mobile.
Keywords: security, graphical password, persuasive cued click points
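The background subtraction rule the abstract describes can be sketched on plain grayscale pixel grids. This is a minimal per-pixel thresholding illustration, assuming a static background model; the real system additionally updates the background adaptively:

```python
def detect_motion(background, frame, threshold=30):
    """Background subtraction on grayscale images given as 2D lists of pixel
    intensities. A pixel is flagged when its absolute difference from the
    background exceeds the threshold; motion is reported if any pixel is flagged."""
    mask = [[abs(f - b) > threshold for f, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
    moving = any(any(row) for row in mask)
    return mask, moving

background = [[10, 10], [10, 10]]
_, moved = detect_motion(background, [[10, 200], [10, 10]])
```

When `moved` is true, the system would go on to notify the user, e.g. via SMS.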
Procedia PDF Downloads 537
5074 Design of Microwave Building Block by Using Numerical Search Algorithm
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
With the development of technology, countries have gradually allocated more and more frequency spectrum for civil and commercial usage, especially the high radio frequency bands that offer high information capacity. The field effect becomes more and more prominent in microwave components as frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits, avoiding complicated impedance matching and equivalent electrical circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multiple-dimension parameter space. Then numerical search algorithms (e.g. the pattern search algorithm) are used to find the ideal geometrical parameters. The optimal parameter set is achieved by evaluating the fitness of the S parameters after a number of iterations. We have adopted this approach in our current projects and designed many microwave components including sharp bends, T-branches, Y-branches, microstrip-to-stripline converters, etc. For example, a stripline 90° bend was designed in a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the tool kits for microwave designers.
Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm
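A generic pattern search of the kind the abstract names can be sketched as follows. In practice the cost function would be the fitness of simulated S parameters over the segment dimensions; here a simple quadratic stands in for it:

```python
def pattern_search(cost, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free pattern (direct) search: poll +/- step along each
    coordinate, move to any improving point, halve the step when no poll
    improves, and stop once the step falls below tol."""
    x = list(x0)
    best = cost(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, best

# stand-in cost with a known minimum at (3, -1)
x_opt, f_opt = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
```

Each cost evaluation in the real workflow is a full electromagnetic simulation, so the poll count per iteration (2 per dimension) directly sets the compute budget.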
Procedia PDF Downloads 379
5073 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation
Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi
Abstract:
Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Separate from the habitual risk factors, there are some inherent factors in each individual that can increase the risk potential of CVDs. Vessel shape and geometry are influential factors, having a great impact on the blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of the bifurcation angle on blood flow characteristics is studied. In order to approach this topic, by simplifying the details of the bifurcation, three models with angles of 30°, 45°, and 60° were created; then, by using CFD analysis, the response of these models for stable flow and pulsatile flow was studied. In the conducted simulations, in order to eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed and other parameters remained constant during the research. Simulations are conducted under dynamic and stable conditions. In the stable flow simulation, a steady velocity of 0.17 m/s at the inlet plug was maintained, and in the dynamic simulations, a typical LAD flow waveform was implemented. The results show that the bifurcation angle has an influence on the maximum speed of the flow. In the stable flow condition, increasing the angle leads to a decrease in the maximum flow velocity. In the dynamic flow simulations, increasing the bifurcation angle leads to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual results and the calculated results.
Keywords: coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress
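For the steady inlet condition, a back-of-the-envelope check on the flow regime and wall shear stress can be sketched with Poiseuille-flow estimates. The blood viscosity, density, and vessel diameter below are typical textbook assumptions, not values taken from the study:

```python
def wall_shear_stress(mu, u_mean, diameter):
    """Poiseuille estimate of wall shear stress, tau_w = 8*mu*u_mean/D [Pa],
    for steady laminar flow in a circular pipe."""
    return 8.0 * mu * u_mean / diameter

def reynolds(rho, u_mean, diameter, mu):
    """Reynolds number, used to check that the laminar assumption is plausible."""
    return rho * u_mean * diameter / mu

# assumed blood properties in a ~3.5 mm coronary branch at the 0.17 m/s inlet velocity
tau = wall_shear_stress(mu=0.0035, u_mean=0.17, diameter=0.0035)
re = reynolds(rho=1060.0, u_mean=0.17, diameter=0.0035, mu=0.0035)
```

A Reynolds number of a few hundred is well inside the laminar range, consistent with the CFD setup; the CFD itself is needed precisely because the bifurcation breaks the idealised Poiseuille profile.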
Procedia PDF Downloads 164
5072 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of a text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in the case of these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary tasks given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems by using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. A DNN with one hidden layer was initialized, and the number of hidden layers was increased during training to five. A refinement, which consisted of the weight matrix plus a bias term, and Stochastic Gradient Descent (SGD) training were also performed. The objective function was the cross-entropy criterion.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (using the graph cost, the acoustic cost, and a combination of both) were applied as the selection technique. The performance of the ASR system is calculated by means of the Word Error Rate (WER). The test dataset was renewed in order to extract the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
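The decode-filter-retrain loop of steps (b) and (c) can be sketched abstractly. The `decode` callable stands in for the seed model plus its lattice-based confidence score; both it and the 0.8 threshold are hypothetical placeholders, not the paper's actual metrics:

```python
def select_confident(decoded, threshold=0.8):
    """Keep automatically decoded utterances whose confidence passes the
    threshold, yielding (audio, hypothesis) pairs usable as pseudo-labels."""
    return [(audio, hyp) for audio, hyp, conf in decoded if conf >= threshold]

def self_training_round(train_set, unlabeled, decode, threshold=0.8):
    """One semi-supervised round: decode each unlabeled audio with the seed
    model (decode returns (hypothesis, confidence)), then append the
    confident hypotheses to the training set for retraining."""
    decoded = [(a, *decode(a)) for a in unlabeled]
    return train_set + select_confident(decoded, threshold)

# toy stand-in decoder: one confident utterance, one uncertain one
decode = lambda a: ("hola mundo", 0.9) if a == "utt1.wav" else ("???", 0.1)
augmented = self_training_round([], ["utt1.wav", "utt2.wav"], decode)
```

Raising the threshold trades training-set size for pseudo-label precision, which is exactly the tension the three lattice-based confidence metrics address.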
Procedia PDF Downloads 339
5071 Micro-Meso 3D FE Damage Modelling of Woven Carbon Fibre Reinforced Plastic Composite under Quasi-Static Bending
Authors: Aamir Mubashar, Ibrahim Fiaz
Abstract:
This research presents a three-dimensional finite element modelling strategy to simulate damage in a quasi-static three-point bending analysis of a woven twill 2/2 carbon fibre reinforced plastic (CFRP) composite at the micro-meso level using the cohesive zone modelling technique. A meso-scale finite element model comprising a number of plies was developed in the commercial finite element code Abaqus/Explicit. The interfaces between the plies were explicitly modelled using cohesive zone elements to allow for debonding by crack initiation and propagation. The load-deflection response of the CFRP within the quasi-static range was obtained and compared with data existing in the literature, providing validation of the model at the global scale. The outputs of the global model were then used to develop a simulation model capturing micro-meso scale material features. The sub-model consisted of a refined-mesh representative volume element (RVE) built in the TexGen software, which was later embedded with cohesive elements in the finite element software environment. The developed strategy successfully predicted the overall load-deflection response and the damage in the global and sub-model at the flexure limit of the specimen. A detailed analysis of the effects of the micro-scale features was carried out.Keywords: woven composites, multi-scale modelling, cohesive zone, finite element model
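The cohesive zone elements mentioned above are typically governed by a traction-separation law. The following is a hedged sketch of a common bilinear form (the peak traction t_max and the separations delta0, delta_f below are illustrative, hypothetical parameters, not values from this study):

```python
def bilinear_traction(delta, t_max=60.0, delta0=1e-3, delta_f=1e-2):
    """Bilinear cohesive traction-separation law: linear elastic rise to the
    peak traction t_max at separation delta0, then linear softening down to
    zero traction (complete debonding) at delta_f. Units are illustrative."""
    if delta <= 0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0            # elastic branch
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening branch
    return 0.0                                    # fully failed
```

The area under this curve is the fracture energy that governs crack initiation and propagation at the ply interfaces.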
Procedia PDF Downloads 138
5070 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay
Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer
Abstract:
Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked EPBM 3-meter-diameter tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width, and check the structural adequacy of the installed pipe. In the numerical models, a Mohr-Coulomb constitutive model incorporating the effect of unloading was adopted for the fat clays, while for the bedrock layer, the generalized Hoek-Brown criterion was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The carefully staged excavation sequences made the analysis of this very soft clay possible; otherwise, obtaining convergence in the numerical analysis would not have been achievable. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM
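The generalized Hoek-Brown criterion used for the bedrock layer relates the principal stresses at failure as sigma1 = sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s)^a. A minimal sketch, with illustrative rock-mass parameters rather than values from this project:

```python
def hoek_brown_sigma1(sigma3, sigma_ci, mb, s, a):
    """Generalized Hoek-Brown failure criterion: major principal stress sigma1
    at failure given the minor principal stress sigma3, the intact uniaxial
    compressive strength sigma_ci, and the rock-mass constants mb, s, a."""
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Illustrative (hypothetical) rock mass: sigma_ci = 50 MPa, mb = 10, s = 1, a = 0.5
unconfined = hoek_brown_sigma1(0.0, 50.0, 10.0, 1.0, 0.5)   # strength at sigma3 = 0
confined = hoek_brown_sigma1(5.0, 50.0, 10.0, 1.0, 0.5)     # strength rises with confinement
```

For intact rock (s = 1, a = 0.5) the criterion reduces to the original Hoek-Brown form, and the unconfined strength equals sigma_ci.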
Procedia PDF Downloads 82
5069 Interactions between Residential Mobility, Car Ownership and Commute Mode: The Case for Melbourne
Authors: Solmaz Jahed Shiran, John Hearne, Tayebeh Saghapour
Abstract:
Daily travel behavior is strongly influenced by the location of the places of residence, education, and employment. Hence, a change in those locations due to a move or a change of occupation leads to a change in travel behavior. Given the interactions between housing mobility and travel behavior, the hypothesis is that a mobile housing market allows households to move in response to any change in their life course, allowing them to be closer to central services, public transport facilities, and the workplace, and hence reducing the time individuals spend on daily travel. Conversely, households' immobility may lead to longer commutes, for example, after a change of job or a need for new services such as schools for children who have reached school age. This paper aims to investigate the association between residential mobility and travel behavior. The Victorian Integrated Survey of Travel and Activity (VISTA) data is used for the empirical analysis. Car ownership and journey-to-work time and distance of employed people are used as indicators of travel behavior. A change of usual residence within the last five years was used to identify movers and non-movers. Statistical analysis, including regression models, is used to compare the travel behavior of movers and non-movers. The results show that travel time and distance do not differ between movers and non-movers. However, this is not the case when residence tenure type is taken into account. In addition, car ownership rate and number were found to be significantly higher for non-movers. It is hoped that the results from this study will contribute to a better understanding of factors other than common socioeconomic and built environment features influencing travel behavior.Keywords: journey to work, regression models, residential mobility, commute mode, car ownership
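The mover versus non-mover comparison of commute times described above can be illustrated with a two-sample test. The sketch below computes Welch's t-statistic (which does not assume equal variances) on hypothetical commute-time samples; the study itself used regression models on the VISTA data:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t-statistic and its approximate degrees of freedom
    (Welch-Satterthwaite); suitable when group variances differ."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    se = math.sqrt(var_a / na + var_b / nb)
    t = (mean_a - mean_b) / se
    df = (var_a / na + var_b / nb) ** 2 / (
        (var_a / na) ** 2 / (na - 1) + (var_b / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical commute times (minutes) for movers vs non-movers
t_stat, dof = welch_t([30.0, 32.0, 34.0], [31.0, 32.0, 33.0])
```

A t-statistic near zero, as with these made-up samples, matches the paper's finding that commute time does not differ between the two groups.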
Procedia PDF Downloads 134
5068 Architectural Visualization: From Ancient Civilizations to the Roman Empire
Authors: Matthias Stange
Abstract:
Architectural visualization has been practiced for as long as there have been buildings. Visualization (lat.: visibilis "visible") generally refers to bringing abstract data and relationships into a graphically, visually comprehensible form. Particularly, visualization refers to the process of translating relationships that are difficult to formulate linguistically or logically into visual media (e.g., drawings or models) to make them comprehensible. Building owners have always been interested in knowing how their building will look before it is built. In the empirical part of this study, the roots of architectural visualization are examined, starting from the ancient civilizations to the end of the Roman Empire. Extensive literature research on architectural theory and architectural history forms the basis for this analysis. The focus of the analysis is basic research from the emergence of the first two-dimensional drawings in the Neolithic period to the triggers of significant further developments of architectural representation, as well as their importance for subsequent methods and the transmission of knowledge over the following epochs. The analysis focuses on the development of analog methods of representation from the first Neolithic house floor plans to the Greek detailed stone models and paper drawings in the Roman Empire. In particular, the question of socio-cultural, socio-political, and economic changes as possible triggers for the development of representational media and methods will be analyzed. 
The study has shown that the development of visual building representation has been driven by scientific, technological, and social developments since the emergence of the first civilizations more than 6000 years ago, beginning with the change in humans' subsistence strategy from food appropriation by hunting and gathering to food production by agriculture and livestock, and the sedentary lifestyle this required.Keywords: ancient Greece, ancient orient, Roman Empire, architectural visualization
Procedia PDF Downloads 116
5067 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility
Authors: Dicko Ali Hamadi, Tong-Yette Nicolas, Gilles Benjamin, Faure Francois, Palombi Olivier
Abstract:
A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models efficiently interact while respecting the principles of Anatomy and Mechanics. The mobility, the intradiscal pressure, the facet joint force and the instantaneous center of rotation of the lumbar spine are validated against the experimental and theoretical results of the literature on flexion, extension, lateral bending as well as axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches without any loss of accuracy.Keywords: hybrid, modeling, fast simulation, lumbar spine
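The elastic facet joints and tension-only ligaments described above can be sketched as simple force elements. The stiffness values and the quadratic toe region below are illustrative assumptions for exposition, not the authors' calibrated model:

```python
def joint_wrench(displacement, stiffness):
    """Elastic joint with six degrees of freedom: three translational and
    three rotational displacements, each resisted by an independent
    (diagonal) stiffness, returning the restoring force/torque components."""
    assert len(displacement) == 6 and len(stiffness) == 6
    return [-k * d for k, d in zip(stiffness, displacement)]

def ligament_force(length, rest_length, k):
    """Non-linear one-dimensional elastic ligament: slack (zero force) when
    shorter than its rest length; in tension, a quadratic toe region is
    assumed here purely for illustration."""
    stretch = length - rest_length
    return k * stretch ** 2 if stretch > 0 else 0.0
```

In a hybrid model of this kind, such lightweight analytic elements couple the rigid vertebrae to the finite element discs without adding FEM degrees of freedom.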
Procedia PDF Downloads 306
5066 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier
Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez
Abstract:
The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works of Le Corbusier's late career. Some academic publications study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether their use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climate conditions by assessing these projects using BIM models and energy performance simulations in DesignBuilder. Accurate models were built using original historical data obtained through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses
Procedia PDF Downloads 65
5065 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements
Authors: Andrey Kupriyanov
Abstract:
In recent years, much attention has been paid to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS) around the world. This is due to the increase in solar activity, the expansion of the scope of GNSS, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of the method of transionospheric sounding, using measurements of signals from Global Navigation Satellite Systems, to determine the TEC distribution and scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined for the local area, is used. The organization of operational monitoring of ionospheric parameters is analyzed using several NovAtel GPStation6 base stations. This setup allows performing primary processing of GNSS measurement data, calculating TEC and detecting scintillation events, modeling the ionosphere using the obtained data, storing data, and performing ionospheric correction of measurements. As a result of the study, it was shown that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and provide operational information about the ionosphere, which is necessary for solving a number of practical problems in many applied fields. Also, the use of multi-frequency, multi-system GNSS equipment and special software will allow achieving the specified accuracy and volume of measurements.Keywords: global navigation satellite systems (GNSS), GPstation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)
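The TEC values monitored above are commonly derived from the geometry-free combination of dual-frequency pseudoranges, TEC = (1/40.3) * f1^2 f2^2 / (f1^2 - f2^2) * (P2 - P1). A minimal sketch for the GPS L1/L2 pair, ignoring inter-frequency biases and multipath, which operational processing (e.g. on the GPStation6) must additionally handle:

```python
# GPS carrier frequencies (Hz)
F1 = 1575.42e6  # L1
F2 = 1227.60e6  # L2

def slant_tec(p1, p2):
    """Slant TEC in TECU (1 TECU = 1e16 electrons/m^2) from dual-frequency
    pseudoranges p1, p2 in metres. The ionosphere delays the lower-frequency
    signal more, so the range difference p2 - p1 is proportional to TEC."""
    tec_el_per_m2 = (1.0 / 40.3) * (F1**2 * F2**2 / (F1**2 - F2**2)) * (p2 - p1)
    return tec_el_per_m2 / 1e16
```

Mapping the slant value to vertical TEC then requires an obliquity factor based on the satellite elevation and an assumed ionospheric shell height.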
Procedia PDF Downloads 181
5064 Dissolution Kinetics of Chevreul’s Salt in Ammonium Cloride Solutions
Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak
Abstract:
In this study, Chevreul’s salt solubility and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul’s salt used in the studies was obtained under the optimum conditions (ammonium sulphide concentration: 0.4 M; copper sulphate concentration: 0.25 M; temperature: 60°C; stirring speed: 600 rev/min; pH: 4; reaction time: 15 min) determined by T. Çalban et al. Chevreul’s salt solubility in ammonium chloride solutions and the kinetics of dissolution were investigated. The selected parameters affecting solubility were reaction temperature, concentration of ammonium chloride, stirring speed, and solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on Chevreul’s salt solubility was examined, and the integrated rate expression of the dissolution rate was found using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul’s salt increased with increasing temperature, ammonium chloride concentration, and stirring speed. On the other hand, the dissolution rate was found to decrease with increasing solid/liquid ratio. Based on the application of the experimental results to the kinetic models, we can deduce that the Chevreul’s salt dissolution rate is controlled by diffusion through the ash (or product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, including the effects of the parameters on Chevreul's salt dissolution, was found to be: 1 − 3(1−X)^(2/3) + 2(1−X) = [2.96×10^13 · (C_A)^3.08 · (S/L)^−0.38 · (W)^1.23 · e^(−9001.2/T)]·t.Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics
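The reported rate expression is the ash-layer diffusion form of the shrinking-core model, g(X) = 1 − 3(1−X)^(2/3) + 2(1−X) = k·t; note that the fitted exponent gives Ea = 9001.2 × R ≈ 74.83 kJ/mol, consistent with the stated activation energy. A sketch that evaluates k and inverts g(X) numerically to recover the conversion X at time t (the units of the fitted constants follow the paper's experimental conditions and are not restated here):

```python
import math

def conversion(t, temp_K, c_a, s_l, w):
    """Fraction dissolved X at time t from the ash-layer diffusion model
    1 - 3(1-X)**(2/3) + 2(1-X) = k*t, with k from the fitted expression
    (c_a: NH4Cl concentration, s_l: solid/liquid ratio, w: stirring speed)."""
    k = 2.96e13 * c_a**3.08 * s_l**-0.38 * w**1.23 * math.exp(-9001.2 / temp_K)
    g = min(k * t, 1.0)  # g(X) rises monotonically from 0 (X=0) to 1 (X=1)
    lo, hi = 0.0, 1.0    # invert g by bisection
    for _ in range(60):
        mid = (lo + hi) / 2
        if 1 - 3 * (1 - mid) ** (2 / 3) + 2 * (1 - mid) < g:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Bisection is safe here because g(X) is monotonically increasing on [0, 1], so each value of k·t (capped at 1) corresponds to a unique conversion.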
Procedia PDF Downloads 308
5063 Analysis of Capillarity Phenomenon Models in Primary and Secondary Education in Spain: A Case Study on the Design, Implementation, and Analysis of an Inquiry-Based Teaching Sequence
Authors: E. Cascarosa-Salillas, J. Pozuelo-Muñoz, C. Rodríguez-Casals, A. de Echave
Abstract:
This study focuses on improving the understanding of the capillarity phenomenon among Primary and Secondary Education students. Despite being a common concept in daily life and covered in various subjects, students’ comprehension remains limited. This work explores inquiry-based teaching methods to build a conceptual foundation of capillarity by examining the forces involved. The study adopts an inquiry-based teaching approach supported by research emphasizing the importance of modeling in science education. Scientific modeling aids students in applying knowledge across varied contexts and developing systemic thinking, allowing them to construct scientific models applicable to everyday situations. This methodology fosters the development of scientific competencies such as observation, hypothesis formulation, and communication. The research was structured as a case study with activities designed for Spanish Primary and Secondary Education students aged 9 to 13. The process included curriculum analysis, the design of an activity sequence, and its implementation in classrooms. Implementation began with questions that students needed to resolve using available materials, encouraging observation, experimentation, and the re-contextualization of activities to everyday phenomena where capillarity is observed. Data collection tools included audio and video recordings of the sessions, which were transcribed and analyzed alongside the students' written work. Students' drawings on capillarity were also collected and categorized. Qualitative analyses of the activities showed that, through inquiry, students managed to construct various models of capillarity, reflecting an improved understanding of the phenomenon. Initial activities allowed students to express prior ideas and formulate hypotheses, which were then refined and expanded in subsequent sessions. 
The generalization and use of graphical representations of their ideas on capillarity, analyzed alongside their written work, enabled the categorization of capillarity models: Intuitive Model: A visual and straightforward representation without explanations of how or why it occurs. Simple symbolic elements, such as arrows to indicate water rising, are used without detailed or causal understanding. It reflects an initial, immediate perception of the phenomenon, interpreted as something that happens "on its own" without delving into the microscopic level. Explanatory Intuitive Model: Students begin to incorporate causal explanations, though still limited and without complete scientific accuracy. They represent the role of materials and use basic terms such as ‘absorption’ or ‘attraction’ to describe the rise of water. This model shows a more complex understanding where the phenomenon is not only observed but also partially explained in terms of interaction, though without microscopic detail. School Scientific Model: This model reflects a more advanced and detailed understanding. Students represent the phenomenon using specific scientific concepts like ‘surface tension,’ cohesion,’ and ‘adhesion,’ including structured explanations connecting microscopic and macroscopic levels. At this level, students model the phenomenon as a coherent system, demonstrating how various forces or properties interact in the capillarity process, with representations on a microscopic level. The study demonstrated that the capillarity phenomenon can be effectively approached in class through the experimental observation of everyday phenomena, explained through guided inquiry learning. 
The methodology facilitated students’ construction of capillarity models and served to analyze an interaction phenomenon of different forces occurring at the microscopic level.Keywords: capillarity, inquiry-based learning, scientific modeling, primary and secondary education, conceptual understanding, drawing analysis
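The school scientific model described above (surface tension, cohesion, adhesion) can be quantified for teachers by Jurin's law, h = 2γ·cos θ / (ρ·g·r). The sketch below uses common textbook values for water in a clean glass tube; it is an illustration, not part of the classroom study:

```python
import math

def capillary_rise(radius_m, gamma=0.0728, contact_angle_deg=0.0,
                   density=1000.0, g=9.81):
    """Jurin's law: equilibrium capillary rise h = 2*gamma*cos(theta)/(rho*g*r).
    Defaults approximate water in clean glass at room temperature:
    surface tension gamma in N/m, density in kg/m^3, radius in metres."""
    theta = math.radians(contact_angle_deg)
    return 2 * gamma * math.cos(theta) / (density * g * radius_m)

# A 0.5 mm tube raises water roughly 3 cm - the kind of observable,
# everyday-scale effect the teaching sequence builds on.
height = capillary_rise(0.0005)
```

The inverse dependence on radius explains the classroom observation that water climbs visibly higher in narrower tubes and finer porous materials.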
Procedia PDF Downloads 14
5062 Existing International Cooperation Mechanisms and Proposals to Enhance Their Effectiveness for Marine-Based Geoengineering Governance
Authors: Aylin Mohammadalipour Tofighi
Abstract:
Marine-based geoengineering methods, proposed to mitigate climate change, operate primarily through two mechanisms: reducing atmospheric carbon dioxide levels and diminishing solar absorption by the oceans. While these approaches promise beneficial outcomes, they are fraught with environmental, legal, ethical, and political challenges, necessitating robust international governance. This paper underscores the critical role of international cooperation within the governance framework, offering a focused analysis of existing international environmental mechanisms applicable to marine-based geoengineering governance. It evaluates the efficacy and limitations of current international legal structures, including treaties and organizations, in managing marine-based geoengineering, noting significant gaps such as the absence of specific regulations, dedicated international entities, and explicit governance mechanisms such as monitoring. To rectify these problems, the paper advocates for concrete steps to bolster international cooperation. These include the formulation of dedicated marine-based geoengineering guidelines within international agreements, the establishment of specialized supervisory entities, and the promotion of transparent, global consensus-building. These recommendations aim to foster governance that is environmentally sustainable, ethically sound, and politically feasible, thereby enhancing knowledge exchange, spurring innovation, and advancing the development of marine-based geoengineering approaches. This study emphasizes the importance of collaborative approaches in managing the complexities of marine-based geoengineering, contributing significantly to the discourse on international environmental governance in the face of rapid climate and technological changes.Keywords: climate change, environmental law, international cooperation, international governance, international law, marine-based geoengineering, marine law, regulatory frameworks
Procedia PDF Downloads 74
5061 Kýklos Dimensional Geometry: Entity Specific Core Measurement System
Authors: Steven D. P Moore
Abstract:
A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific derived geometric measurement system shares similar fractal methods, in which a ‘fractal transformation operator’ is applied to a set S to produce a union of N copies. The Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axes intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents whose union equals the interval geometry/length. Setting a limit on subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method is captured by allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration range from the celestial to the subatomic. The application of this offers incredible possibilities, for example, geometric architecture representing scaled celestial models that incorporate planets (spheroids) and celestial motion (elliptical orbits).Keywords: Kyklos, geometry, measurement, celestial, dimension
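The division of a 1D core measure into subcomponents whose union equals the original interval can be sketched as a simple iterative subdivision. This is one interpretation of the method as described, assuming a uniform split into N copies per iteration:

```python
def subdivide(interval_length, copies, iterations):
    """Iteratively split a 1D core measure (e.g. a radius interval) into
    `copies` equal subcomponents per iteration; the union (sum) of the
    subcomponents always reproduces the original interval length, and a
    finite iteration limit keeps every subcomponent a finite value."""
    parts = [interval_length]
    for _ in range(iterations):
        parts = [p / copies for p in parts for _ in range(copies)]
    return parts
```

After k iterations with N copies there are N^k subcomponents, each of length interval_length / N^k, mirroring the fractal-style transformation operator mentioned above.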
Procedia PDF Downloads 166
5060 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI-processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e. about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e. 322 million agents).Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
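The balanced partitioning of agents based on the representative employer-employee graph can be illustrated by a greedy longest-processing-time heuristic that keeps each firm's employees on one MPI rank. This is a simplified stand-in for the paper's partitioning scheme, with hypothetical firm sizes:

```python
import heapq

def partition_firms(firm_sizes, n_procs):
    """Greedy longest-processing-time partition: visit firms from largest to
    smallest and assign each firm (with all its employees) to the currently
    least-loaded process, balancing agent counts across MPI ranks while
    keeping each employer-employee cluster on a single rank."""
    heap = [(0, rank) for rank in range(n_procs)]  # (current load, rank)
    heapq.heapify(heap)
    assignment = {}
    for firm, size in sorted(firm_sizes.items(), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(heap)
        assignment[firm] = rank
        heapq.heappush(heap, (load + size, rank))
    return assignment
```

Co-locating each employer-employee cluster is what allows the dense wage and hiring interactions to be resolved without inter-process messages, leaving only the sparser cross-rank graphs (markets, credit) to MPI communication.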
Procedia PDF Downloads 128
5059 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production affect production positively. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
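As a minimal illustration of the regression component (the study itself uses multiple predictors, lasso, and random forest), the sketch below fits a single-predictor least-squares line, e.g. cereal yield against mean temperature, with made-up numbers:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x: returns intercept a and
    slope b from the closed-form solution b = cov(x, y) / var(x)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
    a = mean_y - b * mean_x
    return a, b
```

With several predictors, the same idea generalizes to solving the normal equations (or to lasso's penalized variant, which additionally shrinks weak coefficients toward zero); the sign of each fitted coefficient is what supports statements such as temperature affecting yield negatively.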
Procedia PDF Downloads 92
5058 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales
Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias
Abstract:
Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition Scales (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). 
However, after controlling for depression, only two specific complaints of remembering appointments, meetings, and engagements and understanding spoken directions and instructions were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (Interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p <0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (Interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with a decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD may be warranted.Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline
Procedia PDF Downloads 80