Search results for: innovation maturity models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8610

5490 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column

Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan

Abstract:

Under the influence of waves, oil in the sea is subject to vertical scattering in the water column. Of the processes affecting oil in the marine environment, the dispersion of oil into the water column is among the least understood, which highlights the need for research and study in this field. Therefore, this study investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results for analyzing the distribution of petroleum pollutants in deep water, needed both to understand the physics of the phenomenon and to calibrate numerical models, led to the development of laboratory models in this research. In line with the aim of the present study, which is to investigate the distribution of oil in the homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the ideal conditions were reached, and its distribution into deep water due to turbulence was investigated. In this study, all experimental processes were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in the oscillating-grid environment. Finally, the required velocity measurements were taken at depths of 10, 15, 20, and 25 cm from the water surface and used in the analysis of oil diffusion in terms of turbulence parameters. The results showed that, comparing the static mode of the present system with grid motion at a frequency of 0.8 Hz, oil diffusion at the four mentioned depths was, from top to bottom, 26.18%, 31.5%, 37.5%, and 50% greater at 0.8 Hz than in the static mode. Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49%, 61.5%, 85%, and 146.1%, respectively, compared to the base (static) state.

Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill

Procedia PDF Downloads 75
5489 Lateral Torsional Buckling Investigation on Welded Q460GJ Structural Steel Unrestrained Beams under a Point Load

Authors: Yue Zhang, Bo Yang, Gang Xiong, Mohamed Elchalakani, Shidong Nie

Abstract:

This study aims to investigate the lateral torsional buckling of I-shaped cross-section beams fabricated from Q460GJ structural steel plates. Both experimental and numerical simulation results are presented in this paper. A total of eight specimens were tested under three-point bending, and the corresponding numerical models were established to conduct parametric studies. The effects of key parameters, such as the non-dimensional member slenderness and the height-to-width ratio, were investigated based on the verified numerical models. The results obtained from the parametric studies were also compared with the predictions calculated by different design codes, including the Chinese design code (GB50017-2003, 2003), the new draft version of the Chinese design code (GB50017-201X, 2012), Eurocode 3 (EC3, 2005), and the North American design code (ANSI/AISC360-10, 2010). These comparisons indicated that the sectional height-to-width ratio does not play an important role in the overall stability load-carrying capacity of Q460GJ structural steel beams with welded I-shaped cross-sections. It was also found that the design methods in GB50017-2003 and ANSI/AISC360-10 overestimate the overall stability and load-carrying capacity of Q460GJ welded I-shaped cross-section beams.
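
For reference, the non-dimensional member slenderness compared across these codes is defined in EC3 (EN 1993-1-1) for lateral torsional buckling as

$$\bar{\lambda}_{LT} = \sqrt{\frac{W_y f_y}{M_{cr}}},$$

where $W_y$ is the appropriate section modulus, $f_y$ the yield strength, and $M_{cr}$ the elastic critical moment for lateral torsional buckling; the other codes use analogous definitions.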

Keywords: experimental study, finite element analysis, global stability, lateral torsional buckling, Q460GJ structural steel

Procedia PDF Downloads 328
5488 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations

Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik

Abstract:

The article is concerned with the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor using a 3D cross-section model. The authors simulate the operation of the motor (applied to a Fiat Panda car) under selected driving conditions, such as driving on a road with a slope of 20%, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h. The demand for drive power, taking into account the resistance to motion, was determined for the selected driving conditions. The parameters of the motor operation and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected car driving parameters. The calculated power losses are used in 3D models for thermal calculations using the CFD method. The detailed construction of the thermal models, with material data, boundary conditions, and the losses calculated using the 2D FEM method, is presented in the article. The article presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body, and cooling system components. The losses generated in individual motor components and their impact on the limitation of its operating parameters are described by the authors. Attention is paid to the losses generated in the permanent magnets, which are a source of heat that is difficult to remove from inside the motor. The presented calculation results show how the individual motor power losses, generated under different load conditions while driving, affect its thermal state.

Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor

Procedia PDF Downloads 175
5487 An Experimental (Wind Tunnel) and Numerical (CFD) Study on the Flow over Hills

Authors: Tanit Daniel Jodar Vecina, Adriane Prisco Petry

Abstract:

The shape of the wind velocity profile changes according to local features of terrain shape and roughness, which are the parameters responsible for defining the Atmospheric Boundary Layer (ABL) profile. Air flow characteristics over and around landforms, such as hills, are of considerable importance for applications related to wind farm and turbine engineering. The air flow accelerates on top of hills, which can be a decisive factor for wind turbine placement. The present work focuses on the study of ABL behavior as a function of the slope and surface roughness of hill-shaped landforms, using Computational Fluid Dynamics (CFD) to build wind velocity and turbulence intensity profiles. The Reynolds-Averaged Navier-Stokes (RANS) equations are closed using the SST k-ω turbulence model; numerical results are compared to experimental data measured in a wind tunnel over scale models of the hills under consideration. Eight hill models with slopes varying from 25° to 68° were tested for two types of terrain categories in 2D and 3D, and two analytical codes were used to represent the inlet velocity profiles. Numerical results for the velocity profiles show differences under 4% when compared to their respective experimental data. Turbulence intensity profiles show maximum differences of around 7% when compared to experimental data; this can be explained by the fact that it was not possible to impose inlet turbulence intensity profiles in the simulations. Instead, constant values based on the averages of the turbulence intensity at the wind tunnel inlet were used.
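
The abstract does not name the two analytical codes used for the inlet velocity profiles; the standard candidates for ABL inlets, given here as context, are the logarithmic law and the power law:

$$u(z) = \frac{u_*}{\kappa}\ln\!\left(\frac{z + z_0}{z_0}\right), \qquad u(z) = u_{ref}\left(\frac{z}{z_{ref}}\right)^{\alpha},$$

where $u_*$ is the friction velocity, $\kappa \approx 0.41$ the von Kármán constant, $z_0$ the aerodynamic roughness length, and $\alpha$ a terrain-dependent exponent.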

Keywords: Atmospheric Boundary Layer, Computational Fluid Dynamics (CFD), Numerical Modeling, Wind Tunnel

Procedia PDF Downloads 380
5486 A Tool for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation of an institutional risk profile for the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the setup of risk factors using only the values that are most important for a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision-making for the preservation of digital content in libraries and archives using domain expert knowledge and file format metadata automatically aggregated from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
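
As a minimal illustration of presenting aggregated risk information as a multidimensional vector, the following Python sketch uses invented risk-factor dimensions, scores, and weights; the actual factors and fuzzy models of the paper are not specified in the abstract.

```python
# Hypothetical risk-factor dimensions for one file format (0 = safe, 1 = endangered).
file_format_risk = {
    "software_support": 0.2,
    "community_adoption": 0.4,
    "specification_openness": 0.1,
    "migration_tools": 0.6,
}
weights = {k: 1.0 for k in file_format_risk}  # expert-tunable weights

# Aggregate into a vector (one entry per dimension) and an overall level.
dimensions = sorted(file_format_risk)
risk_vector = [file_format_risk[k] * weights[k] for k in dimensions]
endangerment = sum(risk_vector) / len(risk_vector)
print(dict(zip(dimensions, risk_vector)), f"overall: {endangerment:.2f}")
```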

Keywords: digital information management, file format, endangerment analysis, fuzzy models

Procedia PDF Downloads 404
5485 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review

Authors: Faisal Muhibuddin, Ani Dijah Rahajoe

Abstract:

This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with deeper insights. Furthermore, the study explores the temporal implications of crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also highlighted as a crucial element in enhancing responsiveness and accuracy in crime classification.
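
As a purely illustrative sketch of one technique surveyed (Naïve Bayes text classification), the following Python snippet uses invented crime descriptions and categories; it is not taken from any of the reviewed studies.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up incident reports and labels, for illustration only.
reports = [
    "wallet stolen from parked vehicle",
    "window smashed and laptop taken from home",
    "victim threatened with knife on street",
    "phone snatched from pedestrian",
]
categories = ["theft", "burglary", "assault", "theft"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(reports, categories)
print(clf.predict(["bicycle stolen outside shop"]))  # likely ['theft']
```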

Keywords: data mining, classification algorithm, naïve bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review

Procedia PDF Downloads 66
5484 Design and Development of a Computerized Medical Record System for Hospitals in Remote Areas

Authors: Grace Omowunmi Soyebi

Abstract:

A computerized medical record system is a collection of medical information about a person that is stored on a computer. One principal problem of most hospitals in rural areas is the use of a manual file management system for keeping records. A lot of time is wasted when a patient visits the hospital, possibly in an emergency, and the nurse or attendant has to search through voluminous files before the patient's file can be retrieved; this delay may cause something unexpected to happen to the patient. This application is to be designed using a structured system analysis and design method, which will help in a well-articulated analysis of the existing file management system, a feasibility study, and proper documentation of the design and implementation of the computerized medical record system. This computerized system will replace the file management system and help to quickly retrieve a patient's record with increased data security, provide access to clinical records for decision-making, and reduce the time it takes for a patient to be attended to.

Keywords: programming, data, software development, innovation

Procedia PDF Downloads 87
5483 StockTwits Sentiment Analysis on Stock Price Prediction

Authors: Min Chen, Rubi Gupta

Abstract:

Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share their discussions and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bag-of-words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify each tweet's sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. The experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a period of nine months demonstrate the effectiveness of our study in improving prediction accuracy.
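
A minimal sketch of the kind of pipeline described is given below; the tweets, labels, and daily series are made-up placeholders, and the exact preprocessing and classifiers used by the authors are not specified beyond the abstract.

```python
import re
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["$AAPL breaking out, going long!", "$AAPL looks weak, selling here"]
labels = [1, 0]  # 1 = bullish, 0 = bearish

def preprocess(text):
    text = text.lower()                       # case normalization
    return re.sub(r"[^a-z0-9$ ]", " ", text)  # special character removal

model = make_pipeline(
    TfidfVectorizer(preprocessor=preprocess, stop_words="english",
                    ngram_range=(1, 2)),      # bag-of-words/N-grams with TF-IDF
    LogisticRegression(),
)
model.fit(tweets, labels)

# Correlate aggregated daily sentiment with daily price movement.
daily_sentiment = np.array([0.6, -0.2, 0.3, 0.8])      # placeholder values
daily_returns = np.array([0.01, -0.005, 0.002, 0.015])
r, p = pearsonr(daily_sentiment, daily_returns)
print(f"Pearson r = {r:.2f} (p = {p:.2f})")
```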

Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing

Procedia PDF Downloads 156
5482 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning

Authors: Mayowa A. Sofowora, Seraphin D. Eyono Obono

Abstract:

The use of information and communication technologies such as computers, mobile phones, and the internet is becoming prevalent in today's world, facilitating access to a vast amount of data, services, and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when damaged or lost; this problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives supporting this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objective is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the ILembe and Umgungudlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations after checking the reliability and validity of the questionnaire. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on one hand, and their perceptions of the reliability of cellphones on the other hand, as suggested by existing literature; except that attribution identities are considered in this study under three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.

Keywords: attribution, cellphones, e-learning, reliability

Procedia PDF Downloads 403
5481 Findings on Modelling Carbon Dioxide Concentration Scenarios in the Nairobi Metropolitan Region before and during COVID-19

Authors: John Okanda Okwaro

Abstract:

Carbon (IV) oxide (CO₂) is emitted mainly from fossil fuel combustion and industrial production. The sources of carbon (IV) oxide of interest in the study area are mining activities, transport systems, and industrial processes. This study is aimed at building models that will help in monitoring the emissions within the study area. Three scenarios were discussed, namely: the pessimistic scenario, the business-as-usual scenario, and the optimistic scenario. The results showed a reduction in carbon dioxide concentration of approximately 50.5 ppm between March 2020 and January 2021 inclusive. This is mainly due to reduced human activities that led to decreased consumption of energy. Also, the CO₂ concentration trend follows the business-as-usual (BAU) scenario path. From the models, the pessimistic, business-as-usual, and optimistic scenarios give CO₂ concentrations of about 545.9 ppm, 408.1 ppm, and 360.1 ppm, respectively, on December 31st, 2021. This research helps paint a picture for policymakers of the relationship between energy sources and CO₂ emissions. Since the reduction in CO₂ emissions was due to the decreased use of fossil fuels as economic activity declined, if Kenya relies more on green energy than on fossil fuels in the post-COVID-19 period, there will be a greater reduction in CO₂ emissions. That is, the CO₂ concentration trend is likely to follow the optimistic scenario path, hence a reduction in CO₂ concentration of about 48 ppm by the end of the year 2021. This research recommends investment in solar energy by energy-intensive companies, mine machinery and equipment maintenance, investment in electric vehicles, and doubling tree planting efforts to achieve the 10% tree cover target.

Keywords: forecasting, greenhouse gas, green energy, hierarchical data format

Procedia PDF Downloads 168
5480 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters are becoming increasingly difficult to determine because of additional uncertainty related to anthropogenic emissions. According to the sixth Technical Paper of the Intergovernmental Panel on Climate Change (IPCC), on Climate Change and Water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Of the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as an input source to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture, and other hydrological variables of interest.
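
As a toy illustration of statistical downscaling as described above, the sketch below fits a regression between coarse GCM predictors and a station-scale predictand; the synthetic data and the choice of plain linear regression are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_days = 365
gcm_predictors = np.column_stack([
    rng.normal(1012, 5, n_days),  # mean sea-level pressure (hPa)
    rng.normal(285, 4, n_days),   # 850 hPa temperature (K)
    rng.normal(60, 15, n_days),   # relative humidity (%)
])
# Synthetic station-scale predictand (precipitation, mm/day).
station_precip = (0.05 * gcm_predictors[:, 2]
                  - 0.02 * (gcm_predictors[:, 0] - 1012)
                  + rng.gamma(2.0, 0.5, n_days))

downscaler = LinearRegression().fit(gcm_predictors, station_precip)

# Apply the fitted relationship to a perturbed scenario field
# (e.g., warmer and drier) to obtain local projections.
scenario = gcm_predictors + np.array([0.0, 2.0, -5.0])
projected_precip = downscaler.predict(scenario)
print(projected_precip[:3].round(2))
```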

Keywords: climate change, downscaling, GCM, RCM

Procedia PDF Downloads 406
5479 Globalization of Pesticide Technology and Sustainable Agriculture

Authors: Gagandeep Kaur

Abstract:

The pesticide industry is a big supplier of agricultural inputs. Pesticides are used to control weeds, fungal diseases, and other causes of yield losses in agricultural production. Globalization of markets, competition, and innovation are the dominant trends in agribusiness and the agrichemical industry. Innovation in the agrichemical industry is limited by its tradition of increasing the productivity of agro-systems through generic, universally applicable technologies. The marketing of agricultural technology needs to deal with various trends, such as locally organized forces that envision regionalized sustainable agriculture in the future. Agricultural production has changed dramatically over the past century. Before the Second World War, agricultural production was characterized by low monetary input, high labor, mixed farming, and low yields. Although mineral fertilizers were already applied in the second half of the 19th century, most of the crops were restricted by local climatic, geological, and ecological conditions. After the Second World War, in the period of reconstruction, political and socioeconomic pressure changed the nature of agricultural production. For a growing population, food security at low prices and securing farmer income at acceptable levels became political priorities. Current agricultural policy, such as the new European Common Agricultural Policy, is aimed at reducing overproduction, liberalizing world trade, and protecting landscapes and natural habitats. Farmers have to increase the quality of their production, and they have to control costs because of increased competition from the world market. Pesticides should be more effective at lower application doses, less toxic, and pose no threat to groundwater. There is a big debate taking place about how and whether to mitigate the intensive use of pesticides. This debate is about the future of agriculture: sustainable agriculture. This is possible by moving away from conventional agriculture, which is characterized by high inputs and high yields. The use of pesticides in conventional agriculture implies crop production on a wide scale. Moving away from conventional agriculture is possible through the gradual adoption of less disturbing and polluting agricultural practices at the level of the cropping system. A healthy environment for crop production in the future requires the maintenance of its chemical, physical, and biological properties. It is also necessary to minimize the emission of volatile compounds into the atmosphere. Companies are limiting themselves to a particular interpretation of sustainable development, characterized by technological optimism and production maximization. The main objective of the paper is therefore to present the trends in the pesticide industry and in agricultural production in the era of globalization. The second objective is to analyze sustainable agriculture. Pesticide companies seem to have identified biotechnology as a promising alternative and supplement to the conventional business of selling pesticides. The agricultural sector is in the process of transforming its conventional mode of operation. Some experts suggest that farmers move towards precision farming, and some suggest engaging in organic farming. The methodology of the paper is historical and analytical. Both primary and secondary sources are used.

Keywords: globalization, pesticides, sustainable development, organic farming

Procedia PDF Downloads 98
5478 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation

Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi

Abstract:

Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Apart from the habitual risk factors, there are some inherent factors in each individual that can increase the risk potential of CVDs. Vessel shape and geometry are influential factors, having a great impact on the blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of the bifurcation angle on blood flow characteristics is studied. To approach this topic, the details of the bifurcation were simplified and three models with angles of 30°, 45°, and 60° were created; then, using CFD analysis, the response of these models to steady and pulsatile flow was studied. To eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed in the simulations, while the other parameters remained constant throughout the research. Simulations were conducted under steady and dynamic conditions. In the steady flow simulation, a plug-flow velocity of 0.17 m/s was maintained at the inlet, and in the dynamic simulations, a typical LAD flow waveform was implemented. The results show that the bifurcation angle has an influence on the maximum speed of the flow. Under the steady flow condition, increasing the angle led to a decrease in the maximum flow velocity. In the dynamic flow simulations, increasing the bifurcation angle led to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual results and the calculated results.

Keywords: coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress

Procedia PDF Downloads 164
5477 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as a necessary solution given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audios and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, and the number of hidden layers was increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed; the objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Several experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result was a relative WER improvement of 6%. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
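
The following Python sketch shows only the structure of steps (a)-(c); train() and decode() are hypothetical stand-ins for the actual DNN training and lattice decoding toolchain, which the abstract does not name.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    audio: str            # path to the audio file
    transcript: str = ""  # empty for unlabeled data

def train(labeled):      # stand-in for DNN training (cross-entropy, SGD)
    return {"trained_on": len(labeled)}

def decode(model, utt):  # stand-in for lattice decoding with a confidence score
    return "hypothesis text", 0.9

labeled = [Utterance("a1.wav", "hola mundo")]
unlabeled = [Utterance(f"u{i}.wav") for i in range(5)]

seed_model = train(labeled)            # (a) train the seed model
threshold = 0.8                        # confidence cutoff (graph/acoustic cost)
for utt in unlabeled:                  # (b) decode the unlabeled set
    hyp, conf = decode(seed_model, utt)
    if conf >= threshold:              # (c) keep only confident transcriptions
        utt.transcript = hyp
        labeled.append(utt)
final_model = train(labeled)           # retrain on the enlarged set
```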

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 339
5476 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay

Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer

Abstract:

Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin pipe-jacked EPBM tunnels of 3 m diameter with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width, and check the structural adequacy of the installed pipe. In the numerical models, a Mohr-Coulomb constitutive model including the effect of unloading was adopted for the fat clays, while for the bedrock layer, the generalized Hoek-Brown criterion was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The carefully studied excavation sequences made the analysis of this very soft clay possible; otherwise, obtaining convergence in the numerical analysis would have been impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.
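
For reference, the generalized Hoek-Brown criterion mentioned above has the standard form

$$\sigma_1' = \sigma_3' + \sigma_{ci}\left(m_b\,\frac{\sigma_3'}{\sigma_{ci}} + s\right)^{a},$$

where $\sigma_{ci}$ is the uniaxial compressive strength of the intact rock and $m_b$, $s$, and $a$ are rock-mass constants; the values adopted for the bedrock in this study are not given in the abstract.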

Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM

Procedia PDF Downloads 82
5475 Interactions between Residential Mobility, Car Ownership and Commute Mode: The Case for Melbourne

Authors: Solmaz Jahed Shiran, John Hearne, Tayebeh Saghapour

Abstract:

Daily travel behavior is strongly influenced by the location of the places of residence, education, and employment. Hence, a change in those locations due to a move or a change in occupation leads to a change in travel behavior. Given the interactions between housing mobility and travel behavior, the hypothesis is that a mobile housing market allows households to move in response to any change in their life course, allowing them to be closer to central services, public transport facilities, and the workplace, and hence reducing the time individuals spend on daily travel. Conversely, households' immobility may lead to longer commutes, for example, after a change of job or a need for new services such as schools for children who have reached school age. This paper aims to investigate the association between residential mobility and travel behavior. The Victorian Integrated Survey of Travel and Activity (VISTA) data is used for the empirical analysis. Car ownership and the journey-to-work time and distance of employed people are used as indicators of travel behavior. A change of usual residence within the last five years is used to identify movers and non-movers. Statistical analysis, including regression models, is used to compare the travel behavior of movers and non-movers. The results show that travel time and distance do not differ between movers and non-movers. However, this is not the case when taking residence tenure type into account. In addition, the car ownership rate and the number of cars owned were found to be significantly higher for non-movers. It is hoped that the results from this study will contribute to a better understanding of factors other than common socioeconomic and built environment features that influence travel behavior.

Keywords: journey to work, regression models, residential mobility, commute mode, car ownership

Procedia PDF Downloads 134
5474 Architectural Visualization: From Ancient Civilizations to the Roman Empire

Authors: Matthias Stange

Abstract:

Architectural visualization has been practiced for as long as there have been buildings. Visualization (from Latin visibilis, "visible") generally refers to bringing abstract data and relationships into a graphically, visually comprehensible form. In particular, visualization refers to the process of translating relationships that are difficult to formulate linguistically or logically into visual media (e.g., drawings or models) to make them comprehensible. Building owners have always been interested in knowing how their building will look before it is built. In the empirical part of this study, the roots of architectural visualization are examined, from the ancient civilizations to the end of the Roman Empire. Extensive literature research on architectural theory and architectural history forms the basis for this analysis. The focus of the analysis is basic research, from the emergence of the first two-dimensional drawings in the Neolithic period to the triggers of significant further developments in architectural representation, as well as their importance for subsequent methods and the transmission of knowledge over the following epochs. The analysis concentrates on the development of analog methods of representation, from the first Neolithic house floor plans to the detailed Greek stone models and paper drawings in the Roman Empire. In particular, the question of socio-cultural, socio-political, and economic changes as possible triggers for the development of representational media and methods is analyzed. The study has shown that the development of visual building representation has been driven by scientific, technological, and social developments since the emergence of the first civilizations more than 6000 years ago: first by the change in humans' subsistence strategy, from food appropriation by hunting and gathering to food production by agriculture and livestock, and by the sedentary lifestyle this required.

Keywords: ancient Greece, ancient orient, Roman Empire, architectural visualization

Procedia PDF Downloads 116
5473 Exploring Mechanical Properties of Additive Manufacturing Ceramic Components Across Techniques and Materials

Authors: Venkatesan Sundaramoorthy

Abstract:

The field of ceramics has undergone a remarkable transformation with the advent of additive manufacturing technologies. This comprehensive review explores the mechanical properties of additively manufactured ceramic components, focusing on key materials such as alumina, zirconia, and silicon carbide. The study surveys the work of various authors on additive manufacturing techniques, including Stereolithography, Powder Bed Fusion, and Binder Jetting, highlighting their advantages and challenges. It provides a detailed analysis of the mechanical properties of these ceramics, offering insights into their hardness, strength, fracture toughness, and thermal conductivity. Factors affecting mechanical properties, such as microstructure and post-processing, are thoroughly examined. Recent advancements and future directions in 3D-printed ceramics are discussed, showcasing the potential for further optimization and innovation. This review underscores the profound implications of additive manufacturing for ceramics in industries such as aerospace, healthcare, and electronics, ushering in a new era of engineering and design possibilities for ceramic components.

Keywords: mechanical properties, additive manufacturing, ceramic materials, PBF

Procedia PDF Downloads 66
5472 The Appropriation of Education Policy on Information and Communication Technology in South African Schools

Authors: T. Vandeyar

Abstract:

The purpose of this study is to explore how government policy on ICT influences teaching and learning in South African schools. An instrumental case study using backward mapping principles as a strategy of inquiry was used. Utilizing a social constructivist lens and guided by a theoretical framework of a sociocultural approach to policy analysis, this exploratory qualitative research study set out to investigate how teachers appropriate government policy on ICT in South African schools. Three major findings emanated from this study. First, although teachers were ignorant of the national e-education policy, their professionalism and agency were key in formulating and implementing an e-education policy in practice. Second, teachers repositioned themselves not as recipients of or reactors to the e-education policy but as social and cultural actors in policy appropriation and formulation. Third, the lack of systemic support for teachers catalyzed improved school and teacher collaboration; teachers became drivers of ICT integration through collaboration, innovation, institutional practice, and institutional leadership.

Keywords: ICT, teachers as change agents, practice as policy, teachers' beliefs, teachers' attitudes

Procedia PDF Downloads 476
5471 Construction and Validation of a Hybrid Lumbar Spine Model for the Fast Evaluation of Intradiscal Pressure and Mobility

Authors: Dicko Ali Hamadi, Tong-Yette Nicolas, Gilles Benjamin, Faure Francois, Palombi Olivier

Abstract:

A novel hybrid model of the lumbar spine, allowing fast static and dynamic simulations of the disc pressure and the spine mobility, is introduced in this work. Our contribution is to combine rigid bodies, deformable finite elements, articular constraints, and springs into a unique model of the spine. Each vertebra is represented by a rigid body controlling a surface mesh to model contacts on the facet joints and the spinous process. The discs are modeled using a heterogeneous tetrahedral finite element model. The facet joints are represented as elastic joints with six degrees of freedom, while the ligaments are modeled using non-linear one-dimensional elastic elements. The challenge we tackle is to make these different models interact efficiently while respecting the principles of anatomy and mechanics. The mobility, the intradiscal pressure, the facet joint force, and the instantaneous center of rotation of the lumbar spine are validated against experimental and theoretical results from the literature in flexion, extension, lateral bending, and axial rotation. Our hybrid model greatly simplifies the modeling task and dramatically accelerates the simulation of the pressure within the discs, as well as the evaluation of the range of motion and the instantaneous centers of rotation, without penalizing precision. These results suggest that for some types of biomechanical simulations, simplified models allow far easier modeling and faster simulations compared to usual full-FEM approaches, without any loss of accuracy.

Keywords: hybrid, modeling, fast simulation, lumbar spine

Procedia PDF Downloads 306
5470 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier

Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez

Abstract:

The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered some of the most important works produced by Le Corbusier in the last stage of his career. Some academic publications study the compositional and formal aspects of their architectural design, but there is no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether their use of climatic data matches the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climate conditions, by assessing these projects using BIM models and energy performance simulations from DesignBuilder. Accurate models were built using original historical data obtained through archival research. The outcome is a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that these houses achieved a model of low energy consumption. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.

Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses

Procedia PDF Downloads 65
5469 Effect of Planting Date on Quantitative and Qualitative Characteristics of Different Bread Wheat and Durum Cultivars

Authors: Mahdi Nasiri Tabrizi, A. Dadkhah, M. Khirkhah

Abstract:

In order to study the effect of planting date on yield, yield components, and quality traits in bread and durum wheat varieties, a split-plot field experiment based on a completely randomized design with three replications was conducted at the Agricultural and Natural Resources Research Center of Razavi Khorasan, located in the city of Mashhad, during 2013-2014. The main factor consisted of five sowing dates (1 October, 15 December, 1 March, 10 March, and 20 March), and the sub-factor consisted of different bread wheat cultivars (Bahar, Pishgam, Pishtaz, Mihan, Falat, and Karim) and two durum wheat cultivars (Dena and Dehdasht). According to the results of the analysis of variance, the effect of planting date was significant for all examined traits (grain yield, biological yield, harvest index, number of grains per spike, thousand-kernel weight, number of spikes per square meter, plant height, number of days to heading, number of days to maturity, duration of the grain filling period, percentage of wet gluten, percentage of dry gluten, gluten index, and percentage of protein). With delayed planting, the majority of traits significantly decreased, except the quality traits (percentage of wet gluten, percentage of dry gluten, and percentage of protein). The comparison of means showed that, among the planting dates, the highest grain yield and biological yield were obtained at the first planting date (October), with mean productions of 5.6 and 17.1 tons per hectare, respectively, while the highest bread quality (gluten index, with a mean of 85) and protein percentage (with a mean of 13%) belonged to the fifth planting date. The effect of genotype was also significant for all traits. The highest grain yield among the studied wheat genotypes was related to the Dehdasht cultivar, with an average production of 4.4 tons per hectare. The highest protein percentage and bread quality (gluten index) were related to the Dehdasht cultivar with 13.4% and the Falat cultivar with an index of 90, respectively. The interaction between cultivar and planting date was significant for all traits, and different varieties showed different trends for these traits. The highest grain yield was related to the first planting date (October) and the Falat cultivar, with an average production of 6.7 tons per hectare, although it did not differ significantly from the Pishtaz and Mihan cultivars. The highest gluten index (bread quality index) and protein percentage belonged to the third planting date, with the Karim cultivar at 98.7 and the Dena cultivar at 14.7%, respectively.

Keywords: yield component, yield, planting date, cultivar, quality traits, wheat

Procedia PDF Downloads 430
5467 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions

Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak

Abstract:

In this study, the solubility of Chevreul's salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul's salt used in the studies was obtained under the optimum conditions (ammonium sulphide concentration, 0.4 M; copper sulphate concentration, 0.25 M; temperature, 60°C; stirring speed, 600 rev/min; pH, 4; and reaction time, 15 mins) determined by T. Çalban et al. The selected parameters affecting the solubility were the reaction temperature, the concentration of ammonium chloride, the stirring speed, and the solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on the solubility of Chevreul's salt was examined, and the integrated rate expression for the dissolution was found using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul's salt increases with increasing temperature, ammonium chloride concentration, and stirring speed, whereas it decreases with increasing solid/liquid ratio. Based on the application of the experimental results to the kinetic models, we can deduce that the dissolution rate of Chevreul's salt is controlled by diffusion through the ash (product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, including the effects of the parameters on the solubility of Chevreul's salt, is $1 - 3(1-X)^{2/3} + 2(1-X) = \left[2.96\times10^{13}\, C_A^{3.08}\, (S/L)^{-0.38}\, W^{1.23}\, e^{-9001.2/T}\right] t$.
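
As a quick numerical check, the sketch below inverts the integrated shrinking-core expression to obtain the conversion X as a function of time; the parameter values and units are illustrative assumptions, since the abstract does not fully specify them.

```python
import numpy as np
from scipy.optimize import brentq

def g(X):  # integrated rate expression for diffusion through the product layer
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

def k(C_A, S_L, W, T):  # rate constant from the reported correlation
    return 2.96e13 * C_A**3.08 * S_L**-0.38 * W**1.23 * np.exp(-9001.2 / T)

# Illustrative conditions; units follow the original correlation (not stated here).
rate = k(C_A=0.1, S_L=0.1, W=60, T=283.15)
for t in (1.0, 3.0, 5.0):
    kt = rate * t
    X = 1.0 if kt >= 1 else brentq(lambda x: g(x) - kt, 0.0, 1.0)
    print(f"t = {t:4.1f} -> X = {X:.3f}")
```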

Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics

Procedia PDF Downloads 308
5467 Analysis of Capillarity Phenomenon Models in Primary and Secondary Education in Spain: A Case Study on the Design, Implementation, and Analysis of an Inquiry-Based Teaching Sequence

Authors: E. Cascarosa-Salillas, J. Pozuelo-Muñoz, C. Rodríguez-Casals, A. de Echave

Abstract:

This study focuses on improving the understanding of the capillarity phenomenon among Primary and Secondary Education students. Although capillarity is a common concept in daily life and is covered in various subjects, students' comprehension of it remains limited. This work explores inquiry-based teaching methods to build a conceptual foundation of capillarity by examining the forces involved. The study adopts an inquiry-based teaching approach, supported by research emphasizing the importance of modeling in science education. Scientific modeling aids students in applying knowledge across varied contexts and developing systemic thinking, allowing them to construct scientific models applicable to everyday situations. This methodology fosters the development of scientific competencies such as observation, hypothesis formulation, and communication. The research was structured as a case study with activities designed for Spanish Primary and Secondary Education students aged 9 to 13. The process included curriculum analysis, the design of an activity sequence, and its implementation in classrooms. Implementation began with questions that students needed to resolve using available materials, encouraging observation, experimentation, and the re-contextualization of the activities to everyday phenomena where capillarity is observed. Data collection tools included audio and video recordings of the sessions, which were transcribed and analyzed alongside the students' written work. Students' drawings of capillarity were also collected and categorized. Qualitative analyses of the activities showed that, through inquiry, students managed to construct various models of capillarity, reflecting an improved understanding of the phenomenon. Initial activities allowed students to express prior ideas and formulate hypotheses, which were then refined and expanded in subsequent sessions. The generalization and use of graphical representations of their ideas on capillarity, analyzed alongside their written work, enabled the categorization of the capillarity models into three levels. Intuitive model: a visual and straightforward representation without explanations of how or why capillarity occurs; simple symbolic elements, such as arrows indicating the water rising, are used without detailed or causal understanding, reflecting an initial, immediate perception of the phenomenon, interpreted as something that happens "on its own" without reaching the microscopic level. Explanatory intuitive model: students begin to incorporate causal explanations, though still limited and without complete scientific accuracy; they represent the role of materials and use basic terms such as 'absorption' or 'attraction' to describe the rise of water, showing a more complex understanding in which the phenomenon is not only observed but also partially explained in terms of interaction, though without microscopic detail. School scientific model: this model reflects a more advanced and detailed understanding; students represent the phenomenon using specific scientific concepts like 'surface tension', 'cohesion', and 'adhesion', including structured explanations connecting the microscopic and macroscopic levels; at this level, students model the phenomenon as a coherent system, demonstrating how various forces or properties interact in the capillarity process, with representations at the microscopic level. The study demonstrated that the capillarity phenomenon can be effectively approached in class through the experimental observation of everyday phenomena, explained through guided inquiry learning. The methodology facilitated students' construction of capillarity models and served to analyze an interaction phenomenon involving different forces acting at the microscopic level.

Keywords: capillarity, inquiry-based learning, scientific modeling, primary and secondary education, conceptual understanding, drawing analysis

Procedia PDF Downloads 15
5466 Kýklos Dimensional Geometry: Entity Specific Core Measurement System

Authors: Steven D. P Moore

Abstract:

A novel method referred to as Kýklos (Ky) dimensional geometry is proposed as an entity-specific core geometric dimensional measurement system. Ky geometric measures can construct scaled multi-dimensional models using regular and irregular sets in IRn. This entity-specific geometric measurement system shares similarities with fractal methods, in which a 'fractal transformation operator' is applied to a set S to produce a union of N copies. Kýklos inputs use 1D geometry as a core measure. One-dimensional inputs include the radius interval of a circle/sphere or the semiminor/semimajor axis intervals of an ellipse or spheroid. These geometric inputs have finite values that can be measured in SI distance units. The outputs for each interval are divided and subdivided 1D subcomponents whose union equals the interval geometry/length. Setting a limit on the subdivision iterations creates a finite value for each 1D subcomponent. The uniqueness of this method lies in allowing the simplest 1D inputs to define entity-specific subclass geometric core measurements that can also be used to derive length measures. Current methodologies for celestial-based measurement of time, as defined within SI units, fit within this methodology, thus combining spatial and temporal features into geometric core measures. The novel Ky method discussed here offers geometric measures to construct scaled multi-dimensional structures, even models. Ky classes proposed for consideration include celestial and even subatomic classes. The application of this offers incredible possibilities, for example, geometric architecture that can represent scaled celestial models incorporating planets (spheroids) and celestial motion (elliptical orbits).
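
One possible reading of the subdivision scheme, given purely as an illustration, is a recursive n-way split of a 1D core input with a fixed iteration limit; the choice of equal splits and of n is an assumption not fixed by the abstract.

```python
def subdivide(start, length, n=2, depth=3):
    """Return the leaf subintervals after `depth` n-way subdivisions;
    their union always equals the original interval."""
    if depth == 0:
        return [(start, start + length)]
    step = length / n
    leaves = []
    for i in range(n):
        leaves.extend(subdivide(start + i * step, step, n, depth - 1))
    return leaves

radius = 1.0  # 1D core input, e.g. the radius interval of a sphere
parts = subdivide(0.0, radius)
print(len(parts), sum(b - a for a, b in parts))  # 8 finite leaves, union = 1.0
```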

Keywords: Kyklos, geometry, measurement, celestial, dimension

Procedia PDF Downloads 166
5465 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of the computational load among the MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
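
A minimal mpi4py sketch of the pattern described, with agents partitioned across MPI processes and non-blocking messages overlapped with local computation, is shown below; the agent logic is a trivial placeholder, not the actual ABEM.

```python
# Run with, e.g.: mpirun -n 4 python abem_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_AGENTS = 1_000_000
local_agents = range(rank, N_AGENTS, size)  # cyclic partition of agent ids

# Post non-blocking send/receive for cross-partition interactions.
neighbor = (rank + 1) % size
send_req = comm.isend({"orders": len(local_agents)}, dest=neighbor, tag=0)
recv_req = comm.irecv(source=(rank - 1) % size, tag=0)

# Overlap: update local agents while the messages are in flight.
local_wealth = sum(a % 100 for a in local_agents)

incoming = recv_req.wait()  # complete communication after local work
send_req.wait()
print(f"rank {rank}: {len(local_agents)} agents, received {incoming}")
```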

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 128
5464 Climate Changes in Albania and Their Effect on Cereal Yield

Authors: Lule Basha, Eralda Gjika

Abstract:

This study analyzes climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when modeling cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to model cereal yield as a function of the independent variables: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the residuals follow a normal distribution, and the correlation between factors is low, so multicollinearity is not a problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in predicting cereal yield. We found that changes in average temperature negatively affect cereal yield, while the coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that random forest is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
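
The modelling pipeline described above can be sketched with scikit-learn. The synthetic data below merely stands in for the 1960-2021 Albanian series, which is not reproduced here, and the column ordering is an illustrative assumption.

import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 62  # one observation per year, 1960-2021
# Columns: temperature, rainfall, fertilizer, arable land, N2O emissions
X = rng.random((n, 5))
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(0, 0.1, n)  # synthetic yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LinearRegression(), Lasso(alpha=0.01),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_tr, y_tr)
    # R² on held-out data, the kind of accuracy comparison reported above
    print(type(model).__name__, round(model.score(X_te, y_te), 3))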

Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest

Procedia PDF Downloads 92
5463 A Comparison of Direct Water Injection with Membrane Humidifier for Proton Exchange Membrane Fuel Cells Humification

Authors: Flavien Marteau, Pedro Affonso Nóbrega, Pascal Biwole, Nicolas Autrusson, Iona De Bievre, Christian Beauger

Abstract:

Effective water management is essential for the optimal performance of fuel cells. For this reason, many vehicle systems use a membrane humidifier, a passive device that humidifies the air before the cathode inlet. Although they offer good performance, humidifiers are voluminous, costly, and fragile, hence the desire to find an alternative. Direct water injection (DWI) could be an option, although this method lacks maturity. It consists of injecting liquid water as a spray into the dry heated air coming out of the compressor. This work focuses on the evaluation of direct water injection and its performance compared to a membrane humidifier selected as a reference. Two architectures were experimentally tested to humidify an industrial 2 kW short stack made up of 20 cells of 150 cm² each. For the reference architecture, the inlet air is humidified with a commercial membrane humidifier. For the direct water injection architecture, a pneumatic nozzle was selected to generate a fine spray in the air flow, with a Sauter mean diameter of about 20 μm. Initial performance was compared over the entire current range based on polarisation curves. Then, the influence of various parameters impacting water management was studied, such as the temperature, the gas stoichiometry, and the water injection flow rate. The experimental results confirm the possibility of humidifying the fuel cell using direct water injection. This study, however, shows the limits of this humidification method, the mean cell voltage being significantly lower with direct water injection than with the membrane humidifier under some operating conditions. The voltage drop reaches 30 mV per cell (4%) at 1 A/cm² (1.8 bara, 80 °C) and increases under more demanding humidification conditions. It is noteworthy that the heat of compression available is not enough to evaporate all the injected liquid water in the case of DWI, resulting in a mix of liquid and vapour water entering the fuel cell, whereas only vapour is present with the humidifier. Variation of the injection flow rate shows that part of the injected water does not contribute to humidification and seems to cross the channels without reaching the membrane. The stack was successfully humidified by direct water injection. Nevertheless, our work shows that its implementation requires substantial adaptations and may reduce fuel cell stack performance compared to conventional membrane humidifiers, although opportunities for optimisation have been identified.
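
The evaporation limit noted above can be checked with a back-of-the-envelope energy balance. All operating numbers below (air flow, compressor outlet temperature) are illustrative assumptions, not measurements from this study.

# Sensible heat available from cooling the compressed air to stack temperature,
# compared with the latent heat needed to evaporate the injected spray.
cp_air = 1005.0            # J/(kg*K), specific heat of air
h_vap = 2.26e6             # J/kg, latent heat of vaporisation of water (approx.)
m_dot_air = 2.0e-3         # kg/s, assumed cathode air flow for a 2 kW stack
T_compressor_out = 150.0   # deg C, assumed compressor outlet temperature
T_cell = 80.0              # deg C, stack operating temperature

q_available = m_dot_air * cp_air * (T_compressor_out - T_cell)  # W
m_dot_water_max = q_available / h_vap  # kg/s evaporable with this heat
print(f"evaporable water: {m_dot_water_max * 1e3:.3f} g/s")
# If the injected flow exceeds this rate, liquid water necessarily enters the
# stack, consistent with the mixed liquid/vapour inlet reported for DWI.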

Keywords: cathode humidification, direct water injection, membrane humidifier, proton exchange membrane fuel cell

Procedia PDF Downloads 44
5462 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of which types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns for everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either mild cognitive impairment (MCI) or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). A total of 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). In total, 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03). However, after controlling for depression, only two specific complaints, remembering appointments, meetings, and engagements and understanding spoken directions and instructions, remained associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion, and that specific complaints in the Everyday Memory and Language domains are associated with decline in both episodic memory and executive function. Increased monitoring and treatment of individuals reporting these specific complaints may be warranted.
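
The survival-analysis setup described above can be sketched with the lifelines library. The synthetic data below merely stands in for the real ECog responses; the sample size and follow-up range echo the abstract, but all column names and values are illustrative.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 415
df = pd.DataFrame({
    "years_followed": rng.exponential(5.0, n).clip(0.8, 13.8),
    "converted": rng.integers(0, 2, n),          # 1 = progressed to MCI/dementia
    "ecog_memory_item": rng.integers(1, 5, n),   # self-reported concern rating
    "depression_score": rng.normal(5, 2, n),     # covariate controlled for
})

# Fit a Cox proportional hazards model of time to diagnostic conversion
cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="converted")
cph.print_summary()  # hazard ratios per covariate, e.g. HR for the ECog item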

Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 80
5461 Pedagogical Technologies of Teaching Natural Geography

Authors: Mirzahmedov Ismoiljon Karimjon Ugli, Juraeva Shakhnoza Abdumalik Kizi

Abstract:

The article addresses current scientific problems of natural geography related to the development of new pedagogical technologies and their implementation in the educational process. The recommended interactive methods are considered very effective for independent study and particularly useful for students who work on their own. The demand of today's education system is to develop young people into talented, intelligent, innovative, mature, and well-rounded individuals. Creating and completing tables of varied content, for example, reveals a student's talent and inclination for innovation. The article also presents techniques and methods relevant to today's students and highlights the teacher's role in conducting meaningful lessons, the suitability of the chosen method for a given lesson, factors affecting the quality of education, and the use of methods tailored to the specific features of natural geography.

Keywords: teaching methods, educational process, educational technologies, education, problem, didactics, natural geography

Procedia PDF Downloads 66