Search results for: closed-list proportional representation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1713

453 Perceptual and Ultrasound Articulatory Training Effects on English L2 Vowels Production by Italian Learners

Authors: I. Sonia d’Apolito, Bianca Sisinni, Mirko Grimaldi, Barbara Gili Fivela

Abstract:

The American English contrast /ɑ-ʌ/ (cop-cup) is difficult for Italian learners to produce, since they realize L2-/ɑ-ʌ/ as L1-/ɔ-a/ respectively, due to differences in the phonetic-phonological systems and in grapheme-to-phoneme conversion rules. In this paper, we address the following research questions: Can a short training improve the production of English /ɑ-ʌ/ by Italian learners? Is a perceptual training better than an articulatory (ultrasound, US) training? We therefore compare a perceptual training with a US articulatory one to observe: 1) the effects of short trainings on L2-/ɑ-ʌ/ productions; 2) whether the US articulatory training improves pronunciation more than the perceptual training. In this pilot study, 9 Salento-Italian monolingual adults participated: 3 subjects performed a 1-hour perceptual training (ES-P); 3 subjects performed a 1-hour US training (ES-US); and 3 control subjects did not receive any training (CS). Verbal instructions about the phonetic properties of L2-/ɑ-ʌ/ and L1-/ɔ-a/ and their differences (representation on the F1-F2 plane) were provided during both trainings. After these instructions, the ES-P group performed an identification training based on the High Variability Phonetic Training procedure, while the ES-US group performed the articulatory training by means of US videos of tongue gestures in L2-/ɑ-ʌ/ production and a dynamic view of their own tongue movements and position using a probe under their chin. The acoustic data were analyzed and the first three formants were calculated. Independent t-tests were run to compare: 1) /ɑ-ʌ/ in the pre- vs. post-test respectively; 2) /ɑ-ʌ/ in the pre- and post-test vs. L1-/a-ɔ/ respectively. Results show that in the pre-test all speakers realized L2-/ɑ-ʌ/ as L1-/ɔ-a/ respectively.
Contrary to the CS and ES-P groups, the ES-US group in the post-test differentiated the L2 vowels both from those produced in the pre-test and from the L1 vowels, although only one ES-US subject produced both L2 vowels accurately. The articulatory training thus seems more effective than the perceptual one, since it shifts productions in the direction of the correct L2 vowels and away from the similar L1 vowels.
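The pre- vs. post-test comparison step can be sketched with an independent (Welch's) t statistic on formant values; the F1 data below are hypothetical, not the study's measurements:

```python
import numpy as np

def welch_t(a, b):
    """Welch's independent-samples t statistic for two formant samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# Hypothetical F1 values (Hz) for /ʌ/ tokens in pre- vs. post-test
pre_f1  = [650, 662, 641, 655, 648]
post_f1 = [700, 713, 695, 708, 690]
t = welch_t(post_f1, pre_f1)   # a large |t| suggests the vowel shifted
```

A significance threshold would then be read from the t distribution at the Welch-adjusted degrees of freedom.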

Keywords: L2 vowel production, perceptual training, articulatory training, ultrasound

Procedia PDF Downloads 256
452 Identifying Large-Scale Photovoltaic and Concentrated Solar Power Hot Spots: Multi-Criteria Decision-Making Framework

Authors: Ayat-Allah Bouramdane

Abstract:

Solar Photovoltaic (PV) and Concentrated Solar Power (CSP) plants do not burn fossil fuels and release no greenhouse gases while generating electricity; they could therefore meet the world's needs for low-carbon power generation. The power output of a solar PV module or CSP collector is proportional to the temperature and the amount of solar radiation received by its surface. Hence, determining the most suitable locations for PV and CSP systems is crucial to maximizing their output power. This study aims to provide a hands-on and plausible approach to the multi-criteria evaluation of site suitability of PV and CSP plants using a combination of Geographic Referenced Information (GRI) and the Analytic Hierarchy Process (AHP). The GRI-based AHP approach is used to specify the criteria and sub-criteria; to identify the unsuitable, low-, moderately, highly, and very highly suitable areas for each GRI layer; to perform the pairwise comparisons at each level of the hierarchy based on experts' knowledge; and to calculate the weights with AHP so as to create the final suitability map for solar PV and CSP plants in Morocco, with a particular focus on the Dakhla city. The results confirm that solar irradiation is the main decision factor for integrating these technologies into Morocco's energy policy goals, but they explicitly account for other factors that can not only limit the potential of certain locations but even exclude them, with Dakhla itself classified as an unsuitable area. We discuss the sensitivity of PV and CSP site suitability to different aspects, such as the methodology, the climate conditions, and the technology used for each source, and provide final recommendations for the Moroccan energy strategy by analyzing whether actual Moroccan PV and CSP installations are located within areas deemed suitable and by discussing several cases that provide mutual benefits across the Food-Energy-Water nexus.
The adopted methodology and the resulting suitability map could be used by researchers or engineers to provide helpful information for decision-makers in terms of site selection, design, and planning of future solar plants, especially in areas suffering from energy shortages such as Dakhla, which is now one of Africa's most promising investment hubs and is especially attractive to investors looking to root their operations in Africa and export to European markets.
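The AHP weighting step described above can be sketched as follows; the three criteria and the judgment values in the pairwise comparison matrix are hypothetical, not those elicited from the study's experts:

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over three criteria
# (e.g. solar irradiation, terrain slope, distance to grid), Saaty 1-9 scale
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: normalized principal eigenvector of A
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Consistency ratio CR = ((lambda_max - n) / (n - 1)) / RI, with RI(3) = 0.58;
# judgments are usually accepted when CR < 0.1
lam_max = np.max(np.real(vals))
CR = ((lam_max - 3) / 2) / 0.58
```

With these judgments the first criterion dominates, mirroring the finding that solar irradiation is the main decision factor.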

Keywords: analytic hierarchy process, concentrated solar power, Dakhla, geographic referenced information, Morocco, multi-criteria decision-making, photovoltaic, site suitability

Procedia PDF Downloads 178
451 Screening Diversity: Artificial Intelligence and Virtual Reality Strategies for Elevating Endangered African Languages in the Film and Television Industry

Authors: Samuel Ntsanwisi

Abstract:

This study investigates the transformative role of Artificial Intelligence (AI) and Virtual Reality (VR) in the preservation of endangered African languages. The study is contextualized within the film and television industry, highlighting disparities in screen representation for certain languages in South Africa and underscoring the need for increased visibility and preservation efforts. With globalization and cultural shifts posing significant threats to linguistic diversity, this research explores approaches to language preservation. By leveraging AI technologies such as speech recognition, translation, and adaptive learning applications, and by integrating VR for immersive and interactive experiences, the study aims to create a framework for teaching and passing on endangered African languages. Through digital documentation, interactive language learning applications, storytelling, and community engagement, the research demonstrates how these technologies can empower communities to revitalize their linguistic heritage. This study employs a dual-method approach, combining a rigorous literature review that analyses existing research on the convergence of AI, VR, and language preservation with primary data collected through interviews and surveys with ten filmmakers. The literature review establishes a solid foundation for understanding the current landscape, while the interviews with filmmakers provide crucial real-world insights, enriching the study's depth. This balanced methodology ensures a comprehensive exploration of the intersection between AI, VR, and language preservation, offering both theoretical insights and practical perspectives from industry professionals.

Keywords: language preservation, endangered languages, artificial intelligence, virtual reality, interactive learning

Procedia PDF Downloads 61
450 Layer-Level Feature Aggregation Network for Effective Semantic Segmentation of Fine-Resolution Remote Sensing Images

Authors: Wambugu Naftaly, Ruisheng Wang, Zhijun Wang

Abstract:

Models based on convolutional neural networks (CNNs), in conjunction with Transformers, have excelled in semantic segmentation, a fundamental task for intelligent Earth observation using remote sensing (RS) imagery. Nonetheless, tokenization in the Transformer model undermines object structures and neglects local information within patches, whereas CNNs are unable to model global semantics due to the local nature of their convolutional operations. Integrating the two methodologies facilitates effective global-local feature aggregation and interaction, potentially enhancing segmentation results. Inspired by the merits of CNNs and Transformers, we introduce a layer-level feature aggregation network (LLFA-Net) to address semantic segmentation of fine-resolution remote sensing (FRRS) images for land cover classification. The simple yet efficient system employs a transposed unit that hierarchically utilizes dense high-level semantics and rich spatial information from the various encoder layers through a layer-level feature aggregation module (LLFAM) and models global contexts using structured Transformer blocks. The decoder then aggregates the resulting features to generate a rich semantic representation. Extensive experiments on two public land cover datasets demonstrate that our proposed framework achieves competitive performance relative to the most recent semantic segmentation frameworks.

Keywords: land cover mapping, semantic segmentation, remote sensing, vision transformer networks, deep learning

Procedia PDF Downloads 4
449 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

Alongside ground measurements of solar radiation, satellite data have routinely been utilized to estimate solar energy; however, the temporal coverage of satellite data has some limits. A reanalysis, also known as a "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa across the ten-year period from 2011 to 2020. To investigate seasonal change in dataset performance, a seasonal statistical analysis was conducted, which showed considerable differences in the errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: monthly mean values indicate better performance, but accuracy at finer resolutions is degraded. Solar resource assessment and power estimation using the ERA-5 solar radiation data are discussed. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R²) varies from 0.93 to 0.99 over the study period. The objective of this research is to provide a reliable representation of the world's solar radiation to aid the use of solar energy in all sectors.
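The validation statistics named above (MBE, RMSE, MAE, R²) can be computed as sketched below; the GSR values are illustrative, not the study's data:

```python
import numpy as np

def validation_stats(obs, est):
    """MBE, RMSE, MAE and R^2 between ground observations and reanalysis estimates."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    err = est - obs
    mbe = err.mean()                         # systematic over/under-estimation
    rmse = np.sqrt((err ** 2).mean())        # penalizes large errors
    mae = np.abs(err).mean()                 # typical error magnitude
    ss_res = (err ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot                 # agreement with observed variability
    return mbe, rmse, mae, r2

# Hypothetical daily GSR (kWh/m^2): ground measurements vs. ERA-5 estimates
obs = [5.1, 6.3, 7.0, 6.8, 5.9, 4.7]
est = [5.3, 6.1, 7.2, 6.9, 6.1, 4.9]
mbe, rmse, mae, r2 = validation_stats(obs, est)
```

In practice these statistics are often normalized by the mean observed value before being compared across stations.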

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 98
448 Investigating Students' Understanding about Mathematical Concept through Concept Map

Authors: Rizky Oktaviana

Abstract:

The main purpose of learning lies in improving students' understanding. Teachers usually use written tests to measure students' understanding of learning material, especially in mathematics. This common method has a weakness: in mathematics, a written test only shows the procedural steps used to solve problems. Teachers are therefore unable to see whether students actually understand mathematical concepts and the relations between them. One of the best tools for observing students' understanding of mathematical concepts is the concept map. The goal of this research is to describe junior high school students' understanding of mathematical concepts through concept maps, based on differences in mathematical ability. There were three steps in this research. The first step was choosing the research subjects by giving students a mathematical ability test; the subjects are three students with different levels of mathematical ability: high, intermediate, and low. The second step was giving concept-mapping training to the chosen subjects. The last step was giving the subjects a concept-mapping task about functions. Nodes representing concepts of function were provided in the task, and the subjects had to use these nodes in their concept maps. Based on the data analysis, the results show that the subject with high mathematical ability has formal understanding: that subject could see the connections between concepts of function and arranged the concepts into a concept map with a valid hierarchy. The subject with intermediate mathematical ability has relational understanding: that subject could arrange all the given concepts and gave appropriate labels between concepts, though the labels did not yet represent the connections specifically.
The subject with low mathematical ability, by contrast, has a poor understanding of functions, as can be seen from a concept map that uses only a few of the given concepts, because the subject could not see the connections between concepts. All subjects have instrumental understanding of the relations between the linear function concept, the quadratic function concept, and domain, codomain, and range.

Keywords: concept map, concept mapping, mathematical concepts, understanding

Procedia PDF Downloads 271
447 An Integrated Label Propagation Network for Structural Condition Assessment

Authors: Qingsong Xiong, Cheng Yuan, Qingzhao Kong, Haibei Xiong

Abstract:

Deep-learning-driven approaches based on vibration responses have attracted considerable attention in rapid structural condition assessment, while obtaining sufficient measured training data with corresponding labels is costly and often inaccessible in practical engineering. This study proposes an integrated label propagation network for structural condition assessment, which is able to diffuse labels from continuously generated measurements of the intact structure to unlabeled damage scenarios. The integrated network embeds damage-sensitive feature extraction by a deep autoencoder and pseudo-label propagation by optimized fuzzy clustering, whose architecture and mechanism are elaborated. With a sophisticated network design and specific strategies for improving performance, the present network extends the strengths of self-supervised representation learning, unsupervised fuzzy clustering, and supervised classification algorithms into an integrated framework for assessing damage conditions. Both numerical simulations and full-scale laboratory shaking-table tests of a two-story building structure were conducted to validate its capability of detecting post-earthquake damage. The identification accuracy of the present network was 0.95 in the numerical validations and 0.86 on average in the laboratory case studies. It should be noted that the whole training procedure of all models involved in the network relies on no labeled data from damage scenarios, only on several samples of the intact structure, which indicates a significant advantage in model adaptability and feasible applicability in practice.
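A minimal sketch of the pseudo-labeling idea, assuming damage-sensitive features have already been extracted by the autoencoder: plain fuzzy c-means on toy 2-D features, with pseudo-labels taken as the maximum-membership cluster (this is a generic fuzzy c-means, not the study's optimized clustering):

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns membership matrix U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # fuzzy-weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                    # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical 2-D latent features: intact-structure samples near (0, 0),
# damage-scenario samples near (5, 5); pseudo-label = argmax membership
X = np.array([[0.1, 0.2], [0.0, -0.1], [0.2, 0.1],
              [5.1, 4.9], [4.8, 5.2], [5.0, 5.1]])
U, _ = fuzzy_cmeans(X)
pseudo = U.argmax(axis=1)
```

The soft memberships in `U` also indicate how confidently each sample can be propagated a label.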

Keywords: autoencoder, condition assessment, fuzzy clustering, label propagation

Procedia PDF Downloads 97
446 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record

Authors: Raghavi C. Janaswamy

Abstract:

In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real-world data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate node embeddings and eventually performing node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.
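The propagation step underlying GNN node classification can be illustrated without the PyG stack. Below is a numpy sketch of a single GCN-style aggregation over a toy patient graph; the graph, features, and weights are invented for illustration and do not reflect the Synthea schema:

```python
import numpy as np

# Toy EHR graph: 4 patient nodes, undirected edges between patients
# sharing a diagnosis (hypothetical structure)
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

A_hat = A + np.eye(n)                          # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)     # symmetric degree normalization
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # normalized adjacency

X = np.array([[1.0, 0.0], [0.9, 0.1],          # per-node features: soft evidence
              [0.1, 0.9], [0.0, 1.0]])         # for two candidate conditions
W = np.eye(2)                                  # identity weights, for illustration
H = A_norm @ X @ W                             # one GCN-style propagation step
pred = H.argmax(axis=1)                        # predicted condition per node
```

In PyG the same aggregation would be learned end-to-end (e.g. stacked graph-convolution layers with trained weight matrices) rather than applied once with fixed weights.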

Keywords: electronic health record, graph neural network, heterogeneous data, prediction

Procedia PDF Downloads 86
445 Bayesian System and Copula for Event Detection and Summarization of Soccer Videos

Authors: Dhanuja S. Patil, Sanjay B. Waykar

Abstract:

Event detection is one of the key components of many kinds of applications built on video data systems, and it has recently attracted extensive interest from practitioners and academics in different areas. While video event detection has been the subject of broad study, considerably fewer existing methodologies have considered multi-modal data and efficiency-related issues. During soccer matches, various doubtful situations arise that cannot easily be judged by the referee committee. A framework that objectively checks image sequences would prevent wrong interpretations caused by errors or by the high velocity of events. Bayesian networks provide a structure for dealing with this uncertainty, using a simple graphical structure together with probability calculus. We propose an efficient framework for analysis and summarization of soccer videos using object-based features. The proposed work utilizes the t-cherry junction tree, a very recent advancement in probabilistic graphical models, to create a compact representation and a good approximation of an otherwise intractable model. This approach has several advantages: first, the t-cherry structure gives the best approximation within the class of junction trees; second, constructing a t-cherry junction tree can be largely parallelized; and finally, inference can be performed using distributed computation. Experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed work over a comprehensive data set comprising several soccer videos captured at different places.
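The way a Bayesian network turns uncertain cues into an event decision can be sketched with a toy two-cue model; the cues and the conditional probabilities below are invented for illustration and are not taken from the proposed system:

```python
# Toy Bayesian model for detecting a "goal" event from two observable cues
# (hypothetical conditional probability tables)
P_GOAL = 0.05                                 # prior probability of a goal
P_CHEER = {"goal": 0.9, "no_goal": 0.2}       # P(crowd cheer | event state)
P_NET = {"goal": 0.8, "no_goal": 0.05}        # P(net movement | event state)

def posterior_goal(cheer, net):
    """P(goal | cues), assuming the cues are independent given the event."""
    def likelihood(state):
        pc = P_CHEER[state] if cheer else 1 - P_CHEER[state]
        pn = P_NET[state] if net else 1 - P_NET[state]
        return pc * pn
    num = likelihood("goal") * P_GOAL
    den = num + likelihood("no_goal") * (1 - P_GOAL)
    return num / den
```

A junction-tree method such as the t-cherry variant generalizes exactly this computation to many interdependent variables at once.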

Keywords: summarization, detection, Bayesian network, t-cherry tree

Procedia PDF Downloads 326
444 Effect of Naphtha in Addition to a Cycle Steam Stimulation Process Reducing the Heavy Oil Viscosity Using a Two-Level Factorial Design

Authors: Nora A. Guerrero, Adan Leon, María I. Sandoval, Romel Perez, Samuel Munoz

Abstract:

The addition of solvents to cyclic steam stimulation is a technique that has shown an impact on improved recovery of heavy oils. With this technique, it is possible to reduce the steam/oil ratio in the last stages of the process, at which point this ratio increases significantly. The mobility of the upgraded crude oil increases due to structural changes in its components, which are reflected in decreases in density and viscosity. In the present work, the effects of temperature, time, and weight percentage of naphtha were evaluated using a 2³ factorial design of experiments. From the analysis of variance (ANOVA) and a Pareto diagram, it was possible to identify each factor's effect on viscosity reduction. The experimental representation of the crude-steam-naphtha interaction was carried out in a batch reactor on a Colombian heavy oil of 12.8° API and 3500 cP. The temperature, reaction time, and naphtha percentage were 270-300 °C, 48-66 hours, and 3-9% by weight, respectively. The results showed a decrease in density, with values in the range of 0.9542 to 0.9414 g/cm³, while the viscosity decrease was on the order of 55 to 70%. On the other hand, simulated distillation results, according to ASTM D7169, revealed significant conversion of the 315 °C+ fraction. From the spectroscopic techniques of nuclear magnetic resonance (NMR), infrared (FTIR), and ultraviolet-visible (UV-VIS) spectroscopy, it was determined that the increased yield of light fractions in the upgraded crude is due to the breakdown of alkyl chains. The methodology for cyclic steam injection with naphtha and laboratory-scale characterization can be considered a practical tool in improved recovery processes.
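The main-effect computation behind a 2³ factorial design can be sketched as follows; the eight response values are hypothetical, not the study's measurements:

```python
import numpy as np

# Coded design matrix for a 2^3 factorial: temperature, time, naphtha wt%
# (-1 = low level: 270 C, 48 h, 3 wt%; +1 = high level: 300 C, 66 h, 9 wt%)
levels = np.array([[t, h, w] for t in (-1, 1) for h in (-1, 1) for w in (-1, 1)])

# Hypothetical viscosity reduction (%) for the eight runs, in the same order
y = np.array([55, 58, 57, 61, 62, 66, 64, 70])

# Main effect of each factor: mean response at +1 minus mean response at -1
effects = {name: y[levels[:, k] == 1].mean() - y[levels[:, k] == -1].mean()
           for k, name in enumerate(["temperature", "time", "naphtha"])}
```

A Pareto diagram of the ANOVA then ranks these effects (and their interactions) by magnitude to show which factors dominate viscosity reduction.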

Keywords: viscosity reduction, cyclic steam stimulation, factorial design, naphtha

Procedia PDF Downloads 175
443 Sustainable Hydrogen Generation via Gasification of Pig Hair Biowaste with NiO/Al₂O₃ Catalysts

Authors: Jamshid Hussain, Kuen Song Lin

Abstract:

Over one thousand tons of pig hair biowaste (PHB) are produced yearly in Taiwan. The improper disposal of PHB can have a negative impact on the environment and contribute to the spread of diseases, so its treatment has become a major environmental and economic challenge. Innovative treatments must be developed because of the heavy metal and sulfur content of PHB. Like most organic materials, PHB is composed of many organic volatiles that contain large amounts of hydrogen. Hydrogen gas can be effectively produced by the catalytic gasification of PHB in a laboratory-scale fixed-bed gasifier, employing a 15 wt% NiO/Al₂O₃ catalyst at 753–913 K. The derived kinetic parameters were obtained and refined using simulation calculations. FE–SEM micrographs showed that the NiO/Al₂O₃ catalyst particles are spherical or irregularly shaped with diameters of 10–20 nm. HR–TEM showed that the fresh Ni particles were evenly dispersed and uniform within the microstructure of the Al₂O₃ support. The sizes of the NiO nanoparticles were vital in determining catalyst activity. The pre-edge XANES spectra of the NiO/Al₂O₃ catalysts exhibited a weak absorbance for the 1s to 3d transition, which is forbidden by the selection rule for an ideal octahedral symmetry. Similarly, the populations of Ni(II) and Ni(0) on the Al₂O₃ support are proportional to the strength of the 1s to 4pxy transition. The weak shoulder at 8329–8334 eV and a strong feature at 8345–8353 eV were ascribed to the 1s to 4pxy transition, suggesting the presence of NiO species on the Al₂O₃ support during PHB catalytic gasification. As determined by the XANES analyses, Ni(II)→Ni(0) reduction was mostly observed. The oxidation of PHB on the NiO/Al₂O₃ surface may have resulted in Ni(0) and the formation of tar during the gasification process. The EXAFS spectra revealed Ni atoms with Ni–Ni/Ni–O bonds.
The Ni–O bonding proved that the produced syngas was unable to reduce NiO to Ni(0) completely. The weakness of the Ni–Ni bonds may have been caused by the high dispersion of Ni in the Al₂O₃ support. The central Ni atoms have Ni–O (2.01 Å) and Ni–Ni (2.34 Å) bond distances in the fresh NiO/Al₂O₃ catalyst. The PHB was converted into hydrogen-rich syngas (CO + H₂, >89.8% on a dry basis). When PHB (250 kg h⁻¹) was catalytically gasified at 753–913 K, syngas was produced with approximately 5.45 × 10⁵ kcal h⁻¹ of heat recovery at 76.5%–83.5% cold gas efficiency. The simulation of pilot-scale PHB catalytic gasification demonstrated that the system could provide hydrogen (purity > 99.99%) and generate electricity for a 100 kW internal combustion engine and a 175 kW proton exchange membrane fuel cell (PEMFC). The projected payback for a PHB catalytic gasification plant with a capacity of 10 or 20 TPD (tons per day) was around 3.2 or 2.5 years, respectively.

Keywords: pig hair biowaste, catalytic gasification, hydrogen production, PEMFC, resource recovery

Procedia PDF Downloads 13
442 User Expectations and Opinions Related to Campus Wayfinding and Signage Design: A Case Study of Kastamonu University

Authors: Güllü Yakar, Adnan Tepecik

Abstract:

A university campus resembles an independent city spread over a wide area. Campuses that take in thousands of new domestic and international users at the beginning of every academic period also host scientific, cultural, and sporting events, in addition to serving regular users such as students and staff. Wayfinding and signage systems are necessary for the regulation of vehicular traffic, and they enable users to navigate without losing time or feeling anxiety. While designing such a system or testing its functionality, the opinions of existing users or the likely behaviors of typical user profiles (personas) provide designers with insight. The purpose of this study is to identify the wayfinding attitudes and expectations of the users of Kastamonu University's Kuzeykent Campus. The study applies a mixed method in which a questionnaire developed by the researcher constitutes the quantitative phase. The survey was carried out with 850 participants, who filled in a questionnaire form tested for construct validity using Exploratory Factor Analysis. In interpreting the data obtained, Chi-Square, T-Test, and ANOVA analyses were applied, as well as descriptive analyses such as frequency (f) and percentage (%) values. The results of this survey, conducted before systematic wayfinding signs were installed on campus, reveal the participants' expectations for the addition of floor plans and wayfinding signs indoors, maps outdoors, symbols and color codes on the existing signs, and adequate arrangements for visually impaired users. The fact that there is a directly proportional relation between length of institutional membership and wayfinding competency within the campus leads to the conclusion that newcomers in particular need wayfinding signs.
In order to determine the effectiveness of the campus-wide wayfinding system implemented after the survey, and to identify users' further expectations in this respect, a semi-structured interview form was developed by the researcher and the assessments of 20 participants were compiled. Subjected to content analysis, these data constitute the qualitative dimension of the study. The results indicate that despite the presence of the signs, participants experienced either inability or stress while finding their way, showed a tendency to ask others for help, and needed outdoor maps and signs, in addition to larger text.
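The kind of Chi-Square analysis applied to the survey data can be sketched on a hypothetical contingency table; the counts below are invented, not drawn from the 850-participant data:

```python
import numpy as np

def chi_square(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    table = np.asarray(table, float)
    # Expected counts under independence of rows and columns
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# Hypothetical counts: rows = length of institutional membership (<1 yr, >=1 yr),
# columns = self-rated wayfinding competency (low, high)
obs = [[60, 40],
       [30, 70]]
stat = chi_square(obs)   # compare against the critical value for df = 1
```

With df = (rows-1)(cols-1) = 1, a statistic above 3.84 rejects independence at the 0.05 level, matching the reported relation between membership length and competency.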

Keywords: environmental graphic design, environmental perception, wayfinding and signage design, wayfinding system

Procedia PDF Downloads 237
441 Planning Urban Sprawl in Mining Areas in Africa: How to Ensure Coherent Development

Authors: Pascal Rey, Anaïs Weber

Abstract:

Many mining projects have been developed in Africa over the last decades. Due to the economic opportunities they offer, these projects result in a massive and rapid influx of migrants to the surrounding area. In areas where central government representation is low and the local administration lacks financial resources, urban development is often anarchic and beyond public control. It leads to socio-spatial segregation, insecurity, and a rising risk of social conflict. Aware that their economic development is strongly correlated with the local situation, mining companies are becoming more and more involved in regional planning, setting up tools and strategic direction documents. One of the commonly used tools in this regard is the "Influx Management Plan". It consists of assessing the region's absorption capacities in order to ensure its coherent development, by developing several urban centers rather than one macrocephalic city. It includes many other measures, such as urban governance support, skills transfer, creation of strategic guidelines, financial support (local taxes, mining taxes, development funds, etc.), and local development projects. Through various examples of mining projects in Guinea, a country that hosts many large mining projects, we look at the implications of regional and urban planning in which mining companies, as well as public authorities, are key players. While their investment capacity offers advantages and accelerates development, their actions raise questions about the unilaterality of interests and about local governance. By interfering in public affairs, are mining companies not increasing the risk of central and local governments shirking their responsibilities for regional development, or even calling their legitimacy into question? Is such public-private collaboration really sustainable for the region as a whole and for all stakeholders?

Keywords: Africa, Guinea, mine, urban planning

Procedia PDF Downloads 498
440 Usability Evaluation of Four Big e-Commerce Websites in Indonesia

Authors: Harry B. Santoso, Lia Sadita, Firlia Sandyta, Musa Alfatih, Nove Spalo, Nu'man Naufal, Nuryahya P. Utomo, Putu A. Paramatha, Rezka Aufar Leonandya, Tommy Anugrah, Aulia Chairunisa, M. Fadly Uzzaki, Riandy D. Banimahendra

Abstract:

The number of active Internet users in Indonesia exceeds 88.1 million, and 48% of them are daily active users. Given these numbers, this is a prime opportunity for IT companies to grow their business, especially in e-Commerce. In fact, the growth of e-Commerce companies in Indonesia is proportional to the number of daily active Internet users. This phenomenon shows that competition among e-Commerce companies is intense, which drives many of them to improve their services. The authors hypothesized that one of the best ways to improve the services is to improve their usability, and therefore conducted a study to evaluate the usability of selected e-Commerce websites and find ways to improve it. The authors chose four e-Commerce websites, each with a different business focus and profile, labeled A, B, C, and D. Company A is a fashion-based e-Commerce service with two million desktop visits in Indonesia. Company B is an international online shopping mall for everyday appliances with 48.3 million desktop visits in Indonesia. Company C is a localized online shopping mall with 3.2 million desktop visits in Indonesia. Company D is an online shopping mall with one million desktop visits in Indonesia. The authors used a popular web traffic analytics platform to obtain these numbers. There are several approaches to evaluating the usability of e-Commerce websites; in this study, the authors used the usability testing method supported by the User Experience Questionnaire. This method involved users interacting directly with the services provided by each e-Commerce company. The study was conducted within two months, including preparation, data collection, data analysis, and reporting. The equipment comprised a pair of computers, a screen-capture video application named Smartboard, and the User Experience Questionnaire. A team was built to conduct this study, consisting of one supervisor, two assistants, four facilitators, and four observers.
For each e-Commerce website, three users aged 17-25 years old were invited to perform five task scenarios. Data collected in this study included demographic information of the users, usability testing results, and users’ responses to the questionnaire. Several findings were revealed by the usability testing and the questionnaire. Compared to the other three companies, Company D had the lowest score for user experience. One of the most significant issues identified in the evaluation was that most users reported feeling confused by the user interfaces of these e-Commerce websites. We believe that this study will help e-Commerce companies to improve their services and business in the future.

Keywords: e-commerce, evaluation, usability testing, user experience

Procedia PDF Downloads 317
439 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation

Authors: Zhidong Zhang

Abstract:

This study established a mixed methods model for assessing statistics learning with Bayesian network models. There are three variants of the exploratory sequential design; the one used here comprises three linked steps: qualitative data collection and analysis; development of a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used a scoring model for analysis of variance (ANOVA) as the content domain. The research aim is to examine students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx₁ + γx₂ + ε, served as a cognitive task for collecting data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained from qualitative cognitive analysis. The data from students’ learning of the ANOVA score model provided evidence to the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ ANOVA score model learning were reported. Briefly, this was a mixed methods research design applied to statistics learning assessment. Mixed methods designs open more possibilities for researchers to establish advanced quantitative models initiated with a theory-driven qualitative mode.
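The score model can be illustrated with a small simulation: generate responses from y = α + βx₁ + γx₂ + ε and recover the coefficients by ordinary least squares. This is a generic sketch with invented parameter values, not the study's instrument, and it assumes the model's two regressors are x₁ and x₂:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
alpha, beta, gamma = 2.0, 1.5, -0.8        # assumed "true" values (illustrative)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
eps = rng.normal(scale=0.1, size=n)        # noise term epsilon
y = alpha + beta * x1 + gamma * x2 + eps

X = np.column_stack([np.ones(n), x1, x2])  # design matrix with intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef recovers [alpha, beta, gamma] up to noise
```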

Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis

Procedia PDF Downloads 179
438 Classification on Statistical Distributions of a Complex N-Body System

Authors: David C. Ni

Abstract:

Contemporary models for N-body systems are based on temporal, two-body, mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local-neighborhood lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models mainly target specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane – the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and the interval [-t, t] exhibits various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which make the construction a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
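Although the paper's extended Blaschke functions and its Jenkins-Traub-style solver are not reproduced here, the flavour of iterating a complex-variable map can be sketched on a single classical Blaschke factor, solving B(z) = c by Newton's method; all values below, including a and c, are illustrative:

```python
def blaschke(z, a):
    """Classical Blaschke factor B(z) = (z - a) / (1 - conj(a) z)."""
    return (z - a) / (1 - a.conjugate() * z)

def solve_blaschke(c, a, z0=0j, tol=1e-12, max_iter=50):
    """Solve B(z) = c by Newton's method with a central-difference derivative."""
    z, h = z0, 1e-7
    for _ in range(max_iter):
        f = blaschke(z, a) - c
        if abs(f) < tol:
            break
        df = (blaschke(z + h, a) - blaschke(z - h, a)) / (2 * h)
        z -= f / df
    return z

a = 0.4 + 0.2j          # zero of the factor (illustrative)
c = 0.3 - 0.1j          # target value (illustrative)
z = solve_blaschke(c, a)
# Moebius maps admit a closed-form check: z* = (a + c) / (1 + c * conj(a))
```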

Keywords: blaschke, lorentz transformation, complex variables, continuous, discrete, canonical, classification

Procedia PDF Downloads 309
437 Comparison of E-learning and Face-to-Face Learning Models Through the Early Design Stage in Architectural Design Education

Authors: Gülay Dalgıç, Gildis Tachir

Abstract:

Architectural design studios are the ambiences in which architectural design is realized as a palpable product in architectural education. In the design studio, the information and methods the architect candidate will use to approach the design problem, the solution proposals, etc., are set up together with the studio coordinators. The architectural design process, on the other hand, is complex and uncertain. Candidate architects work in a process that starts with abstract and ill-defined problems. This process starts with the generation of alternative solutions with the help of representation tools, continues with the selection of the appropriate/satisfactory solution from these alternatives, and ends with the creation of an acceptable design/result product. Among the many design and thought relationships evaluated in the studio ambience, the most important step is the early design phase. In the early design phase, the first steps of converting the information are taken, and the converted information is used in the constitution of the first design decisions. This phase, which positively affects the progress of the design process and the constitution of the final product, is more complex and fuzzy than the other phases of the design process. In this context, the aim of the study is to investigate the effects of the face-to-face learning model and the e-learning model on the early design phase. In the study, the early design phase was defined through literature research. The data on the defined early design phase criteria were obtained with feedback graphics created for architect candidates who started with e-learning in the first year of architectural education and continued their education with the face-to-face learning model. The findings were analyzed with a common graphics program.
It is thought that this research will contribute to the establishment of a contemporary architectural design education model by reflecting the evaluation of the data and results on architectural education.

Keywords: education modeling, architecture education, design education, design process

Procedia PDF Downloads 138
436 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially for the large-scale flows in the ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using classical inf-sup stable mixed methods, like Taylor-Hood pairs, to contribute to the analysis and the accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. This approach might require a modification of the meshes, like the use of barycentric-refined grids in the case of Scott--Vogelius pairs. However, such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also conflict with the solver for the linear system.
An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust method could be cast on different types of flow problems which would be considered as future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott-Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.

Keywords: navier-stokes equations in a rotating frame of reference, coriolis force, pressure-robust error estimate, scott-vogelius pairs of finite element spaces

Procedia PDF Downloads 67
435 Alignment and Antagonism in Flux: A Diachronic Sentiment Analysis of Attitudes towards the Chinese Mainland in the Hong Kong Press

Authors: William Feng, Qingyu Gao

Abstract:

Despite the extensive discussions about Hong Kong’s sentiments towards the Chinese Mainland since the sovereignty transfer in 1997, there has been no large-scale empirical analysis of the changing attitudes in the mainstream media, which both reflect and shape sentiments in society. To address this gap, the present study uses an optimised semantic-based automatic sentiment analysis method to examine a corpus of news about China from 1997 to 2020 in three main Chinese-language newspapers in Hong Kong, namely Apple Daily, Ming Pao, and Oriental Daily News. The analysis shows that although the Hong Kong press had a positive emotional tone toward China in general, the overall trend of sentiment became increasingly negative. Meanwhile, alignment and antagonism toward China both increased, providing empirical evidence of attitudinal polarisation in Hong Kong society. Specifically, Apple Daily’s depictions of China became increasingly negative, though with some positive turns before 2008, whilst Oriental Daily News consistently expressed more favourable sentiments. Ming Pao maintained an impartial stance toward China through an increased but balanced representation of positive and negative sentiments, with its subjectivity and sentiment intensity growing to an industry-standard level. The results provide new insights into the complexity of sentiments towards China in the Hong Kong press, and into media attitudes in general in terms of the “us” and “them” positioning, by explicating the cross-newspaper and cross-period variations using an enhanced sentiment analysis method which incorporates sentiment-oriented and semantic role analysis techniques.
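Purely to illustrate the mechanics of diachronic sentiment scoring (the study's semantic-based method and its news corpus are far richer and are not reproduced here), a minimal lexicon-based yearly trend can be sketched; the lexicon, documents, and years below are invented:

```python
# Toy sentiment lexicon (invented for illustration).
POSITIVE = {"growth", "cooperation", "support", "progress"}
NEGATIVE = {"conflict", "protest", "decline", "crisis"}

def sentiment_score(text):
    """(positive hits - negative hits) / token count, in [-1, 1]."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / max(len(tokens), 1)

corpus = {
    1997: "cooperation and growth mark the handover",
    2008: "support for progress despite some conflict",
    2019: "protest and crisis dominate the decline in coverage",
}
trend = {year: sentiment_score(text) for year, text in corpus.items()}
# trend falls over the years in this toy corpus
```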

Keywords: media attitude, sentiment analysis, Hong Kong press, one country two systems

Procedia PDF Downloads 121
434 Academic Goal Setting Practices of University Students in Lagos State, Nigeria: Implications for Counselling

Authors: Asikhia Olubusayo Aduke

Abstract:

Students’ inability to set data-based (specific, measurable, attainable, reliable, and time-bound) personal improvement goals threatens their academic success. Hence, the study investigated year-one students’ academic goal-setting practices at Lagos State University of Education, Nigeria. Descriptive survey research was used in carrying out this study. The study population consisted of 3,101 year-one students of the University. A sample of five hundred and one (501) participants was selected through a proportional and simple random sampling technique. The Formative Goal Setting Questionnaire (FGSQ) developed by Research Collaboration (2015) was adapted and used as the instrument for the study. Two main research questions were answered, while two null hypotheses were formulated and tested. The study revealed higher data-based goals than personal improvement goals for all students. Nevertheless, data-based and personal improvement goal-setting were higher for female students than for male students. A one-sample test statistic and ANOVA, used to analyse the data for the two hypotheses, also revealed that the mean difference between male and female year-one students’ data-based and personal improvement goal-setting was statistically significant (p < 0.05). This means year-one students’ data-based and personal improvement goals showed significant gender differences. Based on these findings, it was recommended, among others, that therapeutic techniques that can help change students’ faulty thinking and challenge their lack of desire for personal improvement be sought to treat students who have problems setting high personal improvement goals. Counsellors also need to advocate continued research into how to increase the goal-setting ability of male students and should focus more on counselling male students on goal-setting.
The main contribution of the study is to show that higher institutions must prioritize early intervention in first-year students' academic goal setting. Researching gender differences in this practice reveals a crucial insight: male students often lag behind in setting meaningful goals, which affects their motivation and performance. Focusing on this demographic with data-driven personal improvement goals can be transformative. By promoting goal setting that is specific, measurable, and focused on self-growth (rather than competition), male students can unlock their full potential. Researchers and counsellors play a vital role in detecting and supporting students with lower goal-setting tendencies. By prioritizing this intervention, all students can be empowered to set ambitious, personalized goals that ignite their passion for learning and pave the way for academic success.
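The gender comparison reported above rests on standard mean-difference statistics. A minimal sketch of an independent-samples t statistic (Welch's unequal-variance form, one common choice; the study's exact test may differ) on invented goal-setting scores, not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Independent-samples t statistic, Welch's (unequal-variance) form."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Invented goal-setting scores on a 1-5 scale, not the study's data.
female = [4.1, 4.3, 3.9, 4.5, 4.2, 4.4]
male = [3.6, 3.8, 3.5, 3.9, 3.7, 3.4]
t = welch_t(female, male)   # positive: female mean exceeds male mean
```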

Keywords: academic goal setting, counselling, practice, university, year one students

Procedia PDF Downloads 62
433 Khilafat from Khilafat-e-Rashida: The Rightly Guided the Only Form of Governance to Unite Muslim Countries

Authors: Zoaib Mirza

Abstract:

Half of the Muslim countries in the world have declared Islam the state religion in their constitutions. Yet, none of these countries have implemented authentic Islamic laws in line with the Quran (Holy Book), practices of Prophet Mohammad (P.B.U.H) called the Sunnah, and his four successors known as the Rightly Guided - Khalifa. Since their independence, these countries have adopted different government systems like Democracy, Dictatorship, Republic, Communism, and Monarchy. Instead of benefiting the people, these government systems have put these countries into political, social, and economic crises. These Islamic countries do not have equal representation and membership in worldwide political forums. Western countries lead these forums. Therefore, it is now imperative for the Muslim leaders of all these countries to collaborate, reset, and implement the original Islamic form of government, which led to the prosperity and success of people, including non-Muslims, 1400 years ago. They should unite as one nation under Khalifat, which means establishing the authority of Allah (SWT) and following the divine commandments related to the social, political, and economic systems. As they have declared Islam in their constitution, they should work together to apply the divine framework of the governance revealed by Allah (SWT) and implemented by Prophet Mohammad (P.B.U.H) and his four successors called Khalifas. This paper provides an overview of the downfall and the end of the Khalifat system by 1924, the ways in which the West caused political, social, and economic crises in the Muslim countries, and finally, a summary of the social, political, and economic systems implemented by the Prophet Mohammad (P.B.U.H) and his successors, Khalifas, called the Rightly Guided – Hazrat Abu Bakr (RA), Hazrat Omar (RA), Hazrat Usman (RA), and Hazrat Ali (RA).

Keywords: khalifat, khilafat-e-Rashida, the rightly guided, colonization, capitalism, neocolonization, government systems

Procedia PDF Downloads 120
432 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

Methods for interpolating scattered or regularly distributed data may be approximate or exact, and some are suited to data on a regular grid while others handle irregular grids. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize, and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map of TB prevalence patterns using 3-D curve fitting techniques, especially biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are represented as 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become a conventional method owing to its high precision, simplicity, and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012–2015. From the results, all the fittings showed a systematic pattern in the distribution of TB cases in the province, consistent with other spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the advantage that it can be assessed at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
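The exact-interpolation variant of the biharmonic spline can be sketched compactly: it solves a linear system in the 2-D biharmonic Green's function r²(ln r − 1), the approach behind MATLAB's 'v4' griddata (the paper's smoothing, least-squares version regularises this system instead). Coordinates and "TB counts" below are synthetic placeholders, not the Eastern Cape data:

```python
import numpy as np

def greens(r):
    """2-D biharmonic Green's function g(r) = r^2 (ln r - 1), with g(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * (np.log(r[nz]) - 1.0)
    return out

def biharmonic_fit(xy, z):
    """Solve G w = z so the spline reproduces the data points exactly."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    return np.linalg.solve(greens(d), z)

def biharmonic_eval(xy, w, q):
    """Evaluate sum_i w_i g(|q - p_i|) at query points q."""
    d = np.linalg.norm(q[:, None, :] - xy[None, :, :], axis=-1)
    return greens(d) @ w

# Synthetic coordinates and counts (placeholders, not the study data).
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.3]])
z = np.array([10.0, 14.0, 9.0, 20.0, 12.0])
w = biharmonic_fit(xy, z)
z_nodes = biharmonic_eval(xy, w, xy)                    # reproduces z exactly
z_mid = biharmonic_eval(xy, w, np.array([[0.5, 0.5]]))  # smooth off-node value
```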

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 239
431 Fluid Structure Interaction Study between Ahead and Angled Impact of AGM 88 Missile Entering Relatively High Viscous Fluid for K-Omega Turbulence Model

Authors: Abu Afree Andalib, Rafiur Rahman, Md Mezbah Uddin

Abstract:

The main objective of this work is to analyze various parameters of the AGM-88 missile using the FSI module in Ansys. Computational fluid dynamics is used to study the fluid flow pattern and fluidic phenomena such as drag, pressure force, energy dissipation, and shockwave distribution in water. Using the finite element analysis module of Ansys, structural parameters such as stress and stress density, localization point, deflection, and force propagation are determined. A separate analysis of structural parameters is done in Abaqus. A state-of-the-art coupling module is used for the FSI analysis. A fine mesh is used in every case, within the limits of the available computational power, to obtain better simulation results. The results for the above-mentioned parameters are analyzed and compared for the two phases using graphical representation. The results from Ansys and Abaqus are also shown. Computational fluid dynamics and finite element analyses, and subsequently the fluid-structure interaction (FSI) technique, are considered. The finite volume method and the finite element method are used for modelling the fluid flow and analysing the structural parameters, respectively. Feasible boundary conditions are also utilized in the research. A significant change in the interaction and interference pattern was found during impact. Both theoretically and in simulation, the angled condition was found to produce the higher impact.

Keywords: FSI (fluid-structure interaction), impact, missile, high viscous fluid, CFD (computational fluid dynamics), FEM (finite element method), FVM (finite volume method), fluid flow, fluid pattern, structural analysis, AGM-88, Ansys, Abaqus, meshing, k-omega, turbulence model

Procedia PDF Downloads 467
430 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 44
429 Extraction and Electrochemical Behaviors of Au(III) using Phosphonium-Based Ionic Liquids

Authors: Kyohei Yoshino, Masahiko Matsumiya, Yuji Sasaki

Abstract:

Recently, studies have been conducted on Au(III) extraction using ionic liquids (ILs) as extractants or diluents. ILs such as piperidinium, pyrrolidinium, and pyridinium have been studied as extractants for noble metal extractions. Furthermore, the polarity, hydrophobicity, and solvent miscibility of these ILs can be adjusted depending on their intended use. Therefore, the unique properties of ILs make them functional extraction media. The extraction mechanism of Au(III) using phosphonium-based ILs and relevant thermodynamic studies are yet to be reported. In the present work, we focused on the mechanism of Au(III) extraction and related thermodynamic analyses using phosphonium-based ILs. Triethyl-n-pentyl, triethyl-n-octyl, and triethyl-n-dodecyl phosphonium bis(trifluoromethylsulfonyl)amide, [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12), were investigated for Au(III) extraction. The IL–Au complex was identified as [P₂₂₂₅][AuCl₄] using UV–Vis–NIR and Raman spectroscopic analyses. The extraction behavior of Au(III) was investigated with a change in the [P₂₂₂ₓ][NTf₂] IL concentration from 1.0 × 10⁻⁴ to 1.0 × 10⁻¹ mol dm⁻³. The results indicate that Au(III) can be easily extracted by the anion-exchange reaction in the [P₂₂₂ₓ][NTf₂] IL. The slope range 0.96–1.01 in the plot of log D vs log[P₂₂₂ₓ][NTf₂] indicates the association of one mole of IL with one mole of [AuCl₄]⁻ during extraction. Consequently, [P₂₂₂ₓ][NTf₂] acts as an anion-exchange extractant for the extraction of Au(III) in the form of anions from chloride media. Thus, this type of phosphonium-based IL proceeds via an anion-exchange reaction with Au(III). In order to evaluate the thermodynamic parameters of the Au(III) extraction, the equilibrium constant (log Kₑₓ’) was determined from its temperature dependence. The plot of the natural logarithm of Kₑₓ’ vs the inverse of the absolute temperature (T⁻¹) yields a slope proportional to the enthalpy (ΔH).
By plotting T⁻¹ vs ln Kₑₓ’, lines with slopes in the range 1129–1421 K were obtained. Thus, the result indicated that the extraction reaction of Au(III) using the [P₂₂₂ₓ][NTf₂] IL (X = 5, 8, and 12) was exothermic (ΔH = −9.39 to −11.81 kJ mol⁻¹). The negative value of TΔS (−4.20 to −5.27 kJ mol⁻¹) indicates that microscopic randomness is preferred in the [P₂₂₂₅][NTf₂] IL extraction system over [P₂₂₂₁₂][NTf₂] IL. The total negative change in Gibbs energy (−5.19 to −6.55 kJ mol⁻¹) for the extraction reaction would thus be influenced relatively by the TΔS value through the number of carbon atoms in the alkyl side chain, even though ΔH contributes most of the total negative change in Gibbs energy. Electrochemical analysis revealed that the extracted Au(III) can be reduced in two steps: (i) Au(III)/Au(I) and (ii) Au(I)/Au(0). The diffusion coefficients of the extracted Au(III) species in [P₂₂₂ₓ][NTf₂] (X = 5, 8, and 12) were evaluated from 323 to 373 K using semi-integral and semi-differential analyses. Owing to the viscosity of the IL medium, the diffusion coefficient of the extracted Au(III) decreases with increasing alkyl chain length. The Au 4f₇/₂ spectrum from X-ray photoelectron spectroscopy revealed that the Au electrodeposits obtained after 10 cycles of continuous extraction and electrodeposition were in the metallic state.
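The van 't Hoff treatment behind these numbers can be sketched in a few lines: ln Kₑₓ’ is linear in T⁻¹ with slope −ΔH/R, so a positive slope implies an exothermic reaction. The temperatures, ΔH, and entropy term below are synthetic placeholders chosen only to mimic a mildly exothermic extraction, not the measured values:

```python
import numpy as np

R = 8.314                                   # gas constant, J mol^-1 K^-1
T = np.array([303.0, 313.0, 323.0, 333.0])  # temperatures, K (illustrative)
dH_true = -10.0e3                           # assumed enthalpy, J mol^-1
dS_over_R = 2.0                             # assumed entropy term dS/R
lnK = -dH_true / (R * T) + dS_over_R        # van 't Hoff: lnK = -dH/(RT) + dS/R

slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH = -slope * R                             # recovers the assumed -10 kJ/mol
```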

Keywords: au(III), electrodeposition, phosphonium-based ionic liquids, solvent extraction

Procedia PDF Downloads 107
428 Subjective Probability and the Intertemporal Dimension of Probability to Correct the Misrelation Between Risk and Return of a Financial Asset as Perceived by Investors. Extension of Prospect Theory to Better Describe Risk Aversion

Authors: Roberta Martino, Viviana Ventre

Abstract:

From a theoretical point of view, the risk associated with an investment and its expected value are directly proportional, in the sense that the market allows a greater return to those who are willing to take a greater risk. However, empirical evidence shows that this relationship is distorted in the minds of investors and is perceived as exactly the opposite. To understand the discrepancy between the investor's actual actions and the theoretical predictions, this paper analyzes the essential parameters used for the valuation of financial assets, with particular attention to two elements: probability and the passage of time. Although these may at first glance seem to be two distinct elements, they are closely related. In particular, the error in the theoretical description of the relationship between risk and return lies in the failure to consider the impatience that is generated in the decision-maker when events that have not yet happened enter the decision-making context. In this context, probability loses its objective meaning; in relation to the psychological aspects of the investor, it can only be understood as the degree of confidence that the investor has in the occurrence or non-occurrence of an event. Moreover, the concept of objective probability considers neither the intertemporality that characterizes financial activities nor the limited cognitive capacity of the decision-maker. Cognitive psychology has shown that the mind trades off quality against effort when faced with very complex choices. To evaluate an event that has not yet happened, it is necessary to imagine it happening. This projection into the future requires cognitive effort and is what differentiates choices under conditions of risk from choices under conditions of uncertainty.
In fact, since the receipt of the outcome in choices under risk is imminent, the mechanism of self-projection into the future is not necessary to imagine the consequence of the choice, and decision-makers dwell on the objective analysis of possibilities. Financial activities, on the other hand, develop over time, and objective probability is too static to capture the anticipatory emotions that the self-projection mechanism generates in the investor. Assuming that uncertainty is inherent in valuations of events that have not yet occurred, the focus must shift from risk management to uncertainty management. Only in this way can the intertemporal dimension of the decision-making environment and the haste generated by the financial market be accounted for. The work considers an extension of prospect theory with a temporal component, with the aim of describing the attitude towards risk with respect to the passage of time.
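For reference, the standard (atemporal) prospect theory machinery that the proposed extension builds on can be written down directly; the parameter values are Tversky and Kahneman's well-known 1992 estimates, used here only as an illustration of loss aversion and probability distortion:

```python
import math

# Standard cumulative prospect theory forms (Tversky & Kahneman), with
# their widely cited 1992 parameter estimates; purely illustrative here.
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x):
    """S-shaped value function: concave for gains, steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** BETA)

def weight(p, gamma=GAMMA):
    """Inverse-S probability weighting: overweights small probabilities."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

loss_aversion = -value(-100) / value(100)   # equals LAMBDA since ALPHA == BETA
```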

Keywords: impatience, risk aversion, subjective probability, uncertainty

Procedia PDF Downloads 107
427 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the most precise distribution model for the wind speed data must be found. In this research, numerous goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, Chi-Square, root mean square error (RMSE), AIC, and BIC, were combined to determine the best-fitted wind speed distribution. The suggested method takes all criteria into account collectively. It was applied to fit 14 distribution models statistically to wind speed data at four sites in Pakistan. The results show that this method provides the best means of selecting the most suitable wind speed distribution, and the graphical representation is consistent with the analytical results. This research presents three estimation methods, MLM, MOM, and MLE, that can be used to fit the different distributions used to estimate the wind. The third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimate of wind energy. To validate the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and maximum likelihood estimation. In the comparative analysis, based on several goodness-of-fit criteria, the performance of the considered techniques is evaluated on actual wind speeds measured over different time periods. The results obtained show that MOM provides a more precise estimation than the other familiar approaches for estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
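As one illustration of the ingredients involved, the sketch below fits a Weibull distribution (the most common wind speed model) by the method of moments (MOM) and computes a Kolmogorov-Smirnov statistic on simulated wind speeds; the data and the single-distribution scope are simplifications of the paper's fourteen-distribution comparison:

```python
import math
import random

def weibull_mom(sample):
    """Estimate Weibull (shape k, scale c) by matching mean and variance."""
    n = len(sample)
    m = sum(sample) / n
    v = sum((x - m) ** 2 for x in sample) / (n - 1)
    cv2 = v / m ** 2                 # squared coefficient of variation

    def cv2_of(k):                   # CV^2 implied by shape k
        g1 = math.gamma(1 + 1 / k)
        return math.gamma(1 + 2 / k) / g1 ** 2 - 1

    lo, hi = 0.1, 50.0               # bisection: cv2_of is decreasing in k
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cv2_of(mid) > cv2 else (lo, mid)
    k = (lo + hi) / 2
    c = m / math.gamma(1 + 1 / k)
    return k, c

def ks_statistic(sample, cdf):
    """Max gap between the empirical CDF and a fitted CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

random.seed(1)
speeds = [random.weibullvariate(8.0, 2.0) for _ in range(2000)]  # c=8, k=2
k, c = weibull_mom(speeds)
D = ks_statistic(speeds, lambda x: 1 - math.exp(-((x / c) ** k)))
```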

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 84
426 The Predictive Utility of Subjective Cognitive Decline Using Item Level Data from the Everyday Cognition (ECog) Scales

Authors: J. Fox, J. Randhawa, M. Chan, L. Campbell, A. Weakely, D. J. Harvey, S. Tomaszewski Farias

Abstract:

Early identification of individuals at risk for conversion to dementia provides an opportunity for preventative treatment. Many older adults (30-60%) report specific subjective cognitive decline (SCD); however, previous research is inconsistent in terms of what types of complaints predict future cognitive decline. The purpose of this study is to identify which specific complaints from the Everyday Cognition (ECog) scales, a measure of self-reported concerns about everyday abilities across six cognitive domains, are associated with: 1) conversion from a clinical diagnosis of normal to either MCI or dementia (categorical variable) and 2) progressive cognitive decline in memory and executive function (continuous variables). A total of 415 cognitively normal older adults were monitored annually for an average of 5 years. Cox proportional hazards models were used to assess associations between self-reported ECog items and progression to impairment (MCI or dementia). A total of 114 individuals progressed to impairment; the mean time to progression was 4.9 years (SD=3.4 years, range=0.8-13.8). Follow-up models were run controlling for depression. A subset of individuals (n=352) underwent repeat cognitive assessments for an average of 5.3 years. For those individuals, mixed effects models with random intercepts and slopes were used to assess associations between ECog items and change in neuropsychological measures of episodic memory or executive function. Prior to controlling for depression, subjective concerns on five of the eight Everyday Memory items, three of the nine Everyday Language items, one of the seven Everyday Visuospatial items, two of the five Everyday Planning items, and one of the six Everyday Organization items were associated with subsequent diagnostic conversion (HR=1.25 to 1.59, p=0.003 to 0.03).
However, after controlling for depression, only two specific complaints (remembering appointments, meetings, and engagements; understanding spoken directions and instructions) were associated with subsequent diagnostic conversion. Episodic memory in individuals reporting no concern on ECog items did not significantly change over time (p>0.4). More complaints on seven of the eight Everyday Memory items, three of the nine Everyday Language items, and three of the seven Everyday Visuospatial items were associated with a decline in episodic memory (interaction estimate=-0.055 to 0.001, p=0.003 to 0.04). Executive function in those reporting no concern on ECog items declined slightly (p<0.001 to 0.06). More complaints on three of the eight Everyday Memory items and three of the nine Everyday Language items were associated with a decline in executive function (interaction estimate=-0.021 to -0.012, p=0.002 to 0.04). These findings suggest that specific complaints across several cognitive domains are associated with diagnostic conversion. Specific complaints in the domains of Everyday Memory and Language are associated with decline in both episodic memory and executive function. Increased monitoring and treatment of individuals with these specific SCD complaints may be warranted.
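For readers unfamiliar with the Cox model used in analyses like this one, the following minimal sketch (entirely made-up toy data, a single binary covariate standing in for "endorsed an ECog item") estimates a hazard ratio by maximizing the Cox partial likelihood. It illustrates the general technique only, not the study's actual analysis or data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy cohort: x = 1 if the participant endorsed the item, 0 otherwise.
time  = np.array([2.1, 3.5, 4.9, 1.2, 6.0, 5.5, 0.8, 7.3])  # years of follow-up
event = np.array([1,   1,   0,   1,   0,   1,   1,   0])     # 1 = converted
x     = np.array([1,   1,   0,   1,   0,   0,   1,   0])

def neg_log_partial_likelihood(beta):
    # Sum over observed events of: beta*x_i - log(sum of exp(beta*x_j) over
    # everyone still at risk at that event time).
    ll = 0.0
    for i in np.where(event == 1)[0]:
        at_risk = time >= time[i]
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[at_risk])))
    return -ll

beta_hat = minimize_scalar(neg_log_partial_likelihood).x
print(f"hazard ratio = {np.exp(beta_hat):.2f}")  # HR > 1: endorsers convert sooner
```

In the study's terms, an HR of 1.25 to 1.59 for an ECog item means endorsing that complaint raises the instantaneous risk of conversion by roughly 25-59%, all else equal.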

Keywords: alzheimer’s disease, dementia, memory complaints, mild cognitive impairment, risk factors, subjective cognitive decline

Procedia PDF Downloads 80
425 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it formed the basis of our application. The reliability assessment process included the following stages, which were reflected in the application: 1) Construction of a graphical scheme of the structural reliability of the system; 2) Transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) Description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) Transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) Replacement of logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; 6) Calculation of the “weights” of the system's elements. Using the logical-probabilistic methods, models, and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and design of systems with an optimal structure are carried out.
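The role of stages 3)-5) can be illustrated with a small sketch. For a hypothetical system described by its minimal success paths (assumed components and reliabilities, not from the paper), the probability that at least one path works is computed by inclusion-exclusion over independent components; this yields the same reliability polynomial that a DNF-to-ODNF orthogonalization would produce.

```python
from itertools import combinations

# Assumed component reliabilities for a hypothetical 2-out-of-3-style structure.
p = {1: 0.9, 2: 0.8, 3: 0.95}

# Minimal success paths: the system works if every component of some path works.
paths = [{1, 2}, {1, 3}, {2, 3}]

def system_reliability(paths, p):
    """P(union of path events) via inclusion-exclusion, components independent."""
    total = 0.0
    for k in range(1, len(paths) + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(paths, k):
            comps = set().union(*combo)   # components needed by all paths in combo
            prob = 1.0
            for c in comps:
                prob *= p[c]
            total += sign * prob
    return total

print(round(system_reliability(paths, p), 4))
```

For this 2-out-of-3 example the polynomial reduces to p1*p2 + p1*p3 + p2*p3 - 2*p1*p2*p3; orthogonalization exists precisely to avoid the exponential number of inclusion-exclusion terms on large systems.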

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 66
424 Influence of Non-Formal Physical Education Curriculum, Based on Olympic Pedagogy, for 11-13 Years Old Children Physical Development

Authors: Asta Sarkauskiene

Abstract:

The pedagogy of Olympic education is based upon the main idea of P. de Coubertin: that physical education can and must support the education of the whole person, the ideal of archaic Greece, which viewed the human being as a single whole composed of three interconnected functions: physical, psychical, and spiritual. The following research question was formulated in the present study: What curriculum of non-formal physical education in school can positively influence the physical development of 11-13-year-old children? The aim of this study was to formulate and implement a curriculum of non-formal physical education, based on Olympic pedagogy, and assess its effectiveness for the physical development of 11-13-year-old children. The research was conducted in two stages. In the first stage, 51 fifth-grade children (Mage = 11.3 years) participated in a quasi-experiment for two years. Children were organized into two groups: E and C. Both groups shared session duration (1 hour) and frequency (twice a week) but differed in their education curriculum. The experimental group (E) worked under the program developed by us. The priorities of the E group were: training of physical powers in unity with psychical and spiritual powers; integral growth of physical development, physical activity, physical health, and physical fitness; integration of children with lower health and physical fitness levels; and content that corresponds to children's needs, abilities, and physical and functional powers. The control group (C) worked according to NFPE programs prepared by teachers and approved by the school principal and the school methodical group. The priorities of the C group were: teaching and development of motion actions; training of physical qualities; and training of the most physically capable children. In the second stage (after four years), 72 sixth-graders (Mage = 13.00) from the same comprehensive schools participated in the research. Children were organized into first and second groups.
The curriculum of the first group was modified, while that of the second remained the same as group C's. Anthropometric (height, weight, BMI) and physiometric (VC, right and left handgrip strength) measurements were taken in both groups. A dependent t-test indicated that over two years the height, weight, and right and left handgrip strength indices of E and C group girls and boys increased significantly, p < 0.05. The BMI indices of E group girls and boys did not change significantly, p > 0.05, i.e., the height-to-weight ratio of girls who participated in NFPE in school became more proportional. The VC indices of C group girls did not differ significantly, p > 0.05. An independent t-test indicated that in the first and second research stages the differences between the groups' anthropometric and physiometric measurements were not significant, p > 0.05. The formulated and implemented curriculum of non-formal education in school, based on Olympic pedagogy, had the greatest positive influence on decreasing the BMI and increasing the VC of 11-13-year-old children.
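A minimal sketch of the dependent (paired) t-test used above, with made-up pre/post height measurements rather than the study's data:

```python
from scipy import stats

# Hypothetical heights (cm) for six children at the start and after two years.
height_pre  = [148.0, 152.5, 150.1, 155.3, 149.8, 153.2]
height_post = [151.2, 155.0, 153.4, 158.1, 152.6, 156.0]

# Paired test: each child is compared with themselves, so between-child
# variability drops out and only the within-child change is tested.
t_stat, p_value = stats.ttest_rel(height_post, height_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant change
```

The between-group comparisons reported above would instead use `stats.ttest_ind`, since group E and group C contain different children.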

Keywords: non-formal physical education, olympic pedagogy, physical development, health sciences

Procedia PDF Downloads 563