Search results for: job selection
1761 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics
Authors: Michael Lousis
Abstract:
This research study is concerned with learners’ errors in Arithmetic and Algebra. The data resulted from a broader international comparative research program called the Kassel Project. However, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher’s discretion, but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners’ mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators’ intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners’ errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners’ course of thinking that led them to specific observable actions, resulting in particular errors in specific problems, rather than analysing scripts with the students’ thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners’ errors from the perspective of the participants in the investigation. This particular fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations.
Thus the belief systems of the behaviourist, Piagetian-constructivist, and philosophy of science perspectives were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as a general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, the reasons why the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives a sound and legitimate basis for the development of future studies concerning mathematical error analysis are explained.
Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors
Procedia PDF Downloads 190
1760 Using Neural Networks for Click Prediction of Sponsored Search
Authors: Afroze Ibrahim Baqapuri, Ilya Trofimov
Abstract:
Sponsored search is a multi-billion dollar industry and a major source of revenue for search engines (SEs). Click-through-rate (CTR) estimation plays a crucial role in ad selection and greatly affects SE revenue, advertiser traffic and user experience. We propose a novel architecture for the CTR prediction problem that combines artificial neural networks (ANNs) with decision trees. First, we compare the ANN against other popular machine learning models used for this task. Then we combine the ANN with MatrixNet (a proprietary implementation of boosted trees) and evaluate the performance of the system as a whole. The results show that our approach provides a significant improvement over existing models.
Keywords: neural networks, sponsored search, web advertisement, click prediction, click-through rate
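The hybrid described above can be sketched with open-source stand-ins. Since MatrixNet is proprietary, scikit-learn's gradient boosted trees serve here as a substitute, blended with a small neural network by simply averaging the two predicted click probabilities; the data, model sizes and blend weights are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: blend a small MLP with gradient boosted trees as a stand-in
# for the ANN + MatrixNet combination described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for ad-impression features (clicks are the rarer class).
X, y = make_classification(n_samples=600, n_features=10, weights=[0.8, 0.2],
                           random_state=0)

gbt = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                    random_state=0).fit(X, y)

# Simple model blend: average the two CTR estimates.
ctr = 0.5 * (gbt.predict_proba(X)[:, 1] + ann.predict_proba(X)[:, 1])
print(round(float(ctr.mean()), 3))
```

In practice such blends are tuned on held-out data; equal weighting is just the simplest choice for the sketch.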
Procedia PDF Downloads 572
1759 Removing Maturational Influences from Female Youth Swimming: The Application of Corrective Adjustment Procedures
Authors: Clorinda Hogan, Shaun Abbott, Mark Halaki, Marcela Torres Catiglioni, Goshi Yamauchi, Lachlan Mitchell, James Salter, Michael Romann, Stephen Cobley
Abstract:
Introduction: Common annual age-group competition structures unintentionally introduce participation inequalities, performance (dis)advantages and selection biases due to the effect of maturational variation between youth swimmers. On this basis, there are implications for improving performance evaluation strategies. Therefore, the aims were to: (1) determine maturity timing distributions in female youth swimming; (2) quantify the relationship between maturation status and 100-m front-crawl (FC) performance; (3) apply Maturation-based Corrective Adjustment Procedures (Mat-CAPs) to remove the influence of maturational status on performance. Methods: (1) Cross-sectional analysis of 663 female swimmers (10-15 years) who underwent assessment of anthropometrics (mass, height and sitting height) and estimation of maturity timing and offset. (2) 100-m front-crawl performance (seconds) was assessed at Australian regional, state, and national-level competitions between 2016-2020. To determine the relationship between maturation status and 100-m front-crawl performance, maturity offset (MO) was plotted against 100-m FC performance time. The expected maturity status-performance relationship for females aged 10-15 years was obtained through a quadratic function (y = ax² + bx + c) from unstandardized coefficients. The regression equation was subsequently used for Mat-CAPs. (3) Participants aged 10-13 years were categorised into maturity-offset categories. Maturity offset distributions for raw (‘All’, ‘Top 50%’ and ‘Top 25%’) and correctively adjusted swim times were examined. Chi-square, Cramer’s V and odds ratios (ORs) determined the occurrence of maturation biases for each age group and selection level. Results: (1) Maturity timing distributions illustrated an overrepresentation of ‘normative’ maturing swimmers (11.82 ± 0.40 years), with a descriptive shift toward early maturation relative to the normative population.
(2) A curvilinear relationship between maturity offset and swim performance was identified (R² = 0.53, P < 0.001) and subsequently utilised for Mat-CAPs. (3) Raw maturity offset categories identified partial maturation status skewing towards biologically older swimmers at 10/11 and 12 years, with effect magnitudes increasing in the ‘Top 50%’ and ‘Top 25%’ of performance times. Following application of Mat-CAPs, maturity offset biases were removed in similar age groups and selection levels. When adjusting performance times for maturity offset, Mat-CAPs succeeded in mitigating maturational biases until approximately one year post peak height velocity. The overrepresentation of ‘normative’ maturing female swimmers contrasted with the substantial overrepresentation of ‘early’ maturing male swimmers found previously in 100-m front-crawl. These findings suggest early maturational timing is not advantageous in females, but the findings associated with Aim 2 highlight how advanced maturational status remained beneficial to performance. Observed differences between female and male maturational biases may relate to the differential impact of physiological development during the pubertal years: females experience greater increases in fat mass and potentially different changes in body shape, which can negatively affect swim performance. Conclusions: Transient maturation status-based participation and performance advantages were apparent within a large sample of Australian female youth 100-m FC swimmers. By removing maturity status performance biases within female youth swimming, Mat-CAPs could help improve participation experiences and the accuracy of identifying genuinely skilled female youth swimmers.
Keywords: athlete development, long-term sport participation, performance evaluation, talent identification, youth competition
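The corrective adjustment described above can be sketched as follows: fit the quadratic maturity-offset/performance function and subtract each swimmer's predicted maturation component relative to a reference offset. All data below are synthetic; the coefficients, reference point and noise level are illustrative assumptions, not the study's values.

```python
# Hedged sketch of the Mat-CAPs idea on synthetic data: fit
# y = a*x**2 + b*x + c for time vs maturity offset, then adjust raw times
# to a common reference offset so the maturation component is removed.
import numpy as np

rng = np.random.default_rng(1)
offset = rng.uniform(-3.0, 2.0, 200)            # years from peak height velocity
# Biologically older (higher offset) -> faster, plus individual skill noise.
time = 80.0 - 6.0 * offset + 0.8 * offset**2 + rng.normal(0, 1.5, 200)

a, b, c = np.polyfit(offset, time, 2)           # expected time at each offset
reference = 0.0                                  # adjust everyone to offset = 0
expected = a * offset**2 + b * offset + c
expected_ref = a * reference**2 + b * reference + c
adjusted = time - (expected - expected_ref)      # remove maturation component

# After adjustment, times should no longer track maturity offset.
r_raw = float(np.corrcoef(offset, time)[0, 1])
r_adj = float(np.corrcoef(offset, adjusted)[0, 1])
print(round(r_raw, 2), round(r_adj, 2))
```

Because the adjustment subtracts the fitted curve, the adjusted times are uncorrelated with maturity offset by construction, which is exactly the bias-removal property the abstract reports.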
Procedia PDF Downloads 182
1758 A Combined AHP-GP Model for Selecting Knowledge Management Tool
Authors: Ahmad Sarfaraz, Raiyad Herwies
Abstract:
In this paper, a multi-criteria decision making analysis is used to help an organization select the KM tool that best fits and serves its needs. The AHP model, based on a previous study, is used to identify the main criteria and sub-criteria incorporated in the selection process. Different KM tool alternatives with different criteria are compared and weighted accurately for incorporation into the GP model. The main goal is to combine the GP model with the AHP model to ensure that selecting the KM tool takes the resource constraints into account. Two important issues are discussed in this paper: how different factors can be taken into consideration in forming the AHP model, and how to incorporate the AHP results into the GP model for better results.
Keywords: knowledge management, analytical hierarchy process, goal programming, multi-criteria decision making
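The AHP step can be sketched as follows: criteria weights are derived from a pairwise comparison matrix via the principal eigenvector, with a consistency check before the weights would be passed to a GP model. The matrix, criteria names and random index below are an invented illustration, not the study's survey data.

```python
# Hedged sketch of the AHP weighting step: principal-eigenvector weights
# from a Saaty-scale pairwise comparison matrix, plus a consistency ratio.
import numpy as np

# Pairwise comparisons for 3 hypothetical criteria
# (cost, ease of use, scalability); A[i, j] = importance of i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalised criteria weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
cr = ci / 0.58                            # Saaty's random index for n = 3
print(np.round(w, 3), round(cr, 3))
```

A consistency ratio below 0.1 is the usual acceptance threshold; the example matrix passes it, so its weights could feed a downstream GP model.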
Procedia PDF Downloads 385
1757 Evolutionary Genomic Analysis of Adaptation Genomics
Authors: Agostinho Antunes
Abstract:
The completion of the human genome sequencing in 2003 opened a new perspective on the importance of whole genome sequencing projects, and currently multiple species are having their genomes completely sequenced, from simple organisms, such as bacteria, to more complex taxa, such as mammals. This voluminous sequencing data, generated across multiple organisms, also provides the framework to better understand the genetic makeup of such species and related ones, allowing exploration of the genetic changes underlying the evolution of diverse phenotypic traits. Here, recent results from our group, retrieved from comparative evolutionary genomic analyses of varied species, will be considered to exemplify how gene novelty and gene enhancement by positive selection might have been determinant in the success of adaptive radiations into diverse habitats and lifestyles.
Keywords: adaptation, animals, evolution, genomics
Procedia PDF Downloads 429
1756 The Human Resource Management Systems and Practices of Multinational Companies in Their Nigerian Subsidiaries
Authors: Suwaiba Sabiu Bako, Yaw Debrah
Abstract:
In spite of the extensive literature available on the human resource management (HRM) systems and practices of multinational companies (MNCs) from developed countries, there are gaps concerning the HRM systems and practices of emerging countries’ multinational companies (EMNCs). This study examines the transfer of HRM practices to the Nigerian subsidiaries of MNCs from South Africa. It reveals that the South African MNCs hybridise their recruitment and selection processes and localise their compensation and employee relations. It also shows that performance appraisal, talent management and code-of-conduct practices are largely transferred to subsidiaries with minimal adaptation.
Keywords: EMNCs, HRM practices, HRM systems, Nigeria, South Africa
Procedia PDF Downloads 113
1755 The Principle Probabilities of Space-Distance Resolution for a Monostatic Radar and Realization in Cylindrical Array
Authors: Anatoly D. Pluzhnikov, Elena N. Pribludova, Alexander G. Ryndyk
Abstract:
In conjunction with the problem of target selection against a clutter background, the influence of the scanning rate on the spatial-temporal signal structure, the generalized multivariate correlation function and the quality of the resolution is analysed as the pulse repetition frequency increases. The possibility of object space-distance resolution, conditioned by the range-to-angle conversion at an increased scanning rate, is substantiated. Calculations for a real cylindrical array at a high scanning rate are presented. The high scanning rate makes it possible to obtain a signal-to-noise improvement of the order of 10 dB for space-time signal processing.
Keywords: antenna pattern, array, signal processing, spatial resolution
Procedia PDF Downloads 180
1754 Rheological Evaluation of Various Indigenous Gums
Authors: Yogita Weikey, Shobha Lata Sinha, Satish Kumar Dewangan
Abstract:
In the present investigation, the rheology of three different natural gums has been evaluated experimentally using an MCR 102 rheometer. Samples with varying concentrations of solid gum powder were prepared. Their non-Newtonian behavior has been observed through consistency plots and viscosity variation plots at different solid concentrations. The viscosity-shear rate curves of the gums are similar, showing shear-thinning (pseudoplastic) behavior. The values of k and n are calculated using various models. Results show that the Herschel–Bulkley rheological model reliably describes shear stress as a function of shear rate. R² values are also calculated to support the gum selection.
Keywords: bentonite, Indian gum, non-Newtonian model, rheology
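A minimal sketch of fitting the Herschel–Bulkley model τ = τ₀ + k·γ̇ⁿ to flow-curve data is given below; the data are synthetic (with n < 1, i.e. shear-thinning), and the grid-search fitting strategy is an illustrative assumption rather than the rheometer software's method.

```python
# Hedged sketch: fit the Herschel-Bulkley model tau = tau0 + k * gdot**n.
# Synthetic data with true tau0 = 2, k = 1.5, n = 0.4 (shear-thinning).
import numpy as np

gdot = np.linspace(0.5, 100.0, 60)                 # shear rate, 1/s
tau = 2.0 + 1.5 * gdot**0.4                        # shear stress, Pa

def fit_hb(gdot, tau):
    """Grid-search tau0; for each candidate, fit k and n by a log-log line."""
    best = None
    for tau0 in np.linspace(0.0, tau.min() - 1e-6, 400):
        y = np.log(tau - tau0)
        n, logk = np.polyfit(np.log(gdot), y, 1)
        resid = tau0 + np.exp(logk) * gdot**n - tau
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, tau0, float(np.exp(logk)), float(n))
    return best[1:]

tau0, k, n = fit_hb(gdot, tau)
print(round(tau0, 2), round(k, 2), round(n, 2))
```

Recovering n < 1 confirms pseudoplastic behavior, mirroring the k and n calculation the abstract describes.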
Procedia PDF Downloads 310
1753 Challenges for Competency-Based Learning Design in Primary School Mathematics in Mozambique
Authors: Satoshi Kusaka
Abstract:
The term ‘competency’ is attracting considerable scholarly attention worldwide with the advance of globalization in the 21st century and the arrival of a knowledge-based society. In the current world environment, familiarity with varied disciplines is regarded as vital for personal success. The idea of a competency-based educational system was mooted by the ‘Definition and Selection of Competencies (DeSeCo)’ project conducted by the Organization for Economic Cooperation and Development (OECD). Further, attention to this topic is not limited to developed countries; it can also be observed in developing countries. For instance, the importance of a competency-based curriculum was mentioned in the ‘2013 Harmonized Curriculum Framework for the East African Community’, which recommends key competencies that should be developed in primary schools. The introduction of such curricula and the review of programs are actively being carried out, primarily in the East African Community but also in neighboring nations. Taking Mozambique as a case in point, the present paper examines the conception of ‘competency’ as a target of frontline education in developing countries. It also aims to discover the manner in which the syllabus, textbooks and lessons, among other things, in primary-level math education are developed, and to determine the challenges faced in the process. This study employs the perspective of competency-based education design to analyze how the term ‘competency’ is defined in the primary-level math syllabus, how it is reflected in the textbooks, and how the lessons are actually developed. ‘Practical competency’ is mentioned in the syllabus, and the description of the term lays emphasis on learners’ ability to interactively apply socio-cultural and technical tools, which is one of the key competencies advocated in OECD’s ‘Definition and Selection of Competencies’ project.
However, most of the content of the textbooks pertains to ‘basic academic ability’, and in actual classroom practice, teachers often impart lessons straight from the textbooks. It is clear that the aptitude of teachers and their classroom routines are greatly dependent on the cultivation of their own ‘practical competency’ as it is defined in the syllabus. In other words, there is great divergence between the ‘syllabus’, which is the intended curriculum, and the content of the ‘textbooks’. In fact, the material in the textbooks should serve as the bridge between the syllabus, which forms the guideline, and the lessons, which represent the ‘implemented curriculum’. Moreover, the results obtained from this investigation reveal that the problem can only be resolved through the cultivation of ‘practical competency’ in teachers, which is currently insufficient.
Keywords: competency, curriculum, mathematics education, Mozambique
Procedia PDF Downloads 194
1752 Recovery of Acetonitrile from Aqueous Solutions by Extractive Distillation: The Effect of Entrainer
Authors: Aleksandra Y. Sazonova, Valentina M. Raeva
Abstract:
The aim of this work was to apply extractive distillation for acetonitrile removal from water solutions; to validate a thermodynamic criterion based on excess Gibbs energy for the entrainer selection process for acetonitrile-water mixture separation, showing its potential efficiency under isothermal as well as isobaric conditions (the conditions of a real distillation process); and to simulate and analyze an extractive distillation process with the chosen entrainers, optimizing the number of trays and feed positions, the entrainer/original mixture ratio and the reflux ratio. An equimolar feed stream composition was chosen for the process, and a comparison of the energy consumptions was carried out. Glycerol was suggested as the most energetically and ecologically suitable entrainer.
Keywords: acetonitrile, entrainer, extractive distillation, water
Procedia PDF Downloads 267
1751 Freshwater Source of Sapropel for Healthcare
Authors: Ilona Pavlovska, Aneka Klavina, Agris Auce, Ivars Vanadzins, Alise Silova, Laura Komarovska, Linda Paegle, Baiba Silamikele, Linda Dobkevica
Abstract:
Freshwater sapropel is a common material formed by complex biological transformations of Holocene sediments in the basins of Latvian lakes and has the potential to be used as medical mud. Sapropel forms over a long period in shallow waters as slowly decomposing organic sediment, and its composition varies with the location of the source, the surroundings, the water regime, etc. The official geological survey of Latvian lakes, from the Latvian lake database (ezeri.lv), was used in the selection of the exploration area. The multifunctional effect of sapropel on the whole organism is explained by its complex chemical and biological structure. This unique organic substance, with its ability to retain heat for a long time, ensures deep tissue warming and has a positive effect on the treatment of various joint and skin diseases. Sapropel is a valuable resource with multiple areas of application. The current study comprised an investigation of sapropel sediments and a survey of the five sites selected according to the criteria. It also included sampling at different depths with initial treatment of the samples, evaluation of external signs, study of physical-chemical parameters, analysis of biochemical parameters and evaluation of microbiological indicators. The main selection criteria were the depth of the sapropel deposits, the hydrological regime, the history of agriculture next to the lake, and the potential exposure to industrial waste. One hundred and five sapropel samples were obtained from five lakes (Audzelu, Dunakla, Ivusku, Zielu, and Mazars Kivdalova) during the winter. The main goal of the study is to carry out detailed and systematic research on the medical properties of sapropel obtained in Latvia, to promote its scientifically based use in balneology, to develop new medical procedures and services, and to promote the development of new exportable products.
Latvian freshwater sapropel could be used as a raw material for producing sapropel extract for use as a remedy. All of the above brings us to the main question for sapropel usage in medicine, balneology, and pharmacy: how to develop quality criteria for raw sapropel and its extracts. The research was co-financed by the project "Analysis of characteristics of medical sapropel and its usage for medical purposes and elaboration of industrial extraction methods" No. 1.1.1.1/16/A/165.
Keywords: balneology, extracts, freshwater sapropel, Latvian lakes, medical mud, sapropel
Procedia PDF Downloads 265
1750 Mining Educational Data to Support Students’ Major Selection
Authors: Kunyanuth Kularbphettong, Cholticha Tongsiri
Abstract:
This paper aims to create a model for students choosing an emphasis track when majoring in computer science at Suan Sunandha Rajabhat University. The objective of this research is to develop a suggestion system using data mining techniques to analyze knowledge and derive decision rules. Such relationships can be used to demonstrate the reasonableness of a student choosing a track as well as to support his/her decision, and the system is verified by experts in the field. The sample consists of computer science students who used the system and completed a questionnaire to gauge satisfaction. The system is found to be satisfactory by both experts and students.
Keywords: data mining technique, the decision support system, knowledge and decision rules, education
Procedia PDF Downloads 423
1749 Modifying Assessment Modes in the Science Classroom as a Solution to Examination Malpractice
Authors: Catherine Omole
Abstract:
Examination malpractice includes acts that tamper with the collection of accurate results during the conduct of an examination, thereby giving a student an undue advantage over his colleagues. Even though examination malpractice has been a lingering problem, examinations may not be easy to do away with completely, as they are an important feedback tool in the learning process with several other functions, e.g., for the purposes of selection, placement, certification and promotion. Examination malpractice has created many problems, such as reliance on a weak workforce built on false assessment results. The question is why this problem persists despite the measures that have been taken to curb this ugly trend over the years. This opinion paper identifies modifications that could relieve students of examination stress and thus increase their effort towards effective learning and discourage examination malpractice in the long run.
Keywords: assessment, examination malpractice, learning, science classroom
Procedia PDF Downloads 260
1748 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database
Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani
Abstract:
The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern Eurasian margin, resulting in a considerably active seismic region. The Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015), supported by NATO, underpinned the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models produced later by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance North Albania-Montenegro, South Albania-Greece, etc. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records with Mw ranging over the interval 3.7-7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models. In some cases, it was observed that some events were more extensively documented in one database than in another, like the 1979 Montenegro earthquake, with a considerably larger number of records in the BSHAP analogue strong motion database than in ESM23. Therefore, the strong motion flat-file provided by the BSHAP project was merged with the ESM23 database for the polygon studied in this project.
After performing the preliminary residual analysis, the candidate GMPEs were identified. This was done using the GMPE performance metrics available within the SMT in the OpenQuake platform: the likelihood model and Euclidean Distance-Based Ranking (EDR). Finally, a GMPE logic tree was selected for this study, and following the selection of candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
Keywords: residual analysis, GMPE, western balkan, strong motion, openquake
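The average-sample log-likelihood (LLH) ranking mentioned above can be sketched as follows: each GMPE is scored by the average negative log₂ probability of the observations under its predicted normal distribution, and logic-tree weights are taken proportional to 2^(-LLH). The two "models" and the residual data below are toy assumptions, not the candidate GMPEs of this study.

```python
# Hedged sketch of Scherbaum-style LLH ranking: a model scores better
# (lower LLH) when observed ln-motions are probable under its predicted
# mean and sigma. Both "GMPEs" here are toy normal distributions.
import math
import random

random.seed(0)
obs = [random.gauss(0.0, 0.7) for _ in range(500)]   # observed ln-residuals

def llh(observations, mu, sigma):
    """Average negative log2 likelihood under N(mu, sigma)."""
    total = 0.0
    for x in observations:
        pdf = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        total += -math.log2(pdf)
    return total / len(observations)

# GMPE A matches the data-generating distribution; GMPE B is biased and
# overestimates the aleatory variability.
llh_a = llh(obs, 0.0, 0.7)
llh_b = llh(obs, 0.5, 1.2)

# Logic-tree weights proportional to 2**(-LLH), then normalised.
wa, wb = 2.0 ** -llh_a, 2.0 ** -llh_b
wa, wb = wa / (wa + wb), wb / (wa + wb)
print(llh_a < llh_b, round(wa, 2))
```

The better-fitting model receives both the lower LLH and the larger logic-tree weight, which is the behavior the weighting scheme is designed to produce.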
Procedia PDF Downloads 88
1747 Genetic Algorithm for Solving the Flexible Job-Shop Scheduling Problem
Authors: Guilherme Baldo Carlos
Abstract:
The flexible job-shop scheduling problem (FJSP) is an NP-hard combinatorial optimization problem that can model applications in a wide array of industries. The importance of this problem will increase with the shift in production mode that modern society is going through, as demand grows for personalized and customized products. This work applies a meta-heuristic called the genetic algorithm (GA) to solve the problem. A GA is a meta-heuristic inspired by Charles Darwin’s principle of natural selection; it produces a population of individuals (solutions) and selects, mutates, and mates the individuals through generations in order to find a good solution to the problem. The results indicate that the GA is suitable for solving the FJSP.
Keywords: genetic algorithm, evolutionary algorithm, scheduling, flexible job-shop scheduling
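A minimal GA sketch for the FJSP routing sub-problem (choosing a capable machine for each operation) is given below; the instance, encoding, truncation selection and one-point crossover are illustrative assumptions, not the paper's exact operators.

```python
# Hedged sketch of a GA on the FJSP routing sub-problem: a chromosome picks
# one machine per operation; fitness is the bottleneck machine load (a
# lower bound on makespan). The tiny instance is invented for illustration.
import random

random.seed(42)

# proc[i][m] = processing time of operation i on machine m (3 machines).
proc = [[4, 6, 9], [7, 3, 5], [2, 8, 4], [6, 5, 3], [5, 4, 7], [3, 7, 6]]
N_MACH = 3

def makespan_lb(chrom):
    """Lower-bound makespan: the load of the busiest machine."""
    load = [0] * N_MACH
    for op, m in enumerate(chrom):
        load[m] += proc[op][m]
    return max(load)

def evolve(pop_size=30, generations=60, pmut=0.2):
    pop = [[random.randrange(N_MACH) for _ in proc] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan_lb)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(proc))  # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < pmut:            # mutation: reroute one op
                child[random.randrange(len(proc))] = random.randrange(N_MACH)
            children.append(child)
        pop = elite + children
    return min(pop, key=makespan_lb)

best = evolve()
print(makespan_lb(best))
```

A full FJSP solver would also encode operation sequencing on each machine; this sketch shows only the selection/crossover/mutation loop the abstract describes.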
Procedia PDF Downloads 147
1746 Text Data Preprocessing Library: Bilingual Approach
Authors: Kabil Boukhari
Abstract:
In the context of information retrieval, the selection of the most relevant words is a very important step. Text cleaning keeps only the most representative words for better use. In this paper, we propose a library for text preprocessing, within an implemented application, to facilitate this task. This study has two purposes. The first is to present related work on the various steps involved in text preprocessing, covering the segmentation, stemming and lemmatization algorithms that could be efficient in the rest of the study. The second is to implement a tool for text preprocessing in French and English. This library accepts unstructured text as input and provides the preprocessed text as output, based on a set of rules and on a base of stop words for both languages. The proposed library has been tested on different corpora and gave interesting results.
Keywords: text preprocessing, segmentation, knowledge extraction, normalization, text generation, information retrieval
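The pipeline stages named above (segmentation, stop-word removal, stemming) can be sketched in a few lines; the stop-word and suffix lists below are tiny illustrative samples for English and French, not the library's actual rule base.

```python
# Hedged sketch of a bilingual preprocessing pipeline: segment into tokens,
# drop stop words, then apply a deliberately naive suffix-stripping stemmer.
import re

STOP = {
    "en": {"the", "is", "a", "of", "and", "to", "in"},
    "fr": {"le", "la", "les", "de", "des", "et", "un", "une", "est"},
}
SUFFIXES = {"en": ("ing", "ed", "s"), "fr": ("ement", "tion", "er", "s")}

def segment(text):
    """Segmentation: lowercase and split into word tokens."""
    return re.findall(r"[a-zàâçéèêëîïôûù]+", text.lower())

def stem(token, lang):
    """Naive stemming: strip the first matching suffix (illustrative only)."""
    for suf in SUFFIXES[lang]:
        if token.endswith(suf) and len(token) > len(suf) + 2:
            return token[: -len(suf)]
    return token

def preprocess(text, lang="en"):
    return [stem(t, lang) for t in segment(text) if t not in STOP[lang]]

print(preprocess("The indexing of documents is a key step", "en"))
print(preprocess("La segmentation des textes est une étape", "fr"))
```

A production library would use full stemmers or lemmatizers (e.g. Porter-style rules) and complete stop-word bases; the sketch only mirrors the input/output contract the abstract describes.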
Procedia PDF Downloads 94
1745 Assessment of Psychomotor Development of Preschool Children: A Review of Eight Psychomotor Developmental Tools
Authors: Viola Hubačová Pirová
Abstract:
The assessment of psychomotor development allows us to identify children with motor delays, helps us to monitor progress over time and prepare suitable intervention programs. The foundation of psychomotor development lies in the pre-school age and is crucial for a child’s further cognitive and social development. Many assessment tools for psychomotor development have been created over the years. Some of them are simple screening tools; others are more complex and sophisticated. The purpose of this review is to describe the history of psychomotor assessment, specify the psychomotor evaluation of preschool children and review eight psychomotor development assessment tools for preschool children (Denver II, DEMOST-PRE, TGMD-2/3, BOT-2, MABC-2, PDMS-2, KTK, MOT 4-6). The selection of a test depends on the purpose and context in which the assessment is planned.
Keywords: assessment of psychomotor development, preschool children, psychomotor development, review of assessment tools
Procedia PDF Downloads 167
1744 Quality Assurance in Software Design Patterns
Authors: Rabbia Tariq, Hannan Sajjad, Mehreen Sirshar
Abstract:
Design patterns are widely used to ease the development process, as they greatly help developers build software. Many design patterns have been introduced to date, but the behavior of the same design pattern may differ across domains, which can lead to the wrong selection of a design pattern. The paper aims to discover the design patterns that suit each domain best, thereby helping developers choose an effective design pattern. It presents a comprehensive analysis of design patterns based on different methodologies, including simulation, case study and comparison of various algorithms. Because of domain differences, the methodology used in one domain may be inapplicable to another. The paper draws conclusions based on the strengths and limitations of each design pattern in its respective domain.
Keywords: design patterns, evaluation, quality assurance, software domains
Procedia PDF Downloads 521
1743 Fuzzy Rules Based Improved BEENISH Protocol for Wireless Sensor Networks
Authors: Rishabh Sharma
Abstract:
The main design parameter of a wireless sensor network (WSN) is energy consumption. To address this parameter, hierarchical clustering is a technique that helps extend the network’s lifetime by consuming energy efficiently. This paper deals with WSNs and the fuzzy inference system (FIS) deployed to enhance the BEENISH protocol. Node energy, mobility, pause time and density are considered for the selection of the cluster head (CH). The simulation outcomes show that the proposed system outperforms the traditional system with regard to energy utilization and the number of packets transmitted to the sink.
Keywords: wireless sensor network, sink, sensor node, routing protocol, fuzzy rule, fuzzy inference system
Procedia PDF Downloads 105
1742 Performance Analysis of ERA Using Fuzzy Logic in Wireless Sensor Network
Authors: Kamalpreet Kaur, Harjit Pal Singh, Vikas Khullar
Abstract:
In a wireless sensor network (WSN), the main limitation is generally the unavoidable energy consumption during processing at the sensor nodes. Cluster head (CH) election is one of the main issues that can reduce energy consumption. Therefore, discovering energy-saving routing protocols is a focused area of research. In this paper, a fuzzy-based energy-aware routing protocol is presented, which enhances the stability and lifetime of the network. Fuzzy logic ensures a well-organized selection of the CH by taking four linguistic variables: concentration, energy, centrality, and distance to the base station (BS). The results show that the proposed protocol performs better in terms of stability and throughput of the network.
Keywords: ERA, fuzzy logic, network model, WSN
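The fuzzy CH scoring can be sketched with triangular membership functions over the four linguistic variables named above; the membership parameters, rules and rule weights are invented for illustration and are not the protocol's actual fuzzy inference system.

```python
# Hedged sketch of Mamdani-style fuzzy scoring of cluster-head candidates.
# All inputs are normalised to [0, 1]; memberships and rules are invented.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def high(x):
    return tri(x, 0.4, 1.0, 1.6)   # degree to which x is "high"

def low(x):
    return tri(x, -0.6, 0.0, 0.6)  # degree to which x is "low"

def ch_chance(energy, concentration, centrality, dist_bs):
    # Rule 1: high energy AND high concentration -> strong candidate.
    r1 = min(high(energy), high(concentration))
    # Rule 2: well-centred (low centrality) AND close to BS -> strong candidate.
    r2 = min(low(centrality), low(dist_bs))
    # Aggregate rule strengths with a simple weighted defuzzification.
    return 0.6 * r1 + 0.4 * r2

good = ch_chance(energy=0.9, concentration=0.8, centrality=0.2, dist_bs=0.3)
poor = ch_chance(energy=0.2, concentration=0.3, centrality=0.9, dist_bs=0.9)
print(round(good, 3), round(poor, 3))
```

A node with high residual energy, dense neighbourhood and a short path to the BS scores markedly higher, which is the selection pressure the protocol relies on.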
Procedia PDF Downloads 279
1741 Investigation on Correlation of Earthquake Intensity Parameters with Seismic Response of Reinforced Concrete Structures
Authors: Semra Sirin Kiris
Abstract:
Nonlinear dynamic analysis is permitted to be used for structures without any restrictions. The important issue is the selection of the design earthquake for the analyses, since quite different responses may be obtained using ground motion records from the same general area, even ones resulting from the same earthquake. In seismic design codes, the method requires scaling earthquake records to a specified hazard level based on the site response spectrum. Many studies have indicated that this restriction on selection can cause a large scatter in response, and that other characteristics of the ground motion, obtained in a different manner, may demonstrate better correlation with the peak seismic response. For this reason, the influence of eleven different ground motion parameters on the peak displacement of reinforced concrete systems is examined in this paper. From 7020 nonlinear time history analyses of single-degree-of-freedom systems, the most effective earthquake parameters are given for the ranges of initial periods and strength ratios of the structures. In this study, a hysteresis model for reinforced concrete called Q-hyst is used, which does not take into account strength and stiffness degradation. The post-yielding to elastic stiffness ratio is taken as 0.15. The initial period, T, ranges from 0.1 s to 0.9 s with a 0.1 s interval, and three different strength ratios are used. The 260 selected earthquake records all have magnitudes greater than M = 6.
The earthquake parameters, related to the energy content, duration, or peak values of the ground motion records, are PGA (Peak Ground Acceleration), PGV (Peak Ground Velocity), PGD (Peak Ground Displacement), MIV (Maximum Incremental Velocity), EPA (Effective Peak Acceleration), EPV (Effective Peak Velocity), teff (Effective Duration), A95 (Arias intensity-based parameter), SPGA (Significant Peak Ground Acceleration), ID (Damage Factor), and Sa (Spectral Acceleration). Observing the correlation coefficients between the ground motion parameters and the peak displacement of the structures, different earthquake parameters play a role in peak displacement demand over the ranges formed by the different periods and strength ratios of the reinforced concrete systems. The influence of Sa tends to decrease for high values of the strength ratio and T = 0.3 s-0.6 s. ID and PGD are not suitable as measures of earthquake effect, since high correlation with displacement demand is not observed. The influence of A95 is high for T = 0.1 s but low for higher values of T and the strength ratio. PGA, EPA, and SPGA show the highest correlation for T = 0.1 s, but their effectiveness decreases with high T. Considering the whole range of structural parameters, MIV is the most effective parameter.
Keywords: earthquake parameters, earthquake resistant design, nonlinear analysis, reinforced concrete
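The correlation step described above can be sketched as a Pearson correlation between each candidate intensity parameter and the peak displacements across a record set. The toy numbers below are invented for illustration, not results from the 260-record study.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# One entry per record: a hypothetical intensity parameter value and the
# peak displacement obtained from nonlinear time-history analysis.
pga  = [0.12, 0.25, 0.31, 0.40, 0.55]    # g
miv  = [20.0, 45.0, 60.0, 80.0, 110.0]   # cm/s
disp = [0.8, 1.9, 2.5, 3.4, 4.6]         # cm

corr = {name: pearson(vals, disp) for name, vals in [("PGA", pga), ("MIV", miv)]}
best = max(corr, key=lambda k: abs(corr[k]))
```

Repeating this over every parameter, period, and strength ratio yields the correlation tables from which the paper's parameter rankings are read off.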
Procedia PDF Downloads 151
1740 Genomics of Aquatic Adaptation
Authors: Agostinho Antunes
Abstract:
The completion of human genome sequencing in 2003 opened a new perspective on the importance of whole genome sequencing projects, and currently multiple species are having their genomes completely sequenced, from simple organisms, such as bacteria, to more complex taxa, such as mammals. The voluminous sequencing data generated across multiple organisms also provides the framework to better understand the genetic makeup of such species and related ones, allowing exploration of the genetic changes underlying the evolution of diverse phenotypic traits. Here, recent results from our group, retrieved from comparative evolutionary genomic analyses of selected marine animal species, are considered to exemplify how gene novelty and gene enhancement by positive selection might have been determinant in the success of adaptive radiations into diverse habitats and lifestyles.
Keywords: comparative genomics, adaptive evolution, bioinformatics, phylogenetics, genome mining
Procedia PDF Downloads 533
1739 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements to an architecture carries risk, given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations, and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts are undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures where system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on sampled requirements, contribute to the overall system functionality, such that as they are impacted/degraded, the impacted functional availability of the system can be determined.
A reinforcement learning agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, yielding a metric that rates their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and an architectural recommendation against the baseline requirements, with existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
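The degradation-spectrum idea above can be sketched very simply: distribute capabilities across nodes, knock nodes out one at a time, and score what fraction of required system functions remains covered at each degradation level. The node names, capabilities, attack order, and the mean-availability metric below are illustrative assumptions, not the paper's actual model.

```python
nodes = {
    "edge-1":  {"sense", "preprocess"},
    "edge-2":  {"sense", "preprocess"},
    "fog-1":   {"aggregate"},
    "cloud-1": {"store", "analyze"},
    "cloud-2": {"store", "analyze"},
}
required = {"sense", "preprocess", "aggregate", "store", "analyze"}

def coverage(alive):
    """Fraction of required capabilities still provided by surviving nodes."""
    provided = set().union(*(nodes[n] for n in alive)) if alive else set()
    return len(required & provided) / len(required)

# Degrade in a fixed attack order and record availability at each level.
attack_order = ["fog-1", "cloud-1", "edge-1", "cloud-2", "edge-2"]
alive = set(nodes)
profile = [coverage(alive)]
for victim in attack_order:
    alive.discard(victim)
    profile.append(coverage(alive))

# A single resilience metric: mean availability across the degradation spectrum.
resilience = sum(profile) / len(profile)
```

An architecture whose profile stays high deep into the degradation spectrum (e.g., by replicating the single-point "aggregate" capability) scores a higher resilience metric than one that collapses early.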
Procedia PDF Downloads 108
1738 Fault Detection and Isolation in Attitude Control Subsystem of Spacecraft Formation Flying Using Extended Kalman Filters
Authors: S. Ghasemi, K. Khorasani
Abstract:
In this paper, the problem of fault detection and isolation in the attitude control subsystem of spacecraft formation flying is considered. In order to design the fault detection method, an extended Kalman filter, a nonlinear stochastic state estimation method, is utilized. Three fault detection architectures, namely centralized, decentralized, and semi-decentralized, are designed based on extended Kalman filters. Moreover, residual generation and threshold selection techniques are proposed for these architectures.
Keywords: formation flight of satellites, extended Kalman filter, fault detection and isolation, actuator fault
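The residual-generation and threshold-selection steps can be illustrated with a toy scalar example: an EKF tracks a nonlinear system, and the normalised innovation is tested against a threshold to flag an injected actuator fault. The dynamics, noise levels, fault size, and 3-sigma threshold are illustrative assumptions; the paper applies this machinery to spacecraft attitude dynamics.

```python
# Toy system: x_{k+1} = a*x_k + sin(x_k) + u_k,  z_k = x_k + noise.
import math, random

a, Q, R = 0.5, 1e-4, 1e-2   # dynamics gain, process / measurement noise variances
random.seed(0)

def step(x, u):
    return a * x + math.sin(x) + u

x_true, x_hat, P = 0.1, 0.1, 1.0
residuals = []
for k in range(60):
    u = 0.05
    fault = 0.5 if k >= 30 else 0.0          # additive actuator fault at k = 30
    x_true = step(x_true, u + fault) + random.gauss(0, math.sqrt(Q))
    z = x_true + random.gauss(0, math.sqrt(R))
    # EKF predict: linearise f around the estimate (F = a + cos(x_hat)).
    F = a + math.cos(x_hat)
    x_hat = step(x_hat, u)                   # filter does not know the fault
    P = F * P * F + Q
    # Residual (innovation) and its variance; normalise for thresholding.
    r, S = z - x_hat, P + R
    residuals.append(abs(r) / math.sqrt(S))
    # EKF update.
    K = P / S
    x_hat += K * r
    P = (1 - K) * P

threshold = 3.0   # roughly a 3-sigma test on the normalised residual
fault_flags = [nr > threshold for nr in residuals]
```

In the centralized, decentralized, and semi-decentralized architectures, the same residual/threshold logic is placed differently: one filter over the whole formation, one per spacecraft, or a mixture.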
Procedia PDF Downloads 434
1737 Effect of Inductance Ratio on Operating Frequencies of a Hybrid Resonant Inverter
Authors: Mojtaba Ghodsi, Hamidreza Ziaifar, Morteza Mohammadzaheri, Payam Soltani
Abstract:
In this paper, the performance of a medium power (25 kW/25 kHz) hybrid inverter with a reactive transformer is investigated. To analyze the sensitivity of the inverter, the response surface method (RSM) is employed to identify the factors in the inverter that are effective in minimizing the current passing through the Insulated-Gate Bipolar Transistors (IGBTs) (current stress). It is revealed that the ratio of the auxiliary inductance to the effective inductance of the resonant inverter, N, is the most effective parameter for minimizing the current stress in this type of inverter. In practice, proper selection of N reduces the current stress on the IGBTs by a factor of five. This reduction is very helpful in keeping the IGBTs at normal temperatures.
Keywords: analytical analysis, hybrid resonant inverter, reactive transformer, response surface method
Procedia PDF Downloads 207
1736 Performance Estimation of Small Scale Wind Turbine Rotor for Very Low Wind Regime Condition
Authors: Vilas Warudkar, Dinkar Janghel, Siraj Ahmed
Abstract:
The rapid development experienced by India requires a huge amount of energy. Actual supply capacity additions have been consistently lower than the targets set by the government. According to the World Bank, 40% of residences are without electricity. In the 12th five-year plan, 30 GW of grid-interactive renewable capacity is planned, of which 17 GW is wind, 10 GW is solar, and 2.1 GW is small hydro projects, with the rest covered by biogas. Renewable energy (RE) and energy efficiency (EE) not only meet environmental and energy security objectives, but can also play a crucial role in reducing chronic power shortages. In remote areas or areas with a weak grid, wind energy can be used for charging batteries or can be combined with a diesel engine to save fuel whenever wind is available. India, according to IEC 61400-1, belongs to class IV wind conditions; it is not possible to set up large-scale wind turbines at every location. So, the best choice is a small-scale wind turbine at lower height, which will have a good annual energy production (AEP). Based on the wind characteristics available at MANIT Bhopal, a rotor for a small-scale wind turbine is designed. Various airfoil data are reviewed for the selection of the airfoil in the blade profile. An airfoil suited to low wind conditions, i.e., low Reynolds numbers, is selected based on the coefficients of lift and drag and the angle of attack. For the design of the rotor blade, standard Blade Element Momentum (BEM) theory is implemented. The performance of the blade is estimated using BEM theory, in which the axial and angular induction factors are optimized using an iterative technique. Rotor performance is estimated for the designed blade specifically for low wind conditions. The power production of the rotor is determined at different wind speeds for a particular pitch angle of the blade. At a pitch of 15° and a velocity of 5 m/s, a good cut-in speed of 2 m/s is obtained and the power produced is around 350 W.
The tip speed ratio of the blade is taken as 6.5, for which the coefficient of performance of the rotor is calculated as 0.35, an acceptable value for a small-scale wind turbine. The Simple Load Model (SLM, IEC 61400-2) is also discussed to improve the structural strength of the rotor. In the SLM, the edgewise and flapwise moments, which cause bending stress at the root of the blade, are considered. The various load cases mentioned in IEC 61400-2 are calculated and checked against the partial safety factor of the wind turbine blade.
Keywords: annual energy production, Blade Element Momentum Theory, low wind conditions, selection of airfoil
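The iterative optimization of the axial (a) and angular (a') induction factors mentioned above follows the standard BEM fixed-point scheme at each blade element. The sketch below assumes constant lift/drag coefficients and an illustrative local tip speed ratio and solidity, not the designed blade's actual data.

```python
import math

def bem_induction(tsr_local, solidity, cl=1.0, cd=0.01, tol=1e-8, max_iter=500):
    """Iterate the standard BEM relations for one blade element."""
    a, a_prime = 0.3, 0.0                                     # initial guesses
    for _ in range(max_iter):
        phi = math.atan2(1 - a, (1 + a_prime) * tsr_local)    # inflow angle
        cn = cl * math.cos(phi) + cd * math.sin(phi)          # normal force coeff.
        ct = cl * math.sin(phi) - cd * math.cos(phi)          # tangential force coeff.
        a_new = 1.0 / (4 * math.sin(phi) ** 2 / (solidity * cn) + 1)
        ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (solidity * ct) - 1)
        if abs(a_new - a) < tol and abs(ap_new - a_prime) < tol:
            return a_new, ap_new
        a, a_prime = a_new, ap_new
    return a, a_prime

# Hypothetical mid-span station: local tip speed ratio 4.0, solidity 0.05.
a, a_prime = bem_induction(tsr_local=4.0, solidity=0.05)
```

Repeating this per element and integrating the elemental thrust and torque over the span gives the rotor power curve and the coefficient of performance reported above.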
Procedia PDF Downloads 337
1735 Analyzing the Effect of Ambient Temperature and Loads Power Factor on Electric Generator Power Rating
Authors: Ahmed Elsebaay, Maged A. Abu Adma, Mahmoud Ramadan
Abstract:
This study presents a technique clarifying the effect of ambient air temperature and load power factor deviating from standard values on electric generator power rating. The study introduces an optimized technique for selecting the correct electric generator power rating for a certain application and operating site ambient temperature. The de-rating factors due to these effects are calculated and applied to a generator to select its power rating accurately, in order to avoid unsafe operation and extend its lifetime. The information in this paper provides a simple, accurate, and general method for synchronous generator selection and eliminates common errors.
Keywords: ambient temperature, de-rating factor, electric generator, power factor
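The selection logic can be sketched as: compute a combined de-rating factor for the site, then divide the load by it to get the required nameplate rating. The de-rating coefficients, standard ambient (40 °C), and rated power factor (0.8) below are made-up placeholders; the paper derives the actual factors from machine thermal limits.

```python
def derating_factor(ambient_c, load_pf,
                    std_ambient=40.0, rated_pf=0.8,
                    temp_coeff=0.005):  # assumed 0.5% output loss per degC above standard
    """Combined de-rating for ambient temperature and load power factor."""
    temp_factor = 1.0 - temp_coeff * max(0.0, ambient_c - std_ambient)
    pf_factor = min(1.0, load_pf / rated_pf)  # pf below rated de-rates usable output
    return temp_factor * pf_factor

def required_rating_kva(load_kva, ambient_c, load_pf):
    """Nameplate kVA needed so the de-rated output still covers the load."""
    return load_kva / derating_factor(ambient_c, load_pf)

# A 500 kVA load at 50 degC ambient and 0.7 power factor needs a larger machine:
rating = required_rating_kva(500.0, ambient_c=50.0, load_pf=0.7)
```

Skipping this step and sizing the generator at standard conditions is exactly the "common error" the abstract warns against: the machine overheats once the site runs hotter or the load's power factor drops.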
Procedia PDF Downloads 358
1734 Exploring the Differences between Self-Harming and Suicidal Behaviour in Women with Complex Mental Health Needs
Authors: Sophie Oakes-Rogers, Di Bailey, Karen Slade
Abstract:
Female offenders are a uniquely vulnerable group who are at high risk of suicide. Whilst the prevention of self-harm and suicide remains a key global priority, we need to better understand the relationship between these challenging behaviours, which constitute a pressing problem, particularly in environments designed to prioritise safety and security. Method choice is unlikely to be random and is instead influenced by a range of cultural, social, psychological, and environmental factors, which change over time and between countries. A key aspect of self-harm and suicide in women receiving forensic care is the lack of free access to methods. At a time when self-harm and suicide rates continue to rise internationally, understanding the role of these influencing factors and the impact of current suicide prevention strategies on the use of near-lethal methods is crucial. This poster presentation will present findings from 25 interviews and 3 focus groups, which took a Participatory Action Research approach to exploring the differences between self-harming and suicidal behaviour. A key element of this research was using the lived experiences of women receiving forensic care in one forensic pathway in the UK, and of the staff who care for them, to discuss the role of near-lethal self-harm (NLSH). The findings and suggestions from the lived accounts of the women and staff will inform a draft assessment tool that better assesses the risk of suicide based on the lethality of methods. This tool will be the first of its kind to specifically capture the needs of women receiving forensic services. Preliminary findings indicate that women engage in NLSH for two key reasons, determined by their history of self-harm. Women who have a history of superficial, non-life-threatening self-harm appear to engage in NLSH in response to a significant life event such as family bereavement or sentencing.
For these women, suicide appears to be a realistic option to overcome their distress. This, however, differs from women who appear to have a lifetime history of NLSH, who engage in such behaviour in a bid to overcome the grief and shame associated with historical abuse. NLSH in these women reflects a lifetime of suicidality and indicates that they pose the greatest risk of completed suicide. Findings also indicate differences in method selection between forensic provisions. Restriction of means appears to play a role in method selection, and findings suggest it causes method substitution. Implications relating to the screening of female forensic patients and improvements to current suicide prevention strategies will be discussed.
Keywords: forensic mental health, method substitution, restriction of means, suicide
Procedia PDF Downloads 178
1733 Orthogonal Regression for Nonparametric Estimation of Errors-In-Variables Models
Authors: Anastasiia Yu. Timofeeva
Abstract:
Two new algorithms for nonparametric estimation of errors-in-variables models are proposed. The first algorithm is based on a penalized regression spline. The spline is represented as a piecewise-linear function, and for each linear portion an orthogonal regression is estimated. This algorithm is iterative. The second algorithm involves locally weighted regression estimation. When the independent variable is measured with error, such estimation is a complex nonlinear optimization problem. The simulation results have shown the advantage of the second algorithm under the assumption that the true smoothing parameter values are known. Nevertheless, using goodness-of-fit indices for smoothing parameter selection gives similar results but has an oversmoothing effect.
Keywords: grade point average, orthogonal regression, penalized regression spline, locally weighted regression
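The orthogonal regression estimated on each linear portion of the spline has a well-known closed form in the two-variable case: a total-least-squares (Deming, equal error variances) line fit minimising perpendicular rather than vertical distances. A minimal sketch, assuming equal x/y error variances:

```python
from statistics import mean

def orthogonal_fit(xs, ys):
    """Closed-form orthogonal (total least squares) line fit."""
    mx, my = mean(xs), mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Slope minimising the sum of squared perpendicular distances.
    slope = (syy - sxx + ((syy - sxx) ** 2 + 4 * sxy ** 2) ** 0.5) / (2 * sxy)
    return slope, my - slope * mx

# Noise-free check: points on y = 2x + 1 should be recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]
slope, intercept = orthogonal_fit(xs, ys)
```

Unlike ordinary least squares, this fit treats errors in x and y symmetrically, which is why it is the appropriate building block when the independent variable is itself measured with error.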
Procedia PDF Downloads 416
1732 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning
Authors: Saahith M. S., Sivakami R.
Abstract:
In the realm of football analytics, particularly in predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, followed by data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis involves delving into feature significance using methodologies like SelectKBest and Recursive Feature Elimination (RFE) to pinpoint attributes pertinent to predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), are explored to develop predictive models. Each model's performance is evaluated using metrics such as Mean Squared Error (MSE) and R-squared to gauge its efficacy in predicting player performance. Furthermore, the investigation encompasses a top-player analysis to recognize the top-performing players based on the predicted overall performance scores. Nationality analysis entails scrutinizing the player distribution by nationality and investigating potential correlations between nationality and player performance. Positional analysis concentrates on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis evaluates the influence of age on player performance and identifies any discernible trends or patterns associated with player age groups.
The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis
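The feature-scoring idea behind SelectKBest can be illustrated without any ML library: rank candidate player attributes by the absolute correlation of each with the overall performance score and keep the top k. The attribute names and toy numbers below are invented for illustration, not drawn from the study's dataset.

```python
from statistics import mean, stdev

def pearson(xs, ys):
    """Pearson correlation (assumes non-constant samples)."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / ((len(xs) - 1) * stdev(xs) * stdev(ys))

players = {                              # attribute -> values per player
    "finishing": [88, 70, 92, 60, 81],
    "stamina":   [70, 71, 69, 70, 72],   # little spread, weakly related to overall
    "passing":   [85, 66, 90, 58, 79],
}
overall = [87, 69, 91, 61, 80]           # target: overall performance score

def select_k_best(features, target, k):
    """Keep the k features most correlated (in absolute value) with the target."""
    scored = sorted(features,
                    key=lambda f: abs(pearson(features[f], target)),
                    reverse=True)
    return scored[:k]

top2 = select_k_best(players, overall, k=2)
```

Univariate scoring like this is cheap but ignores feature interactions, which is why the study pairs it with RFE, where features are dropped iteratively based on a fitted model's weights.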
Procedia PDF Downloads 38