Search results for: multiple input multiple output
5909 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation
Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony
Abstract:
Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to use evolutionary computation to address problems comprising multiple conflicting objectives with no clear optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary stochastic search are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach in design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the ability of the simulation to maintain variation among design solutions in the population while simultaneously increasing in fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance lies in the tendency for either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain high levels of variation from which an optimal set could not be discerned; thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion into the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that most of the population is clustered around the mean, and thus limited variation within the population, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which the standard deviation, used as a fitness criterion, can be advantageous in generating fitter individuals in a more efficient timeframe when compared to conventional simulations that incorporate only architectural and environmental parameters.
Keywords: architecture, computation, evolution, standard deviation, urban
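As a concrete illustration of the idea, the spread of fitness scores in a population can itself be scored, rewarding populations whose variation stays near a chosen target. The sketch below is hypothetical: the function name, target spread, and scoring rule are illustrative assumptions, not the paper's implementation.

```python
import statistics

def std_dev_fitness(population_scores, target_sd):
    """Reward a population whose spread of scores sits near a target
    standard deviation: 0 is best, more negative is worse. A hypothetical
    way to keep variation alive while fitness climbs."""
    sd = statistics.pstdev(population_scores)
    return -abs(sd - target_sd)

# A nearly converged population vs. one that kept useful variation
converged = [10.0, 10.1, 9.9, 10.0]
varied = [8.0, 10.0, 12.0, 14.0]
score_converged = std_dev_fitness(converged, target_sd=2.0)
score_varied = std_dev_fitness(varied, target_sd=2.0)  # closer to 0, so better
```

Used alongside the architectural and environmental objectives, such a term would penalize both premature convergence and unbounded variation.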
Procedia PDF Downloads 133
5908 Theoretical and ML-Driven Identification of a Mispriced Credit Risk
Authors: Yuri Katz, Kun Liu, Arunram Atmacharan
Abstract:
Due to illiquidity, mispricing on credit markets is inevitable. This creates huge challenges for banks and investors as they seek new ways of risk valuation and portfolio management in a post-credit-crisis world. Here, we analyze the difference in behavior of the spread-to-maturity between the investment-grade and high-yield categories of US corporate bonds between 2014 and 2023. Deviation from the theoretical dependency of this measure in the universe under study allows us to identify multiple cases of mispriced credit risk. Remarkably, we observe mispriced bonds in both categories of credit ratings. This identification is supported by the application of a state-of-the-art machine learning model in more than 90% of cases. Noticeably, the ML-driven, model-based forecasting of a bond's credit rating category demonstrates excellent out-of-sample accuracy (AUC = 98%). We believe that these results can augment conventional valuations of credit portfolios.
Keywords: credit risk, credit ratings, bond pricing, spread-to-maturity, machine learning
Procedia PDF Downloads 80
5907 Optimization of Electrical Discharge Machining Parameters in Machining AISI D3 Tool Steel by Grey Relational Analysis
Authors: Othman Mohamed Altheni, Abdurrahman Abusaada
Abstract:
This study presents optimization of multiple performance characteristics [material removal rate (MRR), surface roughness (Ra), and overcut (OC)] of hardened AISI D3 tool steel in electrical discharge machining (EDM) using the Taguchi method and Grey relational analysis. The machining process parameters selected were pulsed current Ip, pulse-on time Ton, pulse-off time Toff, and gap voltage Vg. Based on ANOVA, pulse current is found to be the most significant factor affecting the EDM process. Optimized process parameters, simultaneously leading to a higher MRR, lower Ra, and lower OC, are then verified through a confirmation experiment. The validation experiment shows improved MRR, Ra, and OC when the Taguchi method and Grey relational analysis were used.
Keywords: EDM parameters, grey relational analysis, Taguchi method, ANOVA
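Grey relational analysis can be sketched in a few lines: each response column is normalized (larger-the-better for MRR, smaller-the-better for Ra and OC), deviations are turned into grey relational coefficients, and the coefficients are averaged into one grade per experimental run. The run values and the distinguishing coefficient ζ = 0.5 below are illustrative assumptions, not the study's data.

```python
def grey_relational_grades(responses, larger_better, zeta=0.5):
    """Grey relational analysis sketch: normalize each response column,
    convert deviations to grey relational coefficients, and average them
    into one grade per experimental run (higher = better overall)."""
    cols = list(zip(*responses))
    norm_cols = []
    for col, lb in zip(cols, larger_better):
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        norm_cols.append([(x - lo) / span if lb else (hi - x) / span
                          for x in col])
    grades = []
    for row in zip(*norm_cols):
        # deviation from the ideal is 1 - v; with dmin = 0 and dmax = 1
        # the coefficient reduces to zeta / (deviation + zeta)
        coeffs = [zeta / ((1.0 - v) + zeta) for v in row]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

# Hypothetical EDM runs: [MRR, Ra, OC]; MRR larger-better, Ra/OC smaller
runs = [[12.0, 3.2, 0.08], [18.0, 2.5, 0.05], [9.0, 4.1, 0.10]]
grades = grey_relational_grades(runs, larger_better=[True, False, False])
best_run = max(range(len(runs)), key=grades.__getitem__)  # run 1 wins on all three
```

The run with the highest grade is the multi-response optimum that a confirmation experiment would then verify.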
Procedia PDF Downloads 294
5906 Class-Size and Instructional Materials as Correlates of Pupils' Learning and Academic Achievement in Primary School
Authors: Aanuoluwapo Olusola Adesanya, Adesina Joseph
Abstract:
This paper examined class-size and instructional materials as correlates of pupils' learning and academic achievement in primary school. The population of the study comprised 198 primary school pupils in three selected schools in Ogun State, Nigeria. Data were collected through a questionnaire and analysed using multiple regression and ANOVA to examine the correlation between class-size and instructional materials (independent variables) and learning achievement (dependent variable). The findings revealed that schools with an average class-size of 30 or below that used instructional materials obtained better results than schools with class-sizes above 30. Mean scores were higher in schools with class-sizes of 30 or below than in schools above 30. It was therefore recommended that government, stakeholders, and NGOs provide more classrooms and supply adequate instructional materials to all primary schools in the state to support small class-sizes.
Keywords: class-size, instructional materials, learning, academic achievement
Procedia PDF Downloads 350
5905 Information Exchange Process Analysis between Authoring Design Tools and Lighting Simulation Tools
Authors: Rudan Xue, Annika Moscati, Rehel Zeleke Kebede, Peter Johansson
Abstract:
Successful building simulation and analysis inevitably require information exchange between multiple building information modeling (BIM) software tools. BIM information exchange based on IFC is widely used. However, Industry Foundation Classes (IFC) files are not always reliable, and information can get lost when using different software for modeling and simulations. In this research, interviews with lighting simulation experts and a case study provided by a company producing lighting devices were the research methods used to identify the necessary steps and data for successful information exchange between lighting simulation tools and authoring design tools. Model creation, information exchange, and model simulation have been identified as key aspects for the success of information exchange. The paper concludes with recommendations for improved information exchange and more reliable simulations that take all the needed parameters into consideration.
Keywords: BIM, data exchange, interoperability issues, lighting simulations
Procedia PDF Downloads 241
5904 Scaling Strategy of a New Experimental Rig for Wheel-Rail Contact
Authors: Meysam Naeimi, Zili Li, Rolf Dollevoet
Abstract:
A new small-scale test rig was developed for rolling contact fatigue (RCF) investigations in wheel-rail material. This paper presents the scaling strategy of the rig based on dimensional analysis and mechanical modelling. The new experimental rig is a spinning frame structure with multiple wheel components running over a fixed rail-track ring, capable of simulating continuous wheel-rail contact at laboratory scale. This paper describes the dimensional design of the rig, derives its overall scaling strategy, and determines the key elements' specifications. Finite element (FE) modelling is used to simulate the mechanical behavior of the rig with two sample scale factors of 1/5 and 1/7. The results of the FE models are compared with the actual railway system to assess the effectiveness of the chosen scales. The mechanical properties of the components and variables of the system are finally determined through the design process.
Keywords: new test rig, rolling contact fatigue, rail, small scale
Procedia PDF Downloads 485
5903 Determinants of Life Satisfaction in Canada: A Causal Modelling Approach
Authors: Rose Branch-Allen, John Jayachandran
Abstract:
Background and purpose: Canada is a pluralistic, multicultural society with an ethno-cultural composition that has been shaped over time by immigrants and their descendants. Although Canada welcomes these immigrants, many will endure hardship and assimilation difficulties. Despite these life hurdles, surveys consistently disclose high life satisfaction for all Canadians. Most research studies on life satisfaction/subjective wellbeing (SWB) have focused on one main determinant and a variety of socio-demographic variables to delineate the determinants of life satisfaction. However, very few research studies examine life satisfaction from a holistic approach. In addition, we need to understand the causal pathways leading to life satisfaction and develop theories that explain why certain variables differentially influence the different components of SWB. The aim of this study was to utilize a holistic approach to construct a causal model and identify major determinants of life satisfaction. Data and measures: This study utilized data from the General Social Survey, with a sample size of 19,597. The exogenous concepts included age, gender, marital status, household size, socioeconomic status, ethnicity, location, immigration status, religiosity, and neighborhood. The intervening concepts included health, social contact, leisure, enjoyment, work-family balance, quality time, domestic labor, and sense of belonging. The endogenous concept, life satisfaction, was measured by multiple indicators (Cronbach's alpha = .83). Analysis: Several multiple regression models were run sequentially to estimate path coefficients for the causal model. Results: Overall, above-average satisfaction with life was reported for respondents with specific socio-economic, demographic, and lifestyle characteristics.
With regard to exogenous factors, respondents who were female, younger, married, from a high socioeconomic status background, born in Canada, very religious, and who demonstrated a high level of neighborhood interaction had greater satisfaction with life. Similarly, the intervening concepts suggested respondents had greater life satisfaction if they had better health, more social contact, less time on passive leisure activities and more time on active leisure activities, more time with family and friends, more enjoyment of volunteer activities, less time on domestic labor, and a greater sense of belonging to the community. Conclusions and implications: Our results suggest that a holistic approach is necessary for establishing the determinants of life satisfaction, and that life satisfaction is not merely comprised of positive or negative affect; rather, it requires understanding the causal process underlying life satisfaction. Even though most of our findings are consistent with previous studies, a significant number of causal connections contradict some of the findings in the literature today. We have provided possible explanations for the anomalies researchers encounter in studying life satisfaction, as well as policy implications.
Keywords: causal model, holistic approach, life satisfaction, socio-demographic variables, subjective well-being
Procedia PDF Downloads 357
5902 Time-Parameter-Based Detection of Catastrophic Faults in Analog Circuits
Authors: Arabi Abderrazak, Bourouba Nacerdine, Ayad Mouloud, Belaout Abdeslam
Abstract:
In this paper, a new test technique for analog circuits using time-mode simulation is proposed for the detection of single catastrophic faults in analog circuits. This test process is performed to overcome the problem of catastrophic faults escaping a DC-mode test applied to the inverter amplifier in previous research works. The circuit under test is a second-order low-pass filter constructed around this type of amplifier but performing a function that differs from that of the previous test. The test approach performed in this work is based on two key elements: the first concerns the unique square pulse signal selected as the input test vector to stimulate the fault effect at the circuit output response. The second element is the conversion of the filter response to a sequence of square pulses obtained from an analog comparator. This signal conversion is achieved through a fixed reference threshold voltage of the comparison circuit. The measurement of the durations of the first three response signal pulses is regarded as a fault detection parameter on one hand, and as a fault signature helping to fully establish an analog circuit fault diagnosis on the other hand. The results obtained so far are very promising, since the approach has lifted the fault coverage ratio in both modes to over 90% and has revealed the harmful side of faults that had been masked in a DC-mode test.
Keywords: analog circuits, analog fault diagnosis, catastrophic faults, fault detection
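The signature-extraction step, thresholding the filter response into a pulse train and measuring the first three pulse durations, can be sketched as follows. The trace, threshold, and sampling step are invented for illustration and stand in for the comparator output described in the abstract.

```python
def pulse_durations(samples, threshold, dt, n_pulses=3):
    """Threshold a sampled response into a binary pulse train and return
    the durations of the first n_pulses 'high' pulses; the first three
    durations serve here as the fault signature."""
    durations, run = [], 0
    for high in (s > threshold for s in samples):
        if high:
            run += 1
        elif run:
            durations.append(run * dt)
            run = 0
            if len(durations) == n_pulses:
                break
    if run and len(durations) < n_pulses:  # trailing pulse still open
        durations.append(run * dt)
    return durations[:n_pulses]

# Invented comparator output sampled every 1.0 ms (dt in ms)
trace = [0, 2, 2, 2, 0, 0, 2, 2, 0, 2, 2, 2, 2, 0]
signature = pulse_durations(trace, threshold=1.0, dt=1.0)
```

A faulty circuit would shift these durations relative to the fault-free reference, flagging the fault and contributing to its diagnosis.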
Procedia PDF Downloads 442
5901 Effect of Variable Fluxes on Optimal Flux Distribution in a Metabolic Network
Authors: Ehsan Motamedian
Abstract:
Finding all optimal flux distributions of a metabolic model is an important challenge in systems biology. In this paper, a new algorithm is introduced to identify all alternate optimal solutions of a large-scale metabolic network. The algorithm reduces the model to decrease the computation required for finding optimal solutions. The algorithm was implemented on the Escherichia coli metabolic model to find all optimal solutions for lactate and acetate production. There were more optimal flux distributions when acetate production was optimized. The model was reduced from 1076 to 80 variable fluxes for lactate, while it was reduced to 91 variable fluxes for acetate. These 11 additional variable fluxes resulted in about three times more optimal flux distributions. The variable fluxes came from 12 different metabolic pathways, and most of them belonged to nucleotide salvage and extracellular transport pathways.
Keywords: flux variability, metabolic network, mixed-integer linear programming, multiple optimal solutions
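The notion of a variable flux can be illustrated with a flux-variability sketch on a toy network: fix the objective at its optimum, then minimize and maximize each flux; any flux with a non-zero range admits alternate optima. This is a generic illustration (toy stoichiometry, SciPy's LP solver), not the paper's reduction algorithm or the E. coli model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network, reactions v1..v6: src->A, A->B, A->C, B->P, C->P, P->sink
# Steady state requires S @ v = 0 for internal metabolites A, B, C, P.
S = np.array([
    [1, -1, -1,  0,  0,  0],   # A
    [0,  1,  0, -1,  0,  0],   # B
    [0,  0,  1,  0, -1,  0],   # C
    [0,  0,  0,  1,  1, -1],   # P
], dtype=float)
bounds = [(0, 10)] * 6
obj = np.zeros(6)
obj[5] = 1.0                   # maximize product export v6

opt = linprog(-obj, A_eq=S, b_eq=np.zeros(4), bounds=bounds)
z_star = -opt.fun              # optimal objective value

# Pin the objective at its optimum, then min/max each flux in turn;
# a non-zero range marks a variable flux (alternate optima exist).
A_fix = np.vstack([S, obj])
b_fix = np.append(np.zeros(4), z_star)
variable = []
for i in range(6):
    c = np.zeros(6)
    c[i] = 1.0
    lo = linprog(c, A_eq=A_fix, b_eq=b_fix, bounds=bounds).fun
    hi = -linprog(-c, A_eq=A_fix, b_eq=b_fix, bounds=bounds).fun
    if hi - lo > 1e-6:
        variable.append(i)     # the two branch pathways trade off freely
```

Here the A→B→P and A→C→P branches can split the flow arbitrarily at the optimum, which is exactly the kind of variability that multiplies the number of optimal flux distributions.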
Procedia PDF Downloads 434
5900 Delegation or Assignment: Registered Nurses' Ambiguity in Interpreting Their Scope of Practice in Long Term Care Settings
Authors: D. Mulligan, D. Casey
Abstract:
Introductory statement: Delegation is when a registered nurse (RN) transfers a task or activity that is normally within their scope of practice to another person (the delegatee). RN delegation is common practice with unregistered staff, e.g., student nurses and healthcare assistants (HCAs). As the role of the HCA is increasingly embedded as a direct care and support role, especially in long-term residential care for older adults, there is RN uncertainty as to their role as a delegator. Assignment is when a task is transferred to a person for whom the task is within their role specification. RNs in long-term care (LTC) for older people are increasingly working in teams where there are fewer RNs and more HCAs providing direct care to the residents. The RN is responsible and accountable for their decision to delegate and assign tasks to HCAs. In an interpretive multiple case study exploring how delegation of tasks by RNs to HCAs occurred in long-term care settings in Ireland, the importance of the RN understanding their scope of practice emerged. Methodology: Focus group interviews and individual interviews were undertaken as part of a multiple case study. Both cases, anonymized as Case A and Case B, were within the public health service in Ireland. The case study sites were long-term care settings for older adults located in different social care divisions and in different geographical areas. Four focus group interviews with staff nurses and three individual interviews with CNMs were undertaken. The interactive data analysis approach was the analytical framework used, with within-case and cross-case analysis. The theoretical lens of organizational role theory, applying the role episode model (REM), was used to understand, interpret, and explain the findings. Study findings: RNs and CNMs understood the role of the nurse regulator and the scope of practice. RNs understood that the RN was accountable for the care and support provided to residents.
However, RNs and CNM2s could not describe delegation in the context of their scope of practice. In both cases, the RNs did not have a standardized process for assessing HCA competence to undertake nursing tasks or interventions. RNs did not routinely supervise HCAs. Tasks were assigned, not delegated. There were differences between the cases in relation to understanding which nursing tasks required delegation. HCAs in Case A undertook clinical vital sign assessments and documentation; HCAs in Case B did not routinely undertake these activities. Delegation and assignment were influenced by organizational factors, e.g., the model of care, absence of delegation policies, inadequate RN education on delegation, and a lack of RN and HCA role clarity. Concluding statement: Nurse staffing levels and skill mix in long-term care settings continue to change, with more HCAs providing more direct care and support. With decreasing RN staffing levels, RNs will be required to delegate and assign more direct care to HCAs. There is a requirement to distinguish between RN assignment and delegation at policy, regulation, and organizational levels.
Keywords: assignment, delegation, registered nurse, scope of practice
Procedia PDF Downloads 153
5899 Message Passing Neural Network (MPNN) Approach to Multiphase Diffusion in Reservoirs for Well Interconnection Assessments
Authors: Margarita Mayoral-Villa, J. Klapp, L. Di G. Sigalotti, J. E. V. Guzmán
Abstract:
Automated learning techniques are widely applied in the energy sector to address challenging problems from a practical point of view. To this end, we discuss the implementation of a message passing algorithm (MPNN) within a graph neural network (GNN) to leverage the neighborhood of a set of nodes during the aggregation process. This approach enables the characterization of multiphase diffusion processes in the reservoir, such that the flow paths underlying the interconnections between multiple wells may be inferred from previously available data on flow rates and bottomhole pressures. The results thus obtained compare favorably with the predictions produced by the reduced-order capacitance-resistance models (CRM) and suggest the potential of MPNNs to enhance the robustness of the forecasts while improving the computational efficiency.
Keywords: multiphase diffusion, message passing neural network, well interconnection, interwell connectivity, graph neural network, capacitance-resistance models
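A single message-passing round, in which each node aggregates its neighbors' features before a learned update, can be sketched in NumPy. The graph, feature sizes, mean aggregation, and ReLU update below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def message_passing_step(h, adj, w_self, w_nbr):
    """One message-passing round: each node mean-aggregates its
    neighbors' features, then applies a ReLU update that mixes the
    node's own state with the aggregated messages."""
    deg = adj.sum(axis=1, keepdims=True)
    messages = adj @ h / np.maximum(deg, 1.0)        # mean over neighbors
    return np.maximum(h @ w_self + messages @ w_nbr, 0.0)

# Hypothetical 4-well graph: edges mark candidate interconnections;
# node features could be flow rates / bottomhole pressures per well.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))
w_self = rng.normal(size=(3, 3))
w_nbr = rng.normal(size=(3, 3))
h1 = message_passing_step(h, adj, w_self, w_nbr)     # updated node states
```

Stacking several such rounds lets information propagate beyond immediate neighbors, which is how interwell connectivity further out in the graph can influence a well's forecast.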
Procedia PDF Downloads 149
5898 The Roles of Pay Satisfaction and Intent to Leave on Counterproductive Work Behavior among Non-Academic University Employees
Authors: Abiodun Musbau Lawal, Sunday Samson Babalola, Uzor Friday Ordu
Abstract:
The issue of counterproductive work behavior among employees in government-owned organizations in emerging economies continues to be a major concern. This study investigated pay satisfaction, intent to leave, and age as predictors of counterproductive work behavior among non-academic employees in a Nigerian federal government-owned university. A sample of 200 non-academic employees completed questionnaires. Hierarchical multiple regression was conducted to determine the contribution of each of the predictor variables to the criterion variable, counterproductive work behavior. Results indicate that age of participants (β = -.18; p < .05) significantly and independently predicted CWB, accounting for 3% of the explained variance. The addition of pay satisfaction (β = -.14; p < .05) significantly accounted for 5% of the explained variance, while intent to leave (β = -.17; p < .05) further resulted in 8% of the explained variance in counterproductive work behavior. The importance of these findings with regard to the reduction of counterproductive work behavior is highlighted.
Keywords: counterproductive work behaviour, pay satisfaction, intent to leave
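Hierarchical multiple regression enters predictors in blocks and reads off the increment in explained variance (R²) at each step. The sketch below uses synthetic data; the coefficients and variable names are invented for illustration and do not reproduce the study's estimates.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with an intercept (lstsq-based sketch)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic stand-ins for the study's variables (invented effect sizes)
rng = np.random.default_rng(1)
n = 200
age = rng.normal(size=n)
pay_satisfaction = rng.normal(size=n)
intent_to_leave = rng.normal(size=n)
cwb = (-0.3 * age - 0.25 * pay_satisfaction - 0.3 * intent_to_leave
       + rng.normal(scale=1.0, size=n))

# Enter predictors in blocks, as in hierarchical regression
blocks = [np.column_stack([age]),
          np.column_stack([age, pay_satisfaction]),
          np.column_stack([age, pay_satisfaction, intent_to_leave])]
r2 = [r_squared(X, cwb) for X in blocks]
delta_r2 = [r2[0]] + [b - a for a, b in zip(r2, r2[1:])]  # increment per step
```

Each entry of `delta_r2` is the additional variance explained by the newly entered block, which is the quantity the abstract reports at each step.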
Procedia PDF Downloads 385
5897 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are only software implementations and therefore have many limitations. A microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing of instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes. Moreover, every reduct contains all the attributes from the core. In this paper, the hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input. The output of the algorithm is the superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the first stage may be unnecessary if the core is empty. But for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit for controlling the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC. The execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
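The two stages can be sketched in software terms: singleton entries of the discernibility matrix yield the core, and a greedy loop then adds attributes until every pair of objects with different decisions is discerned. The ranking used below (frequency in still-undiscerned matrix entries) is one possible reading of "most common attribute", not necessarily the authors' exact measure.

```python
from collections import Counter
from itertools import combinations

def core_and_superreduct(rows, n_cond):
    """Two-stage sketch. rows: tuples of n_cond condition attribute
    values followed by one decision value."""
    pairs = [(a, b) for a, b in combinations(rows, 2) if a[n_cond] != b[n_cond]]
    # Discernibility matrix entry: attributes on which the pair differs
    entries = [[i for i in range(n_cond) if a[i] != b[i]] for a, b in pairs]
    # Stage 1: singleton entries ('singleton detector') give the core
    core = {e[0] for e in entries if len(e) == 1}
    # Stage 2: greedily add the attribute occurring most often in the
    # still-undiscerned entries until every pair is discerned
    chosen = set(core)
    while True:
        unresolved = [e for e in entries if not chosen.intersection(e)]
        if not unresolved:
            break
        counts = Counter(i for e in unresolved for i in e)
        chosen.add(counts.most_common(1)[0][0])
    return core, chosen

# Tiny decision table: 3 condition attributes + decision; the core is
# empty here, so stage 2 does all the work (disadvantage ii in the text)
table = [(0, 0, 0, 'n'), (1, 1, 0, 'y'), (0, 1, 1, 'y'), (1, 0, 1, 'n')]
core, superreduct = core_and_superreduct(table, 3)
```

In the FPGA version, stage 1 maps to combinational logic (comparators plus the singleton detector) while the iterative stage 2 requires the sequential control circuit described above.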
Procedia PDF Downloads 219
5896 Restoration and Conservation of Historical Textiles Using Covalently Immobilized Enzymes on Nanoparticles
Authors: Mohamed Elbehery
Abstract:
Historical textiles in the burial environment or in museums are exposed to many types of stains and dirt that bind to the textiles through multiple chemical bonds and cause damage. The cleaning process must be carried out with great care, with no irreversible damage, and with sediments removed without affecting the original material of the surface being cleaned. Science and technology continue to provide innovative systems for the bio-cleaning (using pure enzymes) of historical textiles and artistic surfaces. Lipase and α-amylase were immobilized on nanoparticles of an alginate/κ-carrageenan nanoparticle complex and used in cleaning historical textiles. The preparation of the nanoparticles, their activation, and the enzyme immobilization were characterized. The loading time and units of the two enzymes were optimized: the optimum time and units for amylase were 4 hrs and 25 U, respectively, while the optimum time and units for lipase were 3 hrs and 15 U, respectively. The fibers were examined using a scanning electron microscope equipped with an energy-dispersive X-ray unit (SEM with EDX).
Keywords: nanoparticles, enzymes, immobilization, textiles
Procedia PDF Downloads 99
5895 Noise and Thermal Analyses of Memristor-Based Phase Locked Loop Integrated Circuit
Authors: Naheem Olakunle Adesina
Abstract:
The memristor is considered one of the promising candidates for nanoelectronic engineering and applications. Owing to its high compatibility with CMOS, nanoscale size, and low power consumption, the memristor has been employed in the design of commonly used circuits such as the phase-locked loop (PLL). In this paper, we designed a memristor-based loop filter (LF) together with the other components of a PLL. Following this, we evaluated the noise-rejection feature of the loop filter by comparing the noise levels of its input and output signals. Our SPICE simulation results showed that the memristor behaves like a linear resistor at high frequencies. The results also showed that the loop filter blocks the high-frequency components from the phase frequency detector so as to provide a stable control voltage to the voltage-controlled oscillator (VCO). In addition, we examined the effects of temperature on the performance of the designed phase-locked loop circuit. A critical temperature, at which the VCO frequency drifts as a result of variations in the control voltage, is identified. In conclusion, the memristor is a suitable choice for nanoelectronic systems owing to its small area, low power consumption, density, high switching speed, and endurance. The proposed memristor-based loop filter, together with the other components of the phase-locked loop, can be designed using a memristive emulator and EDA tools in current CMOS technology and simulated.
Keywords: fast Fourier transform, hysteresis curve, loop filter, memristor, noise, phase locked loop, voltage controlled oscillator
Procedia PDF Downloads 186
5894 The Factors Predicting Credibility of News in Social Media in Thailand
Authors: Ekapon Thienthaworn
Abstract:
This research aims to study the factors predicting the credibility of news in social media, using survey research methods with questionnaires. The sample was a group of 400 undergraduate students in Bangkok, selected by multiple-step random sampling. Data were analysed with descriptive statistics and multivariate regression analysis. The research found an intermediate level of overall trust in reading news on social media. The multivariate regression analysis showed that media content was the only factor with significant power to predict the credibility of news for undergraduate students in Bangkok reading news on social media (p < 0.05). Media content thus had the highest influence on the perceived credibility of news in social media, with speed also being important for the perceived reliability of the news.
Keywords: credibility of news, behaviors and attitudes, social media, web board
Procedia PDF Downloads 468
5893 Execution of Optimization Algorithm in Cascaded H-Bridge Multilevel Inverter
Authors: M. Suresh Kumar, K. Ramani
Abstract:
This paper proposes harmonic elimination in a cascaded H-bridge multilevel inverter using the Selective Harmonic Elimination Pulse Width Modulation (SHE-PWM) method programmed with the Particle Swarm Optimization (PSO) algorithm. The PSO method efficiently determines the switching angles required to eliminate low-order harmonics up to the 11th order from the inverter output voltage waveform while keeping the magnitude of the fundamental at the desired value. Results demonstrate that the proposed method efficiently eliminates a great number of specific harmonics, and the output voltage exhibits minimum total harmonic distortion. The results show that the PSO algorithm attains the global solution faster than other algorithms.
Keywords: multi-level inverter, selective harmonic elimination pulse width modulation (SHEPWM), particle swarm optimization (PSO), total harmonic distortion (THD)
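The essence of SHE-PWM via PSO, searching for switching angles that zero selected harmonics while holding the fundamental at a target, can be sketched as follows. The cost function, swarm parameters, and the choice of three angles eliminating only the 5th and 7th harmonics are illustrative assumptions, simplified from the paper's elimination of harmonics up to the 11th order.

```python
import math
import random

def she_cost(angles, m_index, harmonics=(5, 7)):
    """SHE residual sketch for s H-bridge cells with quarter-wave
    symmetry: drive the fundamental to m_index * s and the selected
    low-order harmonics to zero."""
    s = len(angles)
    fund = sum(math.cos(a) for a in angles) - m_index * s
    harm = sum(sum(math.cos(k * a) for a in angles) ** 2 for k in harmonics)
    return fund ** 2 + harm

def pso(cost, dim, n=30, iters=300, lo=0.0, hi=math.pi / 2, seed=7):
    """Minimal particle swarm: inertia 0.7, cognitive/social weights
    1.5, positions clamped to [lo, hi]."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p) for p in pos]
    g = pbest[min(range(n), key=pcost.__getitem__)][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (g[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < cost(g):
                    g = pos[i][:]
    return sorted(g), cost(g)

# Three cells, modulation index 0.8, eliminating the 5th and 7th
angles, residual = pso(lambda a: she_cost(a, m_index=0.8), dim=3)
```

A residual near zero means the chosen harmonics are (numerically) eliminated while the fundamental meets its target; the real design extends the harmonic list and angle count to match the inverter's cell count.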
Procedia PDF Downloads 603
5892 Fast Tumor Extraction Method Based on Nl-Means Filter and Expectation Maximization
Authors: Sandabad Sara, Sayd Tahri Yassine, Hammouch Ahmed
Abstract:
The development of science has allowed computer scientists to reach into medicine and bring aid to radiologists, as we present in this article. Our work focuses on the detection and localization of tumor areas in the human brain, completely automatically and without any human intervention. Faced with the huge volume of MRI scans to be processed per day, the radiologist can spend hours and hours making a tremendous effort. This burden becomes less heavy with the automation of this step. In this article we present an automatic and effective tumor detection method consisting of two steps: the first is image filtering using the NL-means filter; the second applies the expectation maximization (EM) algorithm to retrieve the tumor mask from the brain MRI, after which the tumor area is extracted using the mask obtained in the second step. To prove the effectiveness of this method, multiple evaluation criteria are used, so that we can compare our method to extraction methods frequently used in the literature.
Keywords: MRI, EM algorithm, brain, tumor, NL-means
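The EM step can be illustrated on 1-D intensities: fit a two-component Gaussian mixture and take the posterior of the brighter component as the tumor mask. The synthetic intensities below are invented for illustration; the paper works on full MRI volumes after NL-means filtering.

```python
import math
import random

def em_two_gaussians(values, iters=50):
    """1-D two-component EM sketch: separate dark (background) and
    bright (tumor-like) intensities; returns the component means and
    the posterior probability of the brighter component per value."""
    m = sum(values) / len(values)
    v0 = sum((x - m) ** 2 for x in values) / len(values)
    mu, var, pi = [min(values), max(values)], [v0, v0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each value
        resp = []
        for x in values:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(values)
            mu[k] = sum(r[k] * x for r, x in zip(resp, values)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, values)) / nk, 1e-6)
    return mu, [r[1] for r in resp]

random.seed(3)
intensities = ([random.gauss(40, 5) for _ in range(200)]    # background
               + [random.gauss(90, 5) for _ in range(50)])  # lesion
mu, tumor_prob = em_two_gaussians(intensities)
mask = [p > 0.5 for p in tumor_prob]                        # tumor mask
```

Thresholding the posterior at 0.5 yields the binary mask that, in the paper's pipeline, would then be applied to the filtered MRI to extract the tumor area.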
Procedia PDF Downloads 336
5891 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
To address memorization overfitting in the meta-learning algorithm MAML, a method for generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutually exclusive (mutex) task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate the mutex tasks. Experiments show that the proposed task generation method effectively alleviates memorization overfitting in MAML.
Keywords: data augmentation, mutex task generation, meta-learning, text classification
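The abstract's core idea, making the same inputs carry different labels than in the base task, can be sketched with a label-permutation scheme. The derangement-based construction below is our own illustrative stand-in for the paper's feature-to-multiple-labels mapping, not its actual method.

```python
import numpy as np

# Sketch: build a "mutex" task by re-assigning labels through a random
# permutation that moves every class, so the same inputs map to labels that
# contradict the base task's distribution.
rng = np.random.default_rng(42)

def make_mutex_task(X, y, n_classes):
    # Draw a permutation that is a derangement: no label keeps its original
    # meaning, guaranteeing the generated task is inconsistent with the base.
    while True:
        perm = rng.permutation(n_classes)
        if not np.any(perm == np.arange(n_classes)):
            break
    return X, perm[y]

X = rng.normal(size=(8, 5))         # toy stand-in for text features
y = np.array([0, 1, 2, 0, 1, 2, 0, 1])
X2, y2 = make_mutex_task(X, y, n_classes=3)
print(y, y2)
```

In a meta-learning loop, one such mutex task would be mixed in per batch so the model cannot memorize a fixed feature-to-label mapping across tasks.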
Procedia PDF Downloads 94
5890 Implementation of a Culturally Responsive Home Visiting Framework in Head Start Teacher Professional Development
Authors: Meilan Jin, Mary Jane Moran
Abstract:
This study aims to introduce the framework of culturally responsive home visiting (CRHV) into Head Start teacher professional development sessions in the southeastern United States and to investigate its influence on teachers' evolving beliefs about their roles and relationships with families during home visits. The framework orients teachers to an effective way of taking on the role of learner: listening for spoken and unspoken needs and looking for family strengths. In addition, it challenges the deficit model grounded in 'cultural deprivation,' stresses the value of family cultures, and advocates equal, collaborative parent-teacher relationships. The home visit reflection papers and focus group transcriptions of eight teachers have been collected since 2010 over a five-year longitudinal collaboration. Reflection papers were written by the teachers before and after the introduction of the CRHV framework and include details of visit purposes and actions and the teachers' plans for later home visits. In particular, the CRHV framework guided the teachers to listen and look for information about family living environments; parent-child interactions; child-rearing practices; and parental beliefs, values, and needs. Two focus groups were organized in 2014 by asking the teachers to read their written reflection papers and then discuss their shared beliefs and experiences of home visits in recent years. The discussions averaged one hour in length and were audio-recorded and transcribed verbatim. The data were analyzed using constant comparative analysis, and the analysis was verified through (a) the use of multiple data sources, (b) the involvement of multiple researchers, (c) coding checks, and (d) the provision of thick descriptions of the findings.
The study findings corroborate that the teachers came to reposition themselves as 'knowledge seekers,' reorienting their focus toward opportunities to learn, grow, and change rather than toward scripting their home visits. The teachers also continually engage in careful listening, observing, questioning, and dialoguing, actions that reflect their care toward parents. The value of teamwork with parents is advocated, and the teachers recognize that when parents feel empowered, they are active and committed to doing more for their children, which can further support proactive long-term parent-teacher collaborations. The study findings also validate that the framework helps educators provide home visiting experiences that are culturally responsive and build collaborative relationships with caregivers. The long-term impact of the framework further implies that teachers continue to evolve, in both beliefs and actions, to better work with children and families who are culturally, ethnically, and linguistically different from themselves. This framework can be applied by educators and professionals who are looking for avenues to bridge the relationship between home and school and between parents and teachers.
Keywords: culturally responsive home visit, early childhood education, parent-teacher collaboration, teacher professional development
Procedia PDF Downloads 97
5889 Cervical Cell Classification Using Random Forests
Authors: Dalwinder Singh, Amandeep Verma, Manpreet Kaur, Birmohan Singh
Abstract:
The detection of pre-cancerous changes using a Pap smear test of cervical cells is an important step in the early diagnosis of cervical cancer. The Pap smear test consists of a sample of human cells taken from the cervix, which is analysed to detect the cancerous and pre-cancerous stages of the given subject. The manual analysis of these cells is a labor-intensive and time-consuming process that relies on expert cytotechnologists. In this paper, a computer-assisted system for the automated analysis of cervical cells is proposed. We propose a morphology-based approach to nucleus detection and to segmentation of the cytoplasmic region of a given single cell or multiple overlapped cells. Further, various texture- and region-based features are calculated from these cells to classify them as normal or abnormal. Experimental results on a publicly available dataset show that our system achieves a satisfactory success rate.
Keywords: cervical cancer, cervical tissue, mathematical morphology, texture features
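The classification stage named in the title can be sketched as follows, assuming scikit-learn is available. The synthetic feature matrix stands in for the texture and region features the paper extracts; the tree count and data dimensions are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Sketch: random forest classification of cells as normal/abnormal from
# numeric features. Synthetic features stand in for the morphology-derived
# texture/region measurements described in the paper.
X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

With real Pap smear data, each row of `X` would hold the per-cell features computed after the morphological nucleus/cytoplasm segmentation.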
Procedia PDF Downloads 526
5888 A Study on the Reliability Evaluation of a Timer Card for Air Dryer of the Railway Vehicle
Authors: Chul Su Kim, Jun Ku Lee, Won Jun Lee
Abstract:
The EMU (electric multiple unit) vehicle timer card is a PCB (printed circuit board) that controls the air dryer, which removes moisture from the air generated by the air compressor of the braking device. This card is mounted on the lower part of the railway vehicle, so it is strongly affected by the external environment, in particular temperature and humidity. The main cause of failure of this timer card is deterioration of the soldered areas on the PCB surface due to temperature and humidity. Therefore, from the viewpoint of preventive maintenance, it is important to evaluate the reliability of the timer card and predict its replacement cycle to secure the safety of the air braking device, one of the main devices for driving. In this study, the reliability of the existing and the improved products was evaluated through an ALT (accelerated life test). In addition, the acceleration factor was obtained from the Coffin-Manson equation, and the remaining lifetimes were compared and examined.
Keywords: reliability evaluation, timer card, printed circuit board, accelerated life test
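The Coffin-Manson acceleration factor mentioned above relates thermal-cycling life under test and field conditions. The sketch below uses the common form AF = (ΔT_test / ΔT_field)^c; the temperature ranges, the fatigue exponent c = 2.0, and the cycle count are illustrative assumptions, since the paper's test conditions are not given in the abstract.

```python
# Sketch: Coffin-Manson acceleration factor for thermal-cycling fatigue of
# solder joints. All numbers below are illustrative, not the paper's values.

def coffin_manson_af(delta_t_test, delta_t_field, c=2.0):
    """AF = (dT_test / dT_field) ** c : how much faster the test ages the part."""
    return (delta_t_test / delta_t_field) ** c

af = coffin_manson_af(delta_t_test=100.0, delta_t_field=50.0, c=2.0)
# Equivalent field cycles = cycles to failure observed in the test * AF
field_cycles = 2000 * af
print(af, field_cycles)
```

Doubling the test temperature swing thus quadruples the aging rate at c = 2, which is how an ALT compresses years of field exposure into a short test.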
Procedia PDF Downloads 279
5887 Impacts of Racialization: Exploring the Relationships between Racial Discrimination, Racial Identity, and Activism
Authors: Brianna Z. Ross, Jonathan N. Livingston
Abstract:
Given that discussions of racism and racial tension have become more salient, there is a need to evaluate the impacts of racialization on Black individuals. Racial discrimination has become one of the most common experiences within the Black American population. Likewise, Black individuals have indicated a need to address their racial identities at an earlier age than their non-Black peers. Further, Black individuals have been found at the forefront of multiple social and political movements, including but not limited to the Civil Rights Movement, Black Lives Matter, MeToo, and Say Her Name. Against this background, the present study sought to explore the predictive relationships between racial discrimination, racial identity, and activism in the Black community. The results of standard and hierarchical regression analyses revealed that racial discrimination and racial identity significantly predict each other, but only racial discrimination is a significant predictor of activism. Nonetheless, the results from this study will provide a basis for social scientists to better understand the impacts of racialization on the Black American population.
Keywords: activism, racialization, racial discrimination, racial identity
Procedia PDF Downloads 152
5886 On the Performance of Improvised Generalized M-Estimator in the Presence of High Leverage Collinearity Enhancing Observations
Authors: Habshah Midi, Mohammed A. Mohammed, Sohel Rana
Abstract:
Multicollinearity occurs when two or more independent variables in a multiple linear regression model are highly correlated. Ridge regression is the method commonly used to rectify this problem. However, ridge regression cannot handle multicollinearity that is caused by high leverage collinearity-enhancing observations (HLCEOs). Since high leverage points (HLPs) are responsible for inducing multicollinearity, the effect of HLPs needs to be reduced by using a generalized M (GM) estimator. The existing GM6 estimator is based on the minimum volume ellipsoid (MVE), which tends to swamp some low leverage points. Hence, an improvised GM estimator (MGM) is presented to improve the precision of the GM6 estimator. A numerical example and a simulation study are presented to show how HLPs can cause multicollinearity. The numerical results show that our MGM estimator is the most efficient method compared with some existing methods.
Keywords: identification, high leverage points, multicollinearity, GM-estimator, DRGP, DFFITS
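How a single high leverage point can manufacture collinearity that is absent in the bulk of the data can be shown in a few lines. The sketch below reads leverage off the diagonal of the hat matrix H = X(XᵀX)⁻¹Xᵀ; the data and the outlier location are illustrative assumptions, not the paper's example.

```python
import numpy as np

# Sketch: one extreme point lines up two otherwise independent predictors,
# producing a high sample correlation (induced multicollinearity).
rng = np.random.default_rng(7)
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)              # x1 and x2 are independent in the bulk...
x1[0], x2[0] = 20.0, 20.0            # ...but one HLCEO lines them up

X = np.column_stack([np.ones(n), x1, x2])
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)                # hat values: large => high leverage

corr_with = np.corrcoef(x1, x2)[0, 1]        # inflated by the outlier
corr_without = np.corrcoef(x1[1:], x2[1:])[0, 1]  # near zero in the bulk
print(leverage[0], corr_with, corr_without)
```

Diagnostics such as DRGP and DFFITS, named in the keywords, refine this basic leverage screen to distinguish good from bad high leverage points before downweighting them in a GM estimator.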
Procedia PDF Downloads 262
5885 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines
Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka
Abstract:
To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks that achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to dealing with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address the two problems of computing a self-organizing map and imputing missing values simultaneously, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM.
First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps
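The alternating idea described above can be sketched in miniature. The code below fits plain k-means-style prototypes as a simplifying stand-in for a Kohonen map (no neighborhood function), then alternates prototype assignment with re-imputation of the missing entries from the assigned prototype; the data, missingness rate, and prototype count are illustrative assumptions.

```python
import numpy as np

# Sketch of the alternating scheme: fit prototypes on currently-imputed data,
# then impute each missing entry from its row's assigned prototype, and repeat.
rng = np.random.default_rng(3)
X_true = np.vstack([rng.normal(0, 0.3, (40, 4)), rng.normal(3, 0.3, (40, 4))])
mask = rng.random(X_true.shape) < 0.15          # 15% of entries go missing
X = np.where(mask, np.nan, X_true)

col_means = np.nanmean(X, axis=0)
X_imp = np.where(mask, col_means, X)            # initial column-mean imputation
protos = X_imp[rng.choice(len(X_imp), 4, replace=False)]

for _ in range(20):
    d = ((X_imp[:, None, :] - protos[None]) ** 2).sum(-1)
    assign = d.argmin(1)                         # nearest prototype per row
    for k in range(len(protos)):                 # update prototypes
        if np.any(assign == k):
            protos[k] = X_imp[assign == k].mean(0)
    X_imp = np.where(mask, protos[assign], X)    # re-impute from prototypes

rmse = np.sqrt(((X_imp[mask] - X_true[mask]) ** 2).mean())
print(f"imputation RMSE: {rmse:.3f}")
```

A true SOM would additionally pull neighboring prototypes on the 2-D grid toward each assigned observation, which is what preserves the map's topology in missSOM proper.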
Procedia PDF Downloads 151
5884 A Comprehensive Study on CO₂ Capture and Storage: Advances in Technology and Environmental Impact Mitigation
Authors: Oussama Fertaq
Abstract:
This paper investigates the latest advancements in CO₂ capture and storage (CCS) technologies, which are vital for addressing the growing challenge of climate change. The study focuses on multiple techniques for CO₂ capture, including chemical absorption, membrane separation, and adsorption, analyzing their efficiency, scalability, and environmental impact. The research further explores geological storage options such as deep saline aquifers and depleted oil fields, providing insights into the challenges and opportunities presented by each method. This paper emphasizes the importance of integrating CCS with existing industrial processes to reduce greenhouse gas emissions effectively. It also discusses the economic and policy frameworks required to promote wider adoption of CCS technologies. The findings of this study offer a comprehensive view of the potential of CCS in achieving global climate goals, particularly in hard-to-abate sectors such as energy and manufacturing.
Keywords: CO₂ capture, carbon storage, climate change mitigation, carbon sequestration, environmental sustainability
Procedia PDF Downloads 12
5883 Estimation of Chronic Kidney Disease Using Artificial Neural Network
Authors: Ilker Ali Ozkan
Abstract:
In this study, an artificial neural network model was developed to estimate chronic kidney failure, a common disease. The patients' age, their blood and biochemical values, and various chronic diseases form the 24 inputs used for the estimation process. The input data were subjected to preprocessing because they contain both missing values and nominal values. The 147 patient records obtained from preprocessing were divided into 70% training and 30% testing data. As a result of the study, the artificial neural network model with 25 neurons in the hidden layer was found to have the lowest error value. Chronic kidney failure could be estimated with an accuracy of 99.3% using this artificial neural network model. The developed artificial neural network was thus found successful for the estimation of chronic kidney failure from clinical data.
Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis
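A pipeline matching the description above (missing-value preprocessing, a single hidden layer of 25 neurons, a 70/30 split) can be sketched as follows, assuming scikit-learn is available. The synthetic data stand in for the 24 clinical inputs, and the accuracy printed here is not the paper's 99.3%.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Sketch: 24 input features with some missing entries, a 25-neuron hidden
# layer as reported, and a 70/30 train/test split on 147 records.
X, y = make_classification(n_samples=147, n_features=24, n_informative=10,
                           random_state=0)
X[np.random.default_rng(0).random(X.shape) < 0.05] = np.nan  # 5% missing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(
    SimpleImputer(strategy="mean"),     # handle missing values
    StandardScaler(),                   # scale features for stable training
    MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Nominal clinical inputs would additionally need one-hot encoding before the scaler; that step is omitted here because the synthetic features are already numeric.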
Procedia PDF Downloads 447
5882 Virtual Computing Lab for Phonics Development among Deaf Students
Authors: Ankita R. Bansal, Naren S. Burade
Abstract:
The idea is to create a cloud-based virtual lab for deaf students, 'a language acquisition program using Visual Phonics and Cued Speech,' built on VMware Virtual Lab. This lab will demonstrate to students the sounds of letters associated with the language, building letter blocks, making words, etc. Virtual labs are used for demos, training, and the lingual development of children in their vernacular language. The main potential benefits are reduced labour and hardware costs and faster response times for users. Virtual computing labs allow any of the software-as-a-service, virtualization, and terminal-services solutions available today to be offered as a service on demand, where a single instance of the software runs on the cloud and serves multiple end users. VMware, Xen, MS Virtual Server, Virtuozzo, and Citrix are typical examples.
Keywords: visual phonics, language acquisition, vernacular language, cued speech, virtual lab
Procedia PDF Downloads 599
5881 A Low Phase Noise CMOS LC Oscillator with Tail Current-Shaping
Authors: Amir Mahdavi
Abstract:
In this paper, a voltage-controlled oscillator (VCO) circuit topology suitable for ultra-low-phase-noise operation is introduced. To this end, a new low-phase-noise cross-coupled oscillator is designed by taking the general cross-coupled oscillator topology and adding a differential stage for tail current shaping. In addition, a tail current-shaping technique to improve phase noise in differential LC VCOs is presented. The tail current becomes large when the oscillator output voltage reaches its maximum or minimum value, where the sensitivity of the output phase to noise is smallest, and becomes small when the phase noise sensitivity is large. The proposed circuit uses no extra power and no extra noisy active devices. Furthermore, this topology occupies a small area. Simulation results show an improvement in phase noise of 2.5 dB under the same conditions at a carrier frequency of 1 GHz for GSM applications. The power consumption of the proposed circuit is 2.44 mW, and a figure of merit (FOM) of -192.2 dBc/Hz is achieved for the new oscillator.
Keywords: LC oscillator, low phase noise, current shaping, diff mode
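The FOM quoted above is conventionally computed by normalizing the measured phase noise for carrier frequency, offset, and power draw. The sketch below uses the standard definition FOM = L(Δf) − 20·log₁₀(f₀/Δf) + 10·log₁₀(P/1 mW); the abstract does not state the offset frequency or raw phase noise, so the Δf = 1 MHz offset and the −120 dBc/Hz figure below are illustrative assumptions, not the paper's numbers.

```python
import math

# Sketch: standard VCO figure of merit from phase noise L(df) at offset df,
# carrier f0, and DC power in mW. All input values are illustrative.

def vco_fom(phase_noise_dbc_hz, f0_hz, offset_hz, power_mw):
    return (phase_noise_dbc_hz
            - 20 * math.log10(f0_hz / offset_hz)
            + 10 * math.log10(power_mw))

fom = vco_fom(phase_noise_dbc_hz=-120.0, f0_hz=1e9, offset_hz=1e6, power_mw=2.44)
print(f"FOM = {fom:.1f} dBc/Hz")
```

A more negative FOM is better: the same phase noise achieved at lower power, or at a higher carrier, lowers the figure.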
Procedia PDF Downloads 600
5880 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment
Authors: Ujjwall Sai Sunder Uppuluri
Abstract:
Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models. These agent-based models, this paper argues, are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how an agent-based model can be formulated theoretically from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the use of resources in pursuit of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful to the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex-system models that researchers build, as the paper will discuss. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a 3-dimensional plane: the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and its resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy that is not conducive to climbing the specific peak it has reached, is unable to continue its advancement. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group.
Each state is a closed system made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, whose inverse is anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, each employs a strategy, and the state with the better strategy (reflected by the strategies pursued by its parts) is able to out-compete its counterpart to acquire some resource, mitigate some risk, or fulfil some objective. This paper will attempt to illustrate how group theory, combined with evolutionary theory and systems dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
Keywords: complex systems, evolutionary theory, group theory, international political economy
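The multi-peak landscape described above can be sketched as a toy agent-based model: agents ("states") on a 2-D resource/risk plane propose strategy mutations and keep only improving moves. The landscape shape, mutation rule, and population size are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Sketch: agents climbing a layered objective landscape with a local peak
# near (1, 1) and a taller peak near (4, 4). Keeping only improving trial
# moves is a minimal stand-in for strategy selection and innovation.
rng = np.random.default_rng(11)

def fitness(p):
    x, y = p[..., 0], p[..., 1]
    return (np.exp(-((x - 1) ** 2 + (y - 1) ** 2))
            + 2.0 * np.exp(-((x - 4) ** 2 + (y - 4) ** 2)))

agents = rng.uniform(0, 5, (30, 2))          # 30 states with random strategies
initial_mean = fitness(agents).mean()

for _ in range(300):
    trial = agents + rng.normal(0, 0.3, agents.shape)   # mutate strategies
    better = fitness(trial) > fitness(agents)           # keep improvements only
    agents[better] = trial[better]

final_mean = fitness(agents).mean()
print(f"mean fitness: {initial_mean:.3f} -> {final_mean:.3f}")
```

Agents that converge on the lower peak illustrate the abstract's point: a strategy suited to one peak stalls there, and only a larger "innovative" jump can carry a state toward the taller peak.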
Procedia PDF Downloads 139