Search results for: single instruction multiple data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30223

30013 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics

Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki

Abstract:

A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with a heat generation of approximately 500 W/cm2. The uni-directional porous copper, with its pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to then discharge through the grooves. To minimize the size of the cold plate, a double flow channel concept is introduced for its design. The cold plate consists of a base plate, a spacer, and a vapor discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and 4 slits of 2.0 mm in width for vapor discharge, and is attached onto the top surface of the porous copper plate of 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet from the multiple nozzles, and the vapor generated in the pores is then discharged through the grooves and the vapor slits outside the cold plate. The heated test section consists of the cold plate described above and a heat transfer copper block with 6 cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface of 10 mm in diameter, to which the porous copper is soldered. The grooves are fabricated like latticework, and their width and depth are 1.0 mm and 0.5 mm, respectively. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated in a steady state. In this experiment, the flow rate is 0.5 L/min and the flow velocity at each nozzle is 0.27 m/s. The liquid inlet temperature is 60 °C. The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without the porous copper, though the pressure loss with the porous copper is also higher. In the two-phase heat transfer regime, the critical heat flux increases by approximately 35% with the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These data demonstrate the high potential of the cold plate with the uni-directional porous copper from the viewpoint of not only heat transfer performance but also energy saving.
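
The extrapolation step mentioned above (surface temperature and heat flux from three embedded thermocouples) is, in standard practice, a one-dimensional steady-state conduction fit. The sketch below illustrates that calculation only; the thermocouple depths and readings are hypothetical, not values from the paper.

```python
import numpy as np

# Hypothetical example: extrapolate surface temperature and heat flux from
# three thermocouples embedded along the axis of a copper block, assuming
# 1D steady conduction. Depths and readings below are illustrative only.
k_cu = 390.0                            # thermal conductivity of copper, W/(m K)
x = np.array([0.002, 0.004, 0.006])     # thermocouple depths below the surface, m
T = np.array([132.5, 151.0, 169.8])     # measured temperatures, deg C

# Linear fit T(x) = a*x + b: the gradient a gives the heat flux,
# the intercept b the extrapolated surface temperature.
a, b = np.polyfit(x, T, 1)
q = k_cu * a                            # heat flux toward the surface, W/m^2
print(f"surface temperature ~ {b:.1f} C, heat flux ~ {q / 1e4:.1f} W/cm^2")
```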

Keywords: cooling, cold plate, uni-porous media, heat transfer

Procedia PDF Downloads 273
30012 Automatic Identification and Monitoring of Wildlife via Computer Vision and IoT

Authors: Bilal Arshad, Johan Barthelemy, Elliott Pilton, Pascal Perez

Abstract:

Getting reliable, informative, and up-to-date information about the location, mobility, and behavioural patterns of animals will enhance our ability to research and preserve biodiversity. The fusion of infra-red sensors and camera traps offers an inexpensive way to collect wildlife data in the form of images. However, extracting useful data from these images, such as identifying and counting animals, remains a manual, time-consuming, and costly process. In this paper, we demonstrate that such information can be automatically retrieved using state-of-the-art deep learning methods. Another major challenge that ecologists face is counting a single animal multiple times because it reappears in other images taken by the same or other camera traps. Nonetheless, such information can be extremely useful for tracking wildlife and understanding its behaviour. To tackle the multiple-count problem, we have designed a meshed network of camera traps, so they can share the captured images along with timestamps, cumulative counts, and the dimensions of the animal. The proposed method leverages edge computing to support real-time tracking and monitoring of wildlife. The method has been validated in the field and can easily be extended to other applications focusing on wildlife monitoring and management, where the traditional way of monitoring is expensive and time-consuming.
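
As a rough illustration of the multiple-count problem described above, the sketch below deduplicates detections shared over a camera-trap mesh using timestamps and estimated animal size. The field names, time window, and size tolerance are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

# Minimal sketch (not the authors' system): merge detections shared over a
# camera-trap mesh so the same animal is not counted twice.
@dataclass
class Detection:
    camera_id: str
    species: str
    timestamp: float      # seconds since epoch
    body_length_m: float  # estimated size of the animal

def deduplicate(detections, window_s=300.0, size_tol_m=0.2):
    """Greedy pass: a detection is a duplicate if another camera already
    reported the same species, of similar size, within the time window."""
    kept = []
    for d in sorted(detections, key=lambda d: d.timestamp):
        duplicate = any(
            k.species == d.species
            and abs(k.timestamp - d.timestamp) <= window_s
            and abs(k.body_length_m - d.body_length_m) <= size_tol_m
            for k in kept
        )
        if not duplicate:
            kept.append(d)
    return kept
```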

Keywords: computer vision, ecology, internet of things, invasive species management, wildlife management

Procedia PDF Downloads 112
30011 Biophysically Motivated Phylogenies

Authors: Catherine Felce, Lior Pachter

Abstract:

Current methods for building phylogenetic trees from gene expression data consider mean expression levels. With single-cell technologies, we can leverage more information about cell dynamics by considering the entire distribution of gene expression across cells. Using biophysical modeling, we propose a method for constructing phylogenetic trees from scRNA-seq data, building on Felsenstein's method of continuous characters. This method can highlight genes whose level of expression may be unchanged between species, but whose rates of transcription/decay may have evolved over time.

Keywords: phylogenetics, single-cell, biophysical modeling, transcription

Procedia PDF Downloads 8
30010 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates

Authors: Abdelaziz Fellah, Allaoua Maamir

Abstract:

We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm and extended with an affine variant of the Smith-Waterman similarity measure. We then present constant, variable, and function threshold algorithms that work conceptually in a divide-merge filtering fashion for detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
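
For readers unfamiliar with the Monge-Elkan scheme the MCF approach builds on, a minimal sketch follows. The paper pairs it with an affine-gap Smith-Waterman inner measure; here difflib's ratio stands in as the token-level similarity purely for illustration.

```python
from difflib import SequenceMatcher

def token_sim(a: str, b: str) -> float:
    # Stand-in inner similarity; the paper uses an affine Smith-Waterman variant.
    return SequenceMatcher(None, a, b).ratio()

def monge_elkan(s1: str, s2: str) -> float:
    """Average, over the tokens of s1, of the best match found among the tokens of s2."""
    tokens1, tokens2 = s1.lower().split(), s2.lower().split()
    if not tokens1 or not tokens2:
        return 0.0
    return sum(max(token_sim(t1, t2) for t2 in tokens2) for t1 in tokens1) / len(tokens1)

print(monge_elkan("J. R. Smith", "Smith John Robert"))   # fairly high score
print(monge_elkan("Acme Corp Ltd", "ACME Corporation"))  # approximate duplicate
```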

Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery

Procedia PDF Downloads 362
30009 Scheduling in a Single-Stage, Multi-Item Compatible Process Using Multiple Arc Network Model

Authors: Bokkasam Sasidhar, Ibrahim Aljasser

Abstract:

We consider the problem of finding optimal schedules for each piece of equipment in a production process that consists of a single manufacturing stage and can handle different types of products, where a changeover from one product type to another incurs certain costs. The machine capacity is determined by the upper limit on the quantity that can be processed for each product in a set-up. The changeover costs increase with the number of set-ups; hence, to minimize the costs associated with product changeover, planning should schedule similar product types successively so that the total number of changeovers, and in turn the associated set-up costs, are minimized. The problem of cost minimization is equivalent to minimizing the number of set-ups or, equivalently, maximizing the capacity utilization between set-ups, i.e., maximizing the total capacity utilization. Further, production is usually planned against customers' orders, and different customers' orders are generally assigned one of two priorities: "normal" or "priority". The production planning problem in such a situation can be formulated as a Multiple Arc Network (MAN) model and solved sequentially using the algorithm for maximizing flow along a MAN and the algorithm for maximizing flow along a MAN with priority arcs. The model aims to provide an optimal production schedule with the objective of maximizing capacity utilization, so that the customer-wise delivery schedules are fulfilled while keeping customer priorities in view. Algorithms are presented for solving the MAN formulation of production planning with customer priorities, and the application of the model is demonstrated through numerical examples.
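
A toy flow formulation in the spirit of the abstract (not the exact MAN model with priority arcs) can be sketched with a generic max-flow solver; the order quantities and the per-set-up capacity below are invented.

```python
import networkx as nx

# Illustrative flow network: orders feed product set-ups, each set-up bounded
# by the machine capacity per set-up; maximizing flow maximizes capacity
# utilization. Hypothetical data only.
G = nx.DiGraph()
orders = {"order1": ("A", 60), "order2": ("A", 30), "order3": ("B", 80)}
setup_capacity = 100  # maximum quantity processable in one set-up

for order, (product, qty) in orders.items():
    G.add_edge("source", order, capacity=qty)
    G.add_edge(order, f"setup_{product}", capacity=qty)
for product in {p for p, _ in orders.values()}:
    G.add_edge(f"setup_{product}", "sink", capacity=setup_capacity)

flow_value, flow = nx.maximum_flow(G, "source", "sink")
print("scheduled quantity:", flow_value)
```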

Keywords: scheduling, maximal flow problem, multiple arc network model, optimization

Procedia PDF Downloads 380
30008 Effects of E-Learning Mode of Instruction and Conventional Mode of Instruction on Student’s Achievement in English Language in Senior Secondary Schools, Ibadan Municipal, Nigeria

Authors: Ibode Osa Felix

Abstract:

The use of e-learning has intensified in the academic world following the outbreak of the Covid-19 pandemic in early 2020. E-learning made its debut in teaching and learning many years ago, when it emerged as an aspect of computer-based teaching, but never before has its patronage become as important and popular as it is now. Previous studies revealed an ongoing debate among researchers on the efficacy of the e-learning mode of instruction over the traditional teaching method. Therefore, this study examined the effect of e-learning and the conventional mode of instruction on students' achievement in the English language. The study is a quasi-experimental study in which 230 students from three public secondary schools were selected through a simple random sampling technique. Three instruments were developed, namely an E-learning Instructional Guide (ELIG), a Conventional Method of Instruction Guide (CMIG), and an English Language Achievement Test (ELAT). The results revealed that students taught through the conventional method obtained better results than students taught online. The results also show that girls taught with the conventional method of teaching performed better than boys in the English language. The study therefore recommends that educational authorities in Nigeria make efforts to provide internet facilities to enhance practice among learners and to provide electricity to power e-learning equipment in secondary schools. This will boost e-learning practices among teachers and students and consequently allow e-learning to overtake the conventional method of teaching in due course.

Keywords: e-learning, conventional method of teaching, achievement in English, electricity

Procedia PDF Downloads 141
30007 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more often in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data, so linking multi-source data is essential to improve clustering performance. However, in practice multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge of working with multi-source data. Ensemble learning is a versatile machine learning approach in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform standard single clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble (FOMOCE) method. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
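
To make the clustering ensemble idea concrete, the sketch below shows a much simpler co-association consensus than FOMOCE: several base clusterings vote on whether two points belong together, and a final partition is derived from those votes. The data and parameters are illustrative, and the fuzzy multi-objective machinery of the paper is not modeled.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)

# Base clusterings (could come from different sources or algorithms).
base_labels = [KMeans(n_clusters=3, n_init=10, random_state=s).fit_predict(X)
               for s in range(5)]

# Co-association matrix: fraction of base clusterings grouping i and j together.
n = X.shape[0]
co = np.zeros((n, n))
for labels in base_labels:
    co += (labels[:, None] == labels[None, :])
co /= len(base_labels)

# Consensus partition from the co-association similarity (scikit-learn >= 1.2).
consensus = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                    linkage="average").fit_predict(1.0 - co)
print(np.bincount(consensus))
```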

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 146
30006 Using Multiple Intelligences Theory to Develop Thai Language Skill

Authors: Bualak Naksongkaew

Abstract:

The purpose of this study was to compare pre- and post-test achievement of Thai language skills. The sample consisted of 40 tenth graders of the Secondary Demonstration School of Suan Sunandha Rajabhat University in the first semester of the academic year 2010. The researcher prepared the Thai lesson plans and administered a pre-achievement test and a post-achievement test at the end of the program. Data analyses were carried out using means, standard deviations, descriptive statistics, and independent-samples t-test analysis to compare the pre- and post-tests. The study showed a statistically significant difference at α = 0.05; therefore, the use of multiple intelligences theory can develop Thai language skills. The results after using multiple intelligences theory in the Thai lessons were at a higher level than the set standard.

Keywords: multiple intelligences theory, Thai language skills, development, pre- and post-test achievement

Procedia PDF Downloads 401
30005 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending the range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an E-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are transmitted directly from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration; this setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas. In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 50
30004 Hydration of Protein-RNA Recognition Sites

Authors: Amita Barik, Ranjit Prasad Bahadur

Abstract:

We investigate the role of water molecules in 89 protein-RNA complexes taken from the Protein Data Bank. Complexes with tRNA and single-stranded RNA are less hydrated than those with duplex RNA or ribosomal proteins. Protein-RNA interfaces are hydrated less than protein-DNA interfaces, but more than protein-protein interfaces. The majority of the waters at protein-RNA interfaces make multiple H-bonds; however, a fraction does not make any. Those making H-bonds prefer the polar groups of RNA over those of the partner protein. The spatial distribution of waters makes interfaces with ribosomal proteins and single-stranded RNA relatively 'dry' compared with interfaces with tRNA and duplex RNA. In contrast to protein-DNA interfaces, mainly due to the presence of the 2'OH, the ribose in protein-RNA interfaces is hydrated more than the phosphate or the bases. The minor groove in protein-RNA interfaces is hydrated more than the major groove, while in protein-DNA interfaces the reverse holds. The strands make the highest number of water-mediated H-bonds per unit interface area, followed by the helices and the non-regular structures. The preserved waters at protein-RNA interfaces make a higher number of H-bonds than the other waters. Preserved waters contribute toward the affinity in protein-RNA recognition and should be treated carefully while engineering protein-RNA interfaces.

Keywords: h-bonds, minor-major grooves, preserved water, protein-RNA interfaces

Procedia PDF Downloads 261
30003 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks

Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin

Abstract:

Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice radiologists consider multiple views of mammograms from both breasts to reach a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) that simultaneously analyzes three mammographic views: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of considering the entire examination of a patient rather than a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted on the recently released Vietnamese digital mammography dataset (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. This suggests that the approach can provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help alleviate the burden on radiologists and serve as an additional layer of verification.
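
A schematic of the tri-view idea (one shared encoder applied to the main view, the same breast in the other projection, and the contralateral breast) might look like the sketch below; the backbone, feature sizes, and classification head are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TriViewSiamese(nn.Module):
    """Illustrative three-branch Siamese model with a shared ResNet-18 encoder."""
    def __init__(self, embed_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # torchvision >= 0.13
        backbone.fc = nn.Identity()                # shared weights across the three views
        self.encoder = backbone
        self.head = nn.Sequential(
            nn.Linear(512 * 3, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1)                # malignancy logit for the main view
        )

    def forward(self, main_view, ipsilateral, contralateral):
        feats = [self.encoder(v) for v in (main_view, ipsilateral, contralateral)]
        return self.head(torch.cat(feats, dim=1))

model = TriViewSiamese()
x = torch.randn(2, 3, 224, 224)                    # dummy batch of preprocessed mammograms
print(model(x, x, x).shape)                        # torch.Size([2, 1])
```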

Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network

Procedia PDF Downloads 111
30002 Transient Voltage Distribution on the Single Phase Transmission Line under Short Circuit Fault Effect

Authors: A. Kojah, A. Nacaroğlu

Abstract:

Single-phase transmission lines are used to transfer data or energy between two users. Transient conditions such as switching operations and short circuit faults cause fluctuations in the waveform to be transmitted. The spatial voltage distribution on a single-phase transmission line may change depending on the position and duration of the short circuit fault in the system. In this paper, the state-space representation of the single-phase transmission line under a short circuit fault and for various types of terminations is given. Since the transmission line is modeled in the time domain using distributed parametric elements, the mathematical representation of the event is given in state-space (time-domain) differential equation form. This also makes the problem easier to solve, given the time- and space-dependent characteristics of the voltage variations on the distributed parametrically modeled transmission line.
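
For reference, the standard telegrapher's equations behind such a distributed-parameter model, with per-unit-length parameters R, L, G, and C (textbook form, not reproduced from the paper), are

```latex
\frac{\partial v(x,t)}{\partial x} = -R\,i(x,t) - L\,\frac{\partial i(x,t)}{\partial t},
\qquad
\frac{\partial i(x,t)}{\partial x} = -G\,v(x,t) - C\,\frac{\partial v(x,t)}{\partial t}.
```

Discretizing the line into N short sections of length \Delta x (one possible lumped approximation) yields a state-space system in the section currents and voltages,

```latex
\frac{di_k}{dt} = \frac{1}{L\,\Delta x}\bigl(v_{k-1} - v_k - R\,\Delta x\, i_k\bigr),
\qquad
\frac{dv_k}{dt} = \frac{1}{C\,\Delta x}\bigl(i_k - i_{k+1} - G\,\Delta x\, v_k\bigr),
\qquad k = 1,\dots,N,
```

with the boundary equations set by the source and the chosen termination; a short circuit fault at section m can be modeled by forcing v_m toward zero for the duration of the fault.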

Keywords: energy transmission, transient effects, transmission line, transient voltage, RLC short circuit, single phase

Procedia PDF Downloads 200
30001 A Study of Anthropometric Correlation between Upper and Lower Limb Dimensions in Sudanese Population

Authors: Altayeb Abdalla Ahmed

Abstract:

Skeletal phenotype is a product of a balanced interaction between genetics and environmental factors throughout the different stages of life; therefore, interlimb proportions vary between populations. Although interlimb proportion indices have been used in anthropology to assess the influence of various environmental factors on the limbs, an extensive literature review revealed a paucity of published research assessing correlations between limb parts and the possibility of reconstruction. Hence, this study aims to assess the relationships between upper and lower limb parts and to develop regression formulae to reconstruct the parts from one another. The left upper arm length, ulnar length, wrist breadth, hand length, hand breadth, tibial length, bimalleolar breadth, foot length, and foot breadth of 376 right-handed subjects, comprising 187 males and 189 females (aged 25-35 years), were measured. Initially, the data were analyzed using basic univariate analysis and independent t-tests; then sex-specific simple and multiple linear regression models were used to estimate upper limb parts from lower limb parts and vice versa. The results indicated significant sexual dimorphism for all variables and a significant correlation between the upper and lower limb parts (p < 0.01). Linear and multiple (stepwise) regression equations were developed to reconstruct the limb parts in the presence of a single dimension or multiple dimensions from the other limb. Multiple stepwise regression equations generated better reconstructions than simple equations. These results are significant in forensics, as they can aid in the identification of multiple isolated limb parts, particularly during mass disasters and criminal dismemberment. Although DNA analysis is the most reliable tool for identification, its usage has multiple limitations in developing countries, e.g., cost, facility availability, and trained personnel. Furthermore, the results have important implications for plastic and orthopedic reconstructive surgery. This is the only reported study assessing the correlation and prediction capabilities between many of the upper and lower limb dimensions. The present study demonstrates a significant correlation between the interlimb parts in both sexes, which indicates the possibility of reconstruction using regression equations.
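
Purely as an illustration of the kind of regression equation described above (not the study's data or coefficients), one limb dimension can be reconstructed from another with a least-squares line:

```python
import numpy as np

# Synthetic measurements, illustrative only: reconstruct ulnar length from
# tibial length with a simple linear regression.
rng = np.random.default_rng(0)
tibia = rng.normal(38.0, 2.0, 50)                    # tibial length, cm
ulna = 0.68 * tibia + 1.5 + rng.normal(0, 0.6, 50)   # synthetic ulnar length, cm

slope, intercept = np.polyfit(tibia, ulna, 1)
r = np.corrcoef(tibia, ulna)[0, 1]
print(f"ulna ~ {slope:.2f} * tibia + {intercept:.2f} (r = {r:.2f})")
```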

Keywords: anthropometry, correlation, limb, Sudanese

Procedia PDF Downloads 270
30000 Application of Compressed Sensing Method for Compression of Quantum Data

Authors: M. Kowalski, M. Życzkowski, M. Karol

Abstract:

Current quantum key distribution (QKD) systems offer low bit rates, up to the single-MHz range. Compared to conventional optical fiber links with multi-GHz bit rates, the parameters of recent QKD systems are significantly lower. In this article, we present the concept of applying the compressed sensing method to the compression of quantum information. The compression methodology, the signal reconstruction method, and initial results on improving the throughput of a quantum information link are presented.
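
A toy compressed-sensing round trip, unrelated to the authors' quantum hardware, shows the underlying idea: a sparse signal measured through a random matrix can be recovered from far fewer samples than its length.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n, m, k = 256, 64, 8                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                  # compressed measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(A, y)
x_hat = omp.coef_
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```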

Keywords: quantum key distribution systems, fiber optic system, compressed sensing

Procedia PDF Downloads 660
29999 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem, and removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software-only implementations and therefore have many limitations: a microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes; none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as the input, and the output of the algorithm is a superreduct, i.e., a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table. The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. However, for systems focused on fast computation of the reduct, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of each attribute is calculated in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
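
A compact software transcription of the two-stage idea (core from the singleton cells of the discernibility matrix, then enrichment by attribute frequency) is sketched below; the toy decision table is invented and the hardware blocks described above are not modeled.

```python
from itertools import combinations

# Toy decision table: (condition attributes a0..a2, decision).
rows = [
    ((0, 1, 0), 0),
    ((1, 1, 0), 1),
    ((0, 0, 1), 0),
    ((1, 0, 1), 1),
]

def discernibility(rows):
    """Sets of attributes discerning each pair of objects with different decisions."""
    cells = []
    for (x, dx), (y, dy) in combinations(rows, 2):
        if dx != dy:
            cells.append({i for i, (a, b) in enumerate(zip(x, y)) if a != b})
    return cells

cells = discernibility(rows)
core = {next(iter(c)) for c in cells if len(c) == 1}   # singleton cells form the core

superreduct = set(core)
while any(not (c & superreduct) for c in cells):       # some pair still indiscernible
    uncovered = [c for c in cells if not (c & superreduct)]
    freq = {}
    for c in uncovered:
        for a in c:
            freq[a] = freq.get(a, 0) + 1
    superreduct.add(max(freq, key=freq.get))           # add the most frequent attribute

print("core:", core, "superreduct:", superreduct)
```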

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 191
29998 Teacher Professional Development in Saudi Arabia through the Implementation of Universal Design for Learning

Authors: Majed A. Alsalem

Abstract:

Universal Design for Learning (UDL) is a common theme in education across the US and an influential model and framework that enables students in general, and particularly students who are deaf and hard of hearing (DHH), to access the general education curriculum. UDL helps teachers determine how information will be presented to students and how to keep students engaged, and it helps students express their understanding and knowledge to others. UDL relies on technology to promote students' interaction with content and their communication of knowledge. This study included 120 DHH students who received daily instruction based on UDL principles. It presents the results of the study and discusses their implications for the integration of UDL in day-to-day practice as well as in the country's education policy. UDL is a Western concept that began and grew in the US and has only recently begun to transfer to other countries such as Saudi Arabia; it will therefore be very important for researchers, practitioners, and educators to see how UDL is implemented in a new place with a different culture. UDL is a framework built to provide multiple means of engagement, representation, and action and expression that should be part of curricula and lessons for all students. The purpose of this study is to investigate the variables associated with the implementation of UDL in Saudi Arabian schools and to identify the barriers that could prevent its implementation. The study therefore used a mixed methods design employing both quantitative and qualitative methods: more insight is gained by combining the two than by using a single method, and drawing on methods with different concepts and approaches enriches the database. Data were collected in two stages in order to ensure that they come from multiple sources, mitigating validity threats and establishing trustworthiness in the findings. The rationale and significance of this study is that it is the first known research targeting UDL in Saudi Arabia. Furthermore, it deals with UDL in depth to set the path for further studies in the Middle East. In terms of content, the study considers teachers' implementation knowledge, skills, and concerns about implementation, and deals with effective instructional designs that have not been presented in any conferences, workshops, or teacher preparation and professional development programs in Saudi Arabia. Specifically, Saudi Arabian schools are challenged to design inclusive schools and practices and to support all students' academic skill development. The total number of participants in stage one was 336 teachers of DHH students. The results of the intervention indicated significant differences in teachers' understanding and levels of concern before and after the training sessions. Teachers indicated interest in knowing more about UDL and adopting it into their practice; they reported that UDL has benefits that will enhance their performance in supporting student learning.

Keywords: deaf and hard of hearing, professional development, Saudi Arabia, universal design for learning

Procedia PDF Downloads 413
29997 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models

Authors: Yoonsuh Jung

Abstract:

As DNA microarray data contain relatively small sample sizes compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection; it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidate tuning parameters first and then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve traditional cross-validation. We show on real and simulated data sets that the value selected by the suggested methods often leads to more stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation.
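
A minimal sketch of the averaging idea, assuming a lasso model and a softmax-style weighting of cross-validated scores (the paper's exact weighting scheme may differ), is shown below.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

# High-dimensional toy data: few samples, many features.
X, y = make_regression(n_samples=60, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)

lambdas = np.logspace(-2, 1, 20)
scores = np.array([cross_val_score(Lasso(alpha=lam, max_iter=10000), X, y, cv=5).mean()
                   for lam in lambdas])

top = np.argsort(scores)[-5:]                        # keep the 5 best candidates
weights = np.exp(scores[top] - scores[top].max())    # weight candidates by CV score
weights /= weights.sum()
lam_avg = float(np.dot(weights, lambdas[top]))
print("single best lambda:", lambdas[scores.argmax()], "averaged lambda:", lam_avg)
```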

Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search

Procedia PDF Downloads 390
29996 Investigating Secondary Students’ Attitude towards Learning English

Authors: Pinkey Yaqub

Abstract:

The aim of this study was to investigate secondary (grades IX and X) students' attitudes towards learning the English language according to the medium of instruction of the school, the gender of the students, and the grade level in which they studied. A further aim was to determine students' proficiency in the English language according to the same variables. A survey was used to investigate the attitudes of secondary students towards English language learning. Simple random sampling was employed to obtain a representative sample of the target population, as comprehensive lists of established and newly established English-medium schools were available. A questionnaire, 'Attitude towards English Language Learning' (AtELL), was adapted from a research study on Libyan secondary school students' attitudes towards learning the English language. The AtELL was reviewed by experts (n = 6) and later piloted on a representative sample of secondary students (n = 160). Subsequently, the questionnaire was modified, based on the reviewers' feedback and lessons learnt during the piloting phase, and administered directly to students of grades 9 and 10 to gather information regarding their attitudes towards learning the English language. Data collection spanned a month and a half. As the data were not normally distributed, the researcher used Mann-Whitney tests to test the hypotheses formulated to investigate students' attitudes towards learning English, as well as proficiency in the language, across the medium of instruction of the school, the gender of the students, and the grade level of the respondents. Statistical analyses showed that students of established English-medium schools exhibited a positive outlook towards English language learning in the behavioural, cognitive, and emotional aspects of attitude. A significant difference was observed in the attitudes of male and female students towards learning English: females showed a more positive attitude in terms of behavioural, cognitive, and emotional aspects than their male counterparts. Moreover, grade 10 students had a more positive attitude towards learning the English language in these aspects than grade 9 students. Nonetheless, students of newly established English-medium schools were more proficient in English, as gauged by their examination scores in the subject, than their counterparts in established English-medium schools. Moreover, female students were more proficient in English, while grade 9 students were less proficient than their seniors in grade 10. The findings of this research provide empirical evidence for future researchers wishing to explore the relationship between attitudes towards language learning and variables such as the medium of instruction of the school, gender, and the grade level of the students. Furthermore, policymakers might revisit the English curriculum to formulate specific guidelines that promote a positive and gender-balanced outlook towards learning English for male and female students.

Keywords: attitude, behavioral aspect of attitude, cognitive aspect of attitude, emotional aspect of attitude

Procedia PDF Downloads 212
29995 Comparative Parametric Analysis on the Dynamic Response of Fibre Composite Beams with Debonding

Authors: Indunil Jayatilake, Warna Karunasena

Abstract:

Fiber Reinforced Polymer (FRP) composites enjoy an array of applications ranging from aerospace, marine, and military to automobile, recreational, and civil industry due to their outstanding properties. A structural glass fiber reinforced polymer (GFRP) composite sandwich panel made from E-glass fiber skins and a modified phenolic core has been manufactured in Australia for civil engineering applications. One of the major damage mechanisms in FRP composites is skin-core debonding. The presence of debonding is of great concern not only because it severely affects the strength but also because it modifies the dynamic characteristics of the structure, including the natural frequencies and vibration modes. This paper investigates the dynamic characteristics of a GFRP beam with single and multiple debonding through finite element based numerical simulations and analyses using the STRAND7 finite element (FE) software package. Three-dimensional computer models were developed and numerical simulations were performed to assess the dynamic behavior. The FE model has been validated against published experimental, analytical, and numerical results for fully bonded as well as debonded beams. A comparative analysis is carried out based on a comprehensive parametric investigation. It is observed that the natural frequency is reduced more by a single debonding than by equally sized multiple debonding regions located symmetrically about the single debonding position. Thus, a large single debonding area leads to more damage, in terms of natural frequency reduction, than isolated small debonding zones of equivalent total area appearing in the GFRP beam. Furthermore, the extent of the natural frequency shifts appears mode-dependent and does not show a monotonic trend of increasing with the mode number.

Keywords: debonding, dynamic response, finite element modelling, novel FRP beams

Procedia PDF Downloads 98
29994 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, in order to better understand the tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. An in-depth analysis of the state-of-the-art filtering methods for single cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a Peripheral Blood Mononuclear Cells dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, taken before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the post-chemotherapy ovarian data with interesting marker genes that are potentially relevant for medical research.
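
A simplified sketch of the filtering strategy described above: build a shared-nearest-neighbor graph, detect communities by modularity, and flag cells that are weakly attached to their community. The synthetic data, neighborhood size, and 0.3 attachment threshold are assumptions, not the authors' settings.

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(4, 1, (100, 20)),
               rng.uniform(-6, 10, (20, 20))])            # last block mimics "noise" cells

k = 15
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nbrs.kneighbors(X)
neigh = [set(row[1:]) for row in idx]                      # drop self from each neighborhood

G = nx.Graph()
G.add_nodes_from(range(len(X)))
for i in range(len(X)):
    for j in neigh[i]:
        shared = len(neigh[i] & neigh[int(j)])
        if shared > 0:
            G.add_edge(i, int(j), weight=shared / k)       # SNN similarity

communities = greedy_modularity_communities(G, weight="weight")
label = {v: c for c, comm in enumerate(communities) for v in comm}

def attachment(v):
    """Fraction of a cell's SNN edge weight that stays inside its own community."""
    total = sum(d["weight"] for _, _, d in G.edges(v, data=True)) or 1e-9
    inside = sum(d["weight"] for _, u, d in G.edges(v, data=True) if label[u] == label[v])
    return inside / total

noisy = [v for v in G if attachment(v) < 0.3]
print("flagged as noise:", len(noisy))
```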

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 82
29993 Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data

Authors: LuoJiaoyang, Yu Hongyang

Abstract:

In this paper, we experiment with a new approach to multimodal identification using RGB, RGB-D, and voice data. The multimodal combination of RGB and voice data has been applied to tasks such as emotion recognition with good results and stability, and the same holds for identity recognition tasks. We believe that data from different modalities can enhance the model through mutual reinforcement. We therefore extend the dual-modality setup to three modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented single-modality identification systems separately, tested the data of these different modalities under clean and noisy conditions, and compared their performance with the multimodal model. In designing the multimodal model, we tried a variety of fusion strategies and chose the one with the best performance. The experimental results show that the multimodal system performs better than any single modality, especially in dealing with noise, and achieves an average improvement of 5%.

Keywords: multimodal, three modalities, RGB-D, identity verification

Procedia PDF Downloads 50
29992 Using Action Based Research to Examine the Effects of Co-Teaching on Middle School and High School Student Achievement in Math and Language Arts

Authors: Kathleen L. Seifert

Abstract:

Students with special needs are expected to achieve the same academic standards as their general education peers, yet many students with special needs are pulled out of general content instruction. Because of this, many students with special needs are denied content knowledge from a content expert and instead receive content instruction in a more restrictive setting. Collaborative teaching, where a general education teacher and a special education teacher work alongside each other in the same classroom, has become increasingly popular as a means to meet the diverse needs of students in America's public schools. The idea behind co-teaching is noble: to ensure students with special needs receive content area instruction from a content expert while also receiving the necessary supports to be successful. In spite of this noble effort, however, the effects of co-teaching are not always positive, and several hypotheses have been proposed as to why, one of which concerns the lack of proper training and implementation of effective evidence-based co-teaching practices. In order to examine the effects of co-teacher training, eleven teaching pairs from a small midwestern school district in the United States participated in a study. The purpose of the study was to examine the effects of co-teacher training on middle and high school student achievement in Math and Language Arts. A local university instructor provided the teachers with training in co-teaching via a three-day workshop. In addition, co-teaching pairs were given the opportunity for direct observation and feedback using the Co-teaching Core Competencies Observation Checklist throughout the academic year. Data are being collected on both the students enrolled in the co-taught classes and the teachers themselves. Student data compare achievement on standardized assessments and classroom performance across three domains: 1. general education students compared to students with special needs in co-taught classrooms; 2. students with special needs in classrooms with and without co-teaching; 3. students in classrooms where teachers were given observation and feedback compared to teachers who declined the observation and feedback. Teacher data compare the perceptions of the co-teaching initiative between teacher pairs who received direct observation and feedback and those who did not. The findings from the study will be shared with the school district and used for program improvement.

Keywords: collaborative teaching, collaboration, co-teaching, professional development

Procedia PDF Downloads 86
29991 Relationship between Effective Classroom Management with Students’ Academic Achievement of EFL of STKIP YPUP

Authors: Eny Syatriana

Abstract:

The purpose of this study is to identify effective instruction for classroom management, with a focus on organizing and managing effective learning environments, identifying the characteristics of effective lesson planning, and identifying resources and materials dealing with positive and effective classroom management. Knowing effective instructional management is one of the characteristics of a well-prepared teacher. The study was carried out in three randomly selected classes of STKIP YPUP in South Sulawesi. The design adopted for the study was a descriptive survey approach, and simple descriptive analysis was used. The major instruments used in this study were a student questionnaire and a teacher questionnaire; data were gathered with these instruments and analyzed, the research questions were investigated, and two hypotheses were duly tested using t-test statistics. Based on the findings of this research, it was concluded that effective classroom management skills and techniques have a strong, positive influence on student achievement.

Keywords: effective classroom management skills, students’ achievement, students academic, effective learning environments

Procedia PDF Downloads 304
29990 Integrating Explicit Instruction and Problem-Solving Approaches for Efficient Learning

Authors: Slava Kalyuga

Abstract:

There are two opposing major points of view on the optimal degree of initial instructional guidance that is usually discussed in the literature by the advocates of the corresponding learning approaches. Using unguided or minimally guided problem-solving tasks prior to explicit instruction has been suggested by productive failure and several other instructional theories, whereas an alternative approach - using fully guided worked examples followed by problem solving - has been demonstrated as the most effective strategy within the framework of cognitive load theory. An integrated approach discussed in this paper could combine the above frameworks within a broader theoretical perspective which would allow bringing together their best features and advantages in the design of learning tasks for STEM education. This paper represents a systematic review of the available empirical studies comparing the above alternative sequences of instructional methods to explore effects of several possible moderating factors. The paper concludes that different approaches and instructional sequences should coexist within complex learning environments. Selecting optimal sequences depends on such factors as specific goals of learner activities, types of knowledge to learn, levels of element interactivity (task complexity), and levels of learner prior knowledge. This paper offers an outline of a theoretical framework for the design of complex learning tasks in STEM education that would integrate explicit instruction and inquiry (exploratory, discovery) learning approaches in ways that depend on a set of defined specific factors.

Keywords: cognitive load, explicit instruction, exploratory learning, worked examples

Procedia PDF Downloads 100
29989 Preservice Science Teachers' Understanding of Equitable Assessment

Authors: Kemal Izci, Ahmet Oguz Akturk

Abstract:

Learning depends on cognitive and physical differences as well as other differences such as ethnicity, language, and culture, and these differences also influence how students show their learning. Assessment is an integral part of the learning and teaching process and is essential for effective instruction. In order to provide effective instruction, teachers need to provide equal assessment opportunities for all students, to see their learning difficulties, and to use them to modify instruction to aid learning. Successful assessment practices depend on the knowledge and values of teachers. Therefore, in order to use assessment to assess and support diverse students' learning, preservice and in-service teachers should hold an appropriate understanding of equitable assessment. To prepare teachers to support diverse student learning, as a first step, this study aims to explore how preservice teachers understand equitable assessment. A total of 105 preservice science teachers studying in a teacher preparation program at a large university in the eastern part of Turkey participated in the study. A questionnaire, preservice teachers' reflection papers, and interviews served as data sources. All collected data were qualitatively analyzed to develop themes that illustrate preservice science teachers' understanding of equitable assessment. The results showed that preservice teachers mostly emphasized fairness in equitable assessment, including fairness in grading and fairness in not asking questions outside the covered concepts. However, most preservice teachers did not show an understanding of equity as providing equal opportunities for all students to display their understanding of the related content. For some preservice teachers, providing different opportunities for some students (e.g., extra time for non-native speaking students) seems unfair to other students, and therefore such accommodations need not be used. The results illustrate that preservice science teachers mostly understand equitable assessment as fairness and place less emphasis on the role of equitable assessment in supporting all students' learning, which is more important for improving students' achievement in science. Therefore, we recommend that more opportunities be provided for preservice teachers to engage with a broader understanding of equitable assessment and to learn how to use equitable assessment practices to aid and support all students' learning through classroom assessment.

Keywords: science teaching, equitable assessment, assessment literacy, preservice science teachers

Procedia PDF Downloads 281
29988 An Empirical Study of the International Financial Reporting Standards Education in the United States

Authors: Angela McCaskill

Abstract:

Accounting graduates of most United States universities are not being adequately taught International Financial Reporting Standards (IFRS), and as a result they are not prepared with the knowledge and skills necessary to remain competitive in international business. One of the reasons behind this ill preparation is the lack of specific international accounting instruction available in the U.S. This paper explores the importance of IFRS education through the lens of graduate accounting majors, specifically their preparedness in IFRS based on their recent completion of a Master in Accountancy degree in which IFRS had been integrated. The data for the study were collected via face-to-face and telephone/Skype interviews and questionnaires. After the interview, the participants also agreed to answer two supplementary questions in which they were to determine the amounts that should be reported on the balance sheet under (1) IFRS and (2) U.S. GAAP; these questions were intended to test their knowledge of both sets of standards. The sample consisted of online and brick-and-mortar university students enrolled in their graduate program from the spring semester of 2016 to the summer semester of 2016. This study shows that a separate course should be devoted to teaching IFRS and convergence-related issues. There is a direct correlation between the knowledge level of students taking an IFRS course and the successful completion of the supplementary questions, compared to those who only had IFRS instruction mixed into their U.S. GAAP based instruction. Students who took an international accounting course were better prepared for the IFRS conversion than those who did not have a separate course. Academically, universities need to take a deeper look into the needs of their students and do better at incorporating international standards into their curricula.

Keywords: accounting education, global accounting standards, international accounting, IFRS and U.S. GAAP convergence, IFRS, U.S. GAAP

Procedia PDF Downloads 230
29987 The Rigor and Relevance of the Mathematics Component of the Teacher Education Programmes in Jamaica: An Evaluative Approach

Authors: Avalloy McCarthy-Curvin

Abstract:

For over fifty years there has been widespread dissatisfaction with the teaching of mathematics in Jamaica. Studies done in the Jamaican context highlight that teachers at the end of training do not have a deep understanding of the mathematics content they teach, yet little research has been done in this context to advance contextual knowledge of the problem and ultimately provide a solution. The aim of the study is to identify what influences this outcome of teacher education in Jamaica so as to remedy the problem. This study formatively evaluated the curriculum documents, assessments, and curriculum delivery used in teacher training institutions in Jamaica to determine their rigor (the extent to which the written documents, instruction, and assessments focus on enabling pre-service teachers to develop a deep understanding of mathematics) and their relevance (the extent to which the curriculum documents, instruction, and assessments focus on developing the requisite knowledge for teaching mathematics). The findings show that neither the curriculum documents, the instruction, nor the assessments ensure rigor or enable pre-service teachers to develop the knowledge and skills they need to teach mathematics effectively.

Keywords: relevance, rigor, deep understanding, formative evaluation

Procedia PDF Downloads 209
29986 Associated Map and Inter-Purchase Time Model for Multiple-Category Products

Authors: Ching-I Chen

Abstract:

The continued rise of e-commerce is the main driver of the rapid growth of global online purchasing. Consumers can buy nearly everything they want in a single session of online shopping, so purchase behavior models that focus on a single product category are insufficient to describe online shopping behavior, and analysis of multi-category purchases is becoming more and more popular. For example, market basket analysis explores customers' tendency to buy associated product categories together; the information derived from it facilitates cross-selling strategies and product recommendation systems. To detect the associations between different product categories, we use market basket analysis with the multidimensional scaling technique to build an associated map that describes how likely multiple product categories are to be bought at the same time. We also build an inter-purchase time model for associated products that describes how likely a product is to be bought after its associated product has been bought. We classify the inter-purchase time behaviors of multi-category products into nine types and use a mixture regression model to integrate these behaviors under our assumptions about purchase sequences. Our sample data come from comScore, which provides a panelist-level database capturing the detailed browsing and buying behavior of internet users across the United States. We find that the inter-purchase time from books to movies is shorter than the inter-purchase time from movies to books. Based on the model analysis and empirical results, the research finally proposes applications and recommendations for management.
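
A toy version of the associated map can be sketched by computing lift between category pairs from basket co-occurrence and embedding the resulting dissimilarities with multidimensional scaling; the baskets below are invented and the inter-purchase time model is not shown.

```python
import numpy as np
from sklearn.manifold import MDS

categories = ["books", "movies", "music", "games"]
baskets = [{"books", "movies"}, {"books", "movies", "music"}, {"music"},
           {"games", "movies"}, {"books"}, {"games", "music"}, {"movies", "music"}]

# Lift between category pairs from co-occurrence frequencies.
n = len(categories)
p = np.array([sum(c in b for b in baskets) / len(baskets) for c in categories])
lift = np.ones((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            joint = sum(categories[i] in b and categories[j] in b for b in baskets) / len(baskets)
            lift[i, j] = joint / (p[i] * p[j] + 1e-9)

dissimilarity = 1.0 / (1.0 + lift)        # stronger association -> smaller distance
np.fill_diagonal(dissimilarity, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
for c, (x, y) in zip(categories, coords):
    print(f"{c:>6}: ({x:+.2f}, {y:+.2f})")
```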

Keywords: multiple-category purchase behavior, inter-purchase time, market basket analysis, e-commerce

Procedia PDF Downloads 347
29985 English as a Medium of Instruction in Algerian Higher Business Degree Programmes

Authors: Sidi Ahmed Berrabah

Abstract:

English as a Medium of Instruction (EMI) is expanding rapidly around the world. A growing volume of research has been dedicated to investigating its introduction, with findings that describe a complex picture and suggest that the practicality and effectiveness of EMI are still subjects of debate. However, considerably less attention has been given to understanding EMI in contexts where its introduction has been discussed but not yet put into practice. One such context is Algeria, where discourses about a potential introduction of EMI have been going on for some time, and where the first courses in which EMI is introduced are likely to be Business degree programmes. This study examines the current discourses on and attitudes towards the potential implementation of EMI, as well as the language practices in Business degree programmes, in three Algerian universities. The research was conducted in three universities in three different regions of Algeria in order to include both 'centre' and 'periphery' Algerian universities. To achieve these aims, a mixed research paradigm was used: questionnaires, semi-structured interviews, and classroom observations were employed to gather data from three participant cohorts, namely university students of Business, lecturers of Business, and lecturers of English for specific purposes. The findings showed that students and lecturers of Business are in favour of the introduction of English instead of French or Standard Arabic as the medium of instruction, because English is seen as having internationalisation and instrumental benefits, while French is too closely linked to the colonial history of the country. The favourable attitudes towards EMI, however, seem to contrast with the daily classroom practices in the departments of Business studies, where students and lecturers make practical choices about using their language repertoires based on their linguistic backgrounds and skills. Classrooms in the three Algerian universities featured fluid and translanguaging practices that cannot be reduced to a monolingual EMI policy.

Keywords: EMI, Algerian universities, business degree programmes, translanguaging

Procedia PDF Downloads 176
29984 Correlation between Dynamic Knee Valgus with Isometric Hip Abductors Strength during Single-Leg Landing

Authors: Ahmed Fawzy, Khaled Ayad, Gh. M. Koura, W. Reda

Abstract:

The knee joint complex is one of the most commonly injured areas of the body in athletes. Excessive frontal plane knee excursion is considered a risk factor for multiple knee pathologies, such as anterior cruciate ligament and patellofemoral joint injuries; however, little is known about the biomechanical factors that contribute to this loading pattern. Objectives: The purpose of this study was to investigate whether there is a relationship between hip abductor isometric strength and the frontal plane projection angle (FPPA) during single-leg landing tasks in normal male subjects. Methods: One hundred male subjects who had been free from lower extremity injuries for at least six months participated in this study. Their mean age was 23.25 ± 2.88 years, mean weight 74.76 ± 13.54 kg, and mean height 174.23 ± 6.56 cm. The knee FPPA was measured with a digital video camera during a single-leg landing task. Hip abductor isometric strength was assessed with a portable hand-held dynamometer, and muscle strength was normalized to body weight to obtain more accurate measurements. Results: The results demonstrated no significant relationship between hip abductor isometric strength and the FPPA value during single-leg landing tasks in normal male subjects. Conclusion: It can be concluded that there is no relationship between hip abductor isometric strength and the FPPA value during functional activities in normal male subjects.

Keywords: 2-dimensional motion analysis, hip strength, kinematics, knee injuries

Procedia PDF Downloads 225