Search results for: count data
25208 Assessment of the Implementation of Recommended Teaching and Evaluation Methods of NCE Arabic Language Curriculum in Colleges of Education in North Western Nigeria
Authors: Hamzat Shittu Atunnise
Abstract:
This study on the Assessment of the Implementation of Recommended Teaching and Evaluation Methods of the Nigeria Certificate in Education (NCE) Arabic Language Curriculum in Colleges of Education in North Western Nigeria was conducted with four objectives, four research questions and four null hypotheses. A descriptive survey design was used and a multistage sampling procedure adopted. Frequency counts and percentages were used to answer the research questions, and chi-square was used to test all the null hypotheses at the 0.05 alpha level of significance. Two hundred and ninety-one subjects were drawn as the sample. Questionnaires were used for data collection. The Context, Input, Process and Product (CIPP) model of evaluation was employed. The study findings indicated that there was no significant difference in the perceptions of lecturers and students from Federal and State Colleges of Education on the following: the extent to which lecturers employ appropriate methods in teaching the language and the extent to which recommended evaluation methods are utilized for the implementation of the Arabic Curriculum. Based on these findings, it was recommended, among other things, that lecturers should adopt teaching methodologies that promote interactive learning; governments should ensure that information and communication technology facilities are made available and usable in all Colleges of Education; lecturers should vary their evaluation methods because other methods of evaluation can meet and surpass the level of learning and understanding which essay-type questions are believed to create; and that language labs should be used in teaching Arabic in Colleges of Education because comprehensive language learning is possible through both classroom and language lab teaching.
Keywords: assessment, arabic language, curriculum, methods of teaching, evaluation methods, NCE
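To make the reported analysis concrete, the sketch below shows an independence test of the kind described (perceptions by college type) using a chi-square statistic at the 0.05 alpha level. The contingency table is hypothetical, not the study's data, and Python with SciPy is assumed rather than whatever package the authors actually used.

```python
# Illustrative sketch (not the study's actual data): testing whether perceptions
# differ between Federal and State Colleges of Education with a chi-square test
# at the 0.05 alpha level, as described in the abstract.
from scipy.stats import chi2_contingency

# Hypothetical 2 x 3 contingency table: college type vs. perceived extent of
# use of recommended teaching methods (low / moderate / high).
observed = [
    [12, 40, 38],   # Federal Colleges of Education
    [15, 55, 51],   # State Colleges of Education
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject the null hypothesis: perceptions differ significantly.")
else:
    print("Fail to reject the null hypothesis: no significant difference.")
```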
Procedia PDF Downloads 59
25207 Strengthening Legal Protection of Personal Data through Technical Protection Regulation in Line with Human Rights
Authors: Tomy Prihananto, Damar Apri Sudarmadi
Abstract:
Indonesia recognizes the right to privacy as a human right. Indonesia provides legal protection for data management activities because the protection of personal data is a part of human rights. This paper aims to describe the arrangement of data management and data protection in Indonesia. This paper is descriptive research with a qualitative approach, collecting data from a literature study. The result of this paper is a comprehensive arrangement of data that has been set up as a technical requirement of data protection by encryption methods. Arrangements on encryption and on the protection of personal data are mutually reinforcing in the protection of personal data. Indonesia has two important and immediately enacted laws that provide protection for the privacy of information that is part of human rights.
Keywords: Indonesia, protection, personal data, privacy, human rights, encryption
Procedia PDF Downloads 182
25206 Quinoa Choux Cream Gluten Free
Authors: Autumporn Buranapongphan, Ketsirin Meethong, Phukan Pahaphom
Abstract:
The objective of this research is to study the standard formula of a choux cream recipe. The choux cream was formulated using gluten-free flour as a replacement for wheat flour in the choux dough and quinoa milk in the cream, and the shelf life of the product was assessed. The results of the acceptance test using 30 target consumers showed that the preferred choux dough was water 34%, egg 30%, flour 19%, butter 16% and baking powder 1%, and the preferred cream was milk 68%, sugar 13%, butter 6.8%, egg 4.5% and vanilla 0.9%. The gluten-free dough formulation was rice flour 12%, potato starch 26%, tapioca 7.7% and quinoa flour 4.3%. The ratio of corn flour at 40% had significant effects on liking of viscosity for the quinoa cream. During storage, the total viable count (TVC) was monitored for samples kept at room temperature for 8 hours and chilled for 18 hours.
Keywords: choux cream, gluten free, quinoa, dough
Procedia PDF Downloads 398
25205 Efficient Passenger Counting in Public Transport Based on Machine Learning
Authors: Chonlakorn Wiboonsiriruk, Ekachai Phaisangittisagul, Chadchai Srisurangkul, Itsuo Kumazawa
Abstract:
Public transportation is a crucial aspect of passenger transportation, with buses playing a vital role in the transportation service. Passenger counting is an essential tool for organizing and managing transportation services. However, manual counting is a tedious and time-consuming task, which is why computer vision algorithms are being utilized to make the process more efficient. In this study, different object detection algorithms combined with passenger tracking are investigated to compare passenger counting performance. The system employs the EfficientDet algorithm, which has demonstrated superior performance in terms of speed and accuracy. Our results show that the proposed system can accurately count passengers in varying conditions with an accuracy of 94%.
Keywords: computer vision, object detection, passenger counting, public transportation
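As a rough illustration of how detection-based counting works, the sketch below tracks a detected person's centroid across frames and increments a counter when it crosses a virtual door line. The detector is a hypothetical stub standing in for a model such as EfficientDet, and the single-track logic is simplified for readability; it is not the paper's system.

```python
# Minimal sketch of detection-plus-tracking passenger counting. The detector is
# a hypothetical stub returning (x, y, w, h) boxes so the logic is runnable.
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x, y, width, height

def detect_people(frame_index: int) -> List[Box]:
    """Hypothetical detector stub; a real system would run EfficientDet here."""
    # One person moving downward across the frames.
    return [(100.0, 50.0 + 20.0 * frame_index, 40.0, 80.0)]

def centroid(box: Box) -> Tuple[float, float]:
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def count_boardings(num_frames: int, door_line_y: float) -> int:
    """Count passengers whose tracked centroid crosses the door line downward."""
    count = 0
    previous_y = None
    for frame_index in range(num_frames):
        boxes = detect_people(frame_index)
        if not boxes:
            continue
        _, y = centroid(boxes[0])  # single-track toy example
        if previous_y is not None and previous_y < door_line_y <= y:
            count += 1
        previous_y = y
    return count

print(count_boardings(num_frames=10, door_line_y=150.0))
```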
Procedia PDF Downloads 154
25204 The Various Legal Dimensions of Genomic Data
Authors: Amy Gooden
Abstract:
When human genomic data is considered, this is often done through only one dimension of the law, or the interplay between the various dimensions is not considered, thus providing an incomplete picture of the legal framework. This research considers and analyzes the various dimensions in South African law applicable to genomic sequence data – including property rights, personality rights, and intellectual property rights. The effective use of personal genomic sequence data requires the acknowledgement and harmonization of the rights applicable to such data.
Keywords: artificial intelligence, data, law, genomics, rights
Procedia PDF Downloads 138
25203 Big Brain: A Single Database System for a Federated Data Warehouse Architecture
Authors: X. Gumara Rigol, I. Martínez de Apellaniz Anzuola, A. Garcia Serrano, A. Franzi Cros, O. Vidal Calbet, A. Al Maruf
Abstract:
Traditional federated architectures for data warehousing work well when corporations have existing regional data warehouses and there is a need to aggregate data at a global level. Schibsted Media Group has been maturing from a decentralised organisation into a more globalised one and needed to build some of the regional data warehouses for some brands at the same time as the global one. In this paper, we present the architectural alternatives studied and why a custom federated approach was the recommendation for taking the implementation further. Although the data warehouses are logically federated, the implementation uses a single database system, which presented many advantages: cost reduction, improved data access for global users, a common data model that lets consumers of the data perform detailed analysis across different geographies, and a flexible layer for local specific needs in the same place.
Keywords: data integration, data warehousing, federated architecture, Online Analytical Processing (OLAP)
Procedia PDF Downloads 236
25202 Regularity and Maximal Congruence in Transformation Semigroups with Fixed Sets
Authors: Chollawat Pookpienlert, Jintana Sanwong
Abstract:
An element a of a semigroup S is called left (right) regular if there exists x in S such that a=xa² (a=a²x) and is said to be intra-regular if there exist u, v in S such that a=ua²v. Let T(X) be the semigroup of all full transformations on a set X under the composition of maps. For a fixed nonempty subset Y of X, let Fix(X,Y) = {α ∈ T(X) : yα=y for all y ∈ Y}, where yα is the image of y under α. Then Fix(X,Y) is a semigroup of full transformations on X which fix all elements in Y. Here, we characterize the left regular, right regular and intra-regular elements of Fix(X,Y); the characterizations are as follows. For α ∈ Fix(X,Y), (i) α is left regular if and only if Xα\Y = Xα²\Y, (ii) α is right regular if and only if πα = πα², (iii) α is intra-regular if and only if |Xα\Y| = |Xα²\Y|, where Xα = {xα : x ∈ X} and πα = {xα⁻¹ : x ∈ Xα}, in which xα⁻¹ = {a ∈ X : aα=x}. Moreover, those regularities are equivalent if Xα\Y is a finite set. In addition, we count the number of those elements of Fix(X,Y) when X is a finite set. Finally, we determine the maximal congruence ρ on Fix(X,Y) when X is finite and Y is a nonempty proper subset of X. If we let |X\Y| = n, then we obtain that ρ = (Fixₙ × Fixₙ) ∪ (Hε × Hε), where Fixₙ = {α ∈ Fix(X,Y) : |Xα\Y| < n} and Hε is the group of units of Fix(X,Y). Furthermore, we show that the maximal congruence is unique.
Keywords: intra-regular, left regular, maximal congruence, right regular, transformation semigroup
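For readability, the main results above can be restated in standard notation; the display below is only a typeset restatement of what the abstract already says, with no new claims added.

```latex
% Regularity characterizations in Fix(X,Y), restated from the abstract,
% for alpha in Fix(X,Y):
\begin{align*}
\alpha \text{ is left regular}  &\iff X\alpha \setminus Y = X\alpha^{2} \setminus Y,\\
\alpha \text{ is right regular} &\iff \pi_{\alpha} = \pi_{\alpha^{2}},\\
\alpha \text{ is intra-regular} &\iff |X\alpha \setminus Y| = |X\alpha^{2} \setminus Y|,
\end{align*}
% where
$X\alpha = \{x\alpha : x \in X\}$, \quad
$\pi_{\alpha} = \{x\alpha^{-1} : x \in X\alpha\}$, \quad
$x\alpha^{-1} = \{a \in X : a\alpha = x\}.$
% For finite X with |X \ Y| = n, the unique maximal congruence is
\[
\rho = (\mathrm{Fix}_{n} \times \mathrm{Fix}_{n}) \cup (H_{\varepsilon} \times H_{\varepsilon}),
\qquad
\mathrm{Fix}_{n} = \{\alpha \in \mathrm{Fix}(X,Y) : |X\alpha \setminus Y| < n\},
\]
% with H_epsilon the group of units of Fix(X,Y).
```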
Procedia PDF Downloads 229
25201 Solid State Fermentation: A Technological Alternative for Enriching Bioavailability of Underutilized Crops
Authors: Vipin Bhandari, Anupama Singh, Kopal Gupta
Abstract:
Solid state fermentation, an eminent bioconversion technique for converting many biological substrates into value-added products, has proven its role in the biotransformation of crops by nutritionally enriching them. Hence, an effort was made for the nutritional enhancement of underutilized crops, viz. barnyard millet, amaranthus and horsegram, in a composite flour, using solid state fermentation (SSF) as the principle bioconversion technique to convert the composite flour substrate into a nutritionally enriched value-added product. The grains were given pre-treatments before fermentation, and these pre-treatments proved quite effective in diminishing the level of antinutrients in the grains and in improving their nutritional characteristics. Response surface methodology was used to design the experiments. The variables selected for the fermentation experiments were substrate particle size, substrate blend ratio, fermentation time, fermentation temperature and moisture content, each at three levels. Seventeen designed experiments were conducted randomly to find the effect of these variables on microbial count, reducing sugar, pH, total sugar, phytic acid and water absorption index. The data from all experiments were analyzed using Design Expert 8.0.6, the response functions were developed using multiple regression analysis, and second order models were fitted for each response. Results revealed that the pre-treatments proved quite helpful in diminishing the level of antinutrients and thus enhancing the nutritional value of the grains appreciably; for instance, there was about 23% reduction in phytic acid levels after decortication of barnyard millet. The carbohydrate content of the decorticated barnyard millet increased to 81.5% from an initial value of 65.2%. Similarly, popping of horsegram and puffing of amaranthus greatly reduced the trypsin inhibitor activity. Puffing of amaranthus also reduced the tannin content appreciably. Bacillus subtilis was used as the inoculating species since it is known to produce phytases in solid state fermentation systems. These phytases remarkably reduce the phytic acid content, which acts as a major antinutritional factor in food grains. Results of the solid state fermentation experiments revealed that phytic acid levels reduced appreciably when fermentation was allowed to continue for 72 hours at a temperature of 35°C. Particle size and substrate blend ratio also affected the responses positively. All the parameters, viz. substrate particle size, substrate blend ratio, fermentation time, fermentation temperature and moisture content, affected the responses, namely microbial count, reducing sugar, pH, total sugar, phytic acid and water absorption index, but the effect of fermentation time was found to be most significant on all the responses. Statistical analysis resulted in the optimum conditions (particle size 355µ, substrate blend ratio 50:20:30 of barnyard millet, amaranthus and horsegram respectively, fermentation time 68 hrs, fermentation temperature 35°C and moisture content 47%) for maximum reduction in phytic acid. The model F-value was found to be highly significant at the 1% level of significance for all the responses. Hence, a second order model could be fitted to predict all the dependent parameters. The effect of fermentation time was found to be most significant as compared to the other variables.
Keywords: composite flour, solid state fermentation, underutilized crops, cereals, fermentation technology, food processing
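The second-order response-surface fitting described above can be sketched as follows. The data are synthetic placeholders rather than the study's seventeen designed experiments, and scikit-learn is assumed as the fitting tool instead of Design Expert 8.0.6.

```python
# Minimal sketch of fitting a second-order (quadratic) response-surface model,
# in the spirit of the RSM analysis described above, on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Coded factor levels (-1, 0, +1) for, e.g., fermentation time and temperature.
X = rng.choice([-1.0, 0.0, 1.0], size=(17, 2))
# Hypothetical response (e.g., phytic acid reduction) with curvature and noise.
y = 5.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] - 0.5 * X[:, 0] ** 2 + rng.normal(0, 0.1, 17)

# Second-order model: linear, interaction and squared terms.
quadratic = PolynomialFeatures(degree=2, include_bias=False)
X_quad = quadratic.fit_transform(X)
model = LinearRegression().fit(X_quad, y)

print("terms:", quadratic.get_feature_names_out(["time", "temp"]))
print("coefficients:", np.round(model.coef_, 3))
print("R^2:", round(model.score(X_quad, y), 3))
```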
Procedia PDF Downloads 327
25200 Estimation of Carbon Losses in Rice: Wheat Cropping System of Punjab, Pakistan
Authors: Saeed Qaisrani
Abstract:
The study was conducted to observe carbon and nutrient loss by the burning of rice residues in a rice-wheat cropping system. The rice crop was harvested to conduct the experiment in a randomized complete block design (RCBD) with factors and 4 replications, with a net plot size of 10 m x 20 m. Rice stubbles were managed by two methods, i.e. incorporation and burning of rice residues. Soil samples were taken to a depth of 30 cm before sowing and after harvesting of wheat. Wheat was sown after harvesting of rice by three practices, i.e. conventional tillage, minimum tillage and zero tillage, to identify the best tillage practice. Laboratory and field experiments were conducted on wheat to assess the best tillage practice and residue management method, together with an estimation of carbon losses. Data on the following parameters were recorded to check wheat quality and ensure food security in the region: establishment count, plant height, spike length, number of grains per spike, biological yield, fat content, carbohydrate content, protein content, and harvest index. Soil physico-chemical analyses, i.e. pH, electrical conductivity, organic matter, nitrogen, phosphorus, potassium, and carbon, were done in the soil fertility laboratory. Substantial results were found on the growth, yield and related parameters of the wheat crop. The collected data were examined statistically with an economic analysis to estimate the cost-benefit ratio of using different tillage techniques and residue management practices. The results depicted that the zero tillage method has positive impacts on the growth, yield and quality of wheat; moreover, it is a cost-effective methodology. Similarly, incorporation is a suitable and beneficial method for the soil due to the greater provision of nutrients and the reduced need for fertilizers. Burning of rice stubbles has negative impacts, including air pollution, nutrient loss, death of soil microbes and carbon loss. Zero tillage technology is recommended to reduce carbon losses along with ensuring food security in Pakistan.
Keywords: agricultural agronomy, food security, carbon sequestration, rice-wheat cropping system
Procedia PDF Downloads 277
25199 A Review Paper on Data Mining and Genetic Algorithm
Authors: Sikander Singh Cheema, Jasmeen Kaur
Abstract:
In this paper, the concept of data mining is summarized and one of its important processes, i.e. Knowledge Discovery in Databases (KDD), is summarized. Data mining based on the Genetic Algorithm is reviewed, and ways to achieve data mining with the Genetic Algorithm are surveyed. This paper also conducts a formal review of the area of data mining tasks and genetic algorithms in various fields.
Keywords: data mining, KDD, genetic algorithm, descriptive mining, predictive mining
Procedia PDF Downloads 591
25198 Data-Mining Approach to Analyzing Industrial Process Information for Real-Time Monitoring
Authors: Seung-Lock Seo
Abstract:
This work presents a data-mining empirical monitoring scheme for industrial processes with partially unbalanced data. Measurement data of good operations are relatively easy to gather, but for unusual special events or faults it is generally difficult to collect process information, and it can be almost impossible to analyze some noisy data of industrial processes. In such cases, noise filtering techniques can be used to enhance process monitoring performance on a real-time basis. In addition, pre-processing of raw process data is helpful to eliminate unwanted variation in industrial process data. In this work, the performance of various monitoring schemes was tested and demonstrated for discrete batch process data. It showed that the monitoring performance was improved significantly in terms of the monitoring success rate for given process faults.
Keywords: data mining, process data, monitoring, safety, industrial processes
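One generic noise-filtering step of the kind mentioned above is an exponentially weighted moving average (EWMA) applied to each new measurement before it is checked against control limits. The sketch below illustrates that idea only; it is not the authors' exact scheme, and the parameter values and sensor data are placeholders.

```python
# Sketch of a simple noise-filtering step for real-time process monitoring:
# an EWMA smooths each raw measurement, and the filtered value is compared
# against asymptotic EWMA control limits for standardized data.
def ewma_monitor(measurements, lam=0.2, target=0.0, sigma=1.0, width=3.0):
    """Yield (filtered_value, out_of_control_flag) for each raw measurement."""
    z = target
    limit = width * sigma * (lam / (2.0 - lam)) ** 0.5
    for x in measurements:
        z = lam * x + (1.0 - lam) * z          # EWMA update
        yield z, abs(z - target) > limit       # flag unusual behaviour

raw = [0.1, -0.2, 0.0, 0.3, 1.8, 2.1, 2.4, 2.2]  # hypothetical sensor data
for value, alarm in ewma_monitor(raw):
    print(f"filtered={value:+.3f}  alarm={alarm}")
```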
Procedia PDF Downloads 401
25197 On Direct Matrix Factored Inversion via Broyden's Updates
Authors: Adel Mohsen
Abstract:
A direct method based on the good Broyden's updates for evaluating the inverse of a nonsingular square matrix of full rank and solving the related system of linear algebraic equations is studied. For a matrix A of order n whose LU-decomposition is A = LU, the multiplication count is O(n³). This includes the evaluation of the LU-decomposition of the inverse, the lower triangular decomposition of A, as well as a "reduced matrix inverse". If an explicit value of the inverse is not needed, the order reduces to O(n³/2) to compute inv(U) and the reduced inverse. For a symmetric matrix, only O(n³/3) operations are required to compute inv(L) and the reduced inverse. An example is presented to demonstrate the capability of using the reduced matrix inverse in treating ill-conditioned systems. Besides the simplicity of Broyden's update, the method provides a means to exploit the possible sparsity in the matrix and to derive a suitable preconditioner.
Keywords: Broyden's updates, matrix inverse, inverse factorization, solution of linear algebraic equations, ill-conditioned matrices, preconditioning
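Broyden-type updates are rank-one corrections, and the inverse of a rank-one-corrected matrix can be obtained cheaply via the Sherman-Morrison identity. The sketch below shows only that underlying machinery, checked against a direct inverse; it is not the paper's full factored-inversion algorithm, and the test matrix is arbitrary.

```python
# Sketch of the rank-one inverse update underlying Broyden-type methods:
# the Sherman-Morrison formula gives inv(A + u v^T) from a known inv(A)
# in O(n^2) work instead of a fresh O(n^3) inversion.
import numpy as np

def sherman_morrison(A_inv: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Return inv(A + u v^T), given A_inv = inv(A), via a rank-one correction."""
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au          # must be nonzero for the update to exist
    return A_inv - np.outer(Au, vA) / denom

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
u = rng.standard_normal(n)
v = rng.standard_normal(n)

A_inv = np.linalg.inv(A)
updated = sherman_morrison(A_inv, u, v)
direct = np.linalg.inv(A + np.outer(u, v))
print("max abs error:", np.abs(updated - direct).max())
```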
Procedia PDF Downloads 479
25196 Implementation of Inclusive Education in DepEd-Dasmarinas: Basis for Inclusion Program Framework
Authors: Manuela S. Tolentino, John G. Nepomuceno
Abstract:
The purpose of this investigation was to assess the implementation of inclusive education (IE) in 6 elementary and 5 secondary public schools in the City Schools Division of Dasmarinas. Participants in this study were 11 school heads, 73 teachers, 22 parents and 22 students (regular and with special needs) who were selected using purposive sampling. A 30-item questionnaire was used to gather data on the extent of the implementation of IE in the division, while focus group discussion (FGD) was used to gather insights on what facilitates and what hinders the implementation of the IE program. This study assessed the following variables: school culture and environment, inclusive education policy implementation, and curriculum design and practices. Data were analyzed using frequency count, mean and ranking. Results revealed that the participants gave similar assessments of the extent of the implementation of IE. School heads rated school culture and environment as highest in terms of implementation, while teachers and pupils chose curriculum design and practices. On the other hand, parents felt that inclusive education policies were implemented best. School culture and environment were given high ratings. Participants perceived that the IE program in the division makes everyone feel welcome regardless of age, sex, social status, or physical, mental and emotional state; students with or without disability are equally valued; and students help each other. However, some aspects of the IE program implementation were given low ratings, namely: partnership between staff, parents and caregivers; the school's effort to minimize discriminatory practice; and stakeholders sharing the philosophy of inclusion. As regards education policy implementation, the indicators with the highest ranks were the school's effort to admit students from the locality, especially students with special needs, and the implementation of the child protection policy and anti-bullying policy. The results of the FGD revealed that both school heads and teachers show a welcoming attitude towards accommodating students with special needs. This can be linked to the increasing enrolment of students with special needs (SNE) in the division. However, limitations in the teachers' knowledge of handling such learners, in facilities and in collaboration among stakeholders hinder the implementation of the IE program. Based on the findings, an inclusion program framework was developed for program enhancement. This will be the basis for the improvement of the program's efficiency, the relationship between stakeholders, and the formulation of solutions.
Keywords: inclusion, inclusive education, framework, special education
Procedia PDF Downloads 171
25195 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme
Authors: Shahram Jamali, Samira Hamed
Abstract:
One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior by using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm to tune RED's parameters and provide efficiency and better performance. Widespread packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization and dropped packet count.
Keywords: active queue management, RED, Markov model, random early detection algorithm
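For context, the classic RED mechanics that Markov-RED tunes are an exponentially weighted average queue length and a drop probability that rises linearly between two thresholds. The sketch below is a simplified illustration of standard RED (fixed parameters, no count-since-last-drop correction), not the proposed Markov-RED; all parameter values are placeholders.

```python
# Sketch of classic RED: average queue via exponential weighting and a drop
# probability rising linearly between min_th and max_th. Values are illustrative.
import random

def red_drop(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Return True if an arriving packet should be dropped."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

def simulate(arrivals, weight=0.002):
    avg = 0.0
    queue = 0
    dropped = 0
    for burst in arrivals:                  # packets arriving per time step
        for _ in range(burst):
            avg = (1.0 - weight) * avg + weight * queue
            if red_drop(avg):
                dropped += 1
            else:
                queue += 1
        queue = max(0, queue - 2)           # fixed service rate of 2 packets/step
    return dropped

random.seed(0)
print("dropped packets:", simulate([3] * 50 + [8] * 50))
```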
Procedia PDF Downloads 539
25194 A Survey of Semantic Integration Approaches in Bioinformatics
Authors: Chaimaa Messaoudi, Rachida Fissoune, Hassan Badir
Abstract:
Technological advances in computer science and data analysis are helping to provide continuously huge volumes of biological data, which are available on the web. Such advances involve and require powerful techniques for data integration to extract pertinent knowledge and information for a specific question. Biomedical exploration of these big data often requires the use of complex queries across multiple autonomous, heterogeneous and distributed data sources. Semantic integration is an active area of research in several disciplines, such as databases, information integration, and ontology. We provide a survey of some approaches and techniques for integrating biological data, focusing on those developed in the ontology community.
Keywords: biological ontology, linked data, semantic data integration, semantic web
Procedia PDF Downloads 449
25193 Incidence and Predictors of Mortality Among HIV Positive Children on ART in Public Hospitals of Harer Town, Enrolled From 2011 to 2021
Authors: Getahun Nigusie
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the treatment's long-term effect on the survival of HIV-positive children, especially in the study area. Objective: To assess the incidence and predictors of mortality among HIV-positive children on ART in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in the ART clinic from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards by using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate survival probability at specific points in time after the introduction of ART. The Kaplan-Meier survival curve together with the log-rank test was used to compare survival between different categories of covariates, and a multivariate Cox proportional hazards regression model was used to estimate adjusted hazard rates. Variables with p-values ≤0.25 in the bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30596 child-months) with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101–117). There were 38 children with unknown outcomes, 39 deaths, and 55 children transferred out to different facilities. The overall survival at 6, 12, 24 and 48 months was 98%, 96%, 95% and 94%, respectively. Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), baseline low absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), having TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening and management of those predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, Ethiopia
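The survival analyses described above (Kaplan-Meier estimates, log-rank comparison and adjusted hazard ratios from a Cox model) can be reproduced in outline with the lifelines package, as in the hedged sketch below. The small data frame is hypothetical, and there is no suggestion that the authors used this particular tool.

```python
# Sketch of Kaplan-Meier and Cox proportional-hazards analyses of the kind
# reported above, using lifelines on a tiny hypothetical data frame.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

data = pd.DataFrame({
    "months_followed": [12, 60, 34, 7, 110, 45, 80, 22, 96, 5],
    "died":            [0,  0,  1,  1, 0,   0,  1,  1,  0,  1],
    "who_stage_4":     [0,  1,  1,  0, 0,   0,  0,  1,  0,  1],
    "anemia":          [0,  1,  1,  0, 0,   1,  1,  0,  0,  1],
})

# Kaplan-Meier estimate of the survival function.
kmf = KaplanMeierFitter()
kmf.fit(durations=data["months_followed"], event_observed=data["died"])
print(kmf.survival_function_.tail())

# Cox model giving adjusted hazard ratios for candidate predictors.
cph = CoxPHFitter()
cph.fit(data, duration_col="months_followed", event_col="died")
cph.print_summary()
```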
Procedia PDF Downloads 22
25192 Classification of Generative Adversarial Network Generated Multivariate Time Series Data Featuring Transformer-Based Deep Learning Architecture
Authors: Thrivikraman Aswathi, S. Advaith
Abstract:
As there can be cases where the use of real data is somehow limited, such as when it is hard to get access to a large volume of real data, we need to go for synthetic data generation. This produces high-quality synthetic data while maintaining the statistical properties of a specific dataset. In the present work, a generative adversarial network (GAN) is trained to produce multivariate time series (MTS) data, since MTS data are now being gathered more often in various real-world systems. Furthermore, the GAN-generated MTS data are fed into a transformer-based deep learning architecture that carries out the data categorization into predefined classes. Further, the model is evaluated across various distinct domains by generating corresponding MTS data.
Keywords: GAN, transformer, classification, multivariate time series
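A transformer-based classifier for MTS data of the general kind described above can be sketched as follows. Hyperparameters, pooling choice and the synthetic batch are illustrative assumptions, not the paper's configuration, and PyTorch is assumed as the framework.

```python
# Minimal sketch of a transformer-based classifier for multivariate time series.
import torch
import torch.nn as nn

class MTSTransformerClassifier(nn.Module):
    def __init__(self, num_features: int, num_classes: int,
                 d_model: int = 64, nhead: int = 4, num_layers: int = 2):
        super().__init__()
        self.input_proj = nn.Linear(num_features, d_model)   # per-step embedding
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, num_features)
        h = self.encoder(self.input_proj(x))
        return self.head(h.mean(dim=1))        # pool over time, then classify

# Toy forward pass on a synthetic batch standing in for GAN-generated MTS data.
batch = torch.randn(8, 50, 6)                  # 8 series, 50 steps, 6 variables
model = MTSTransformerClassifier(num_features=6, num_classes=3)
logits = model(batch)
print(logits.shape)                            # torch.Size([8, 3])
```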
Procedia PDF Downloads 130
25191 Generative AI: A Comparison of Conditional Tabular Generative Adversarial Networks and Conditional Tabular Generative Adversarial Networks with Gaussian Copula in Generating Synthetic Data with Synthetic Data Vault
Authors: Lakshmi Prayaga, Chandra Prayaga, Aaron Wade, Gopi Shankar Mallu, Harsha Satya Pola
Abstract:
Synthetic data generated by Generative Adversarial Networks and Autoencoders is becoming more common to combat the problem of insufficient data for research purposes. However, generating synthetic data is a tedious task requiring an extensive mathematical and programming background. Open-source platforms such as the Synthetic Data Vault (SDV) and Mostly AI offer platforms that are user-friendly and accessible to non-technical professionals for generating synthetic data to augment existing data for further analysis. The SDV also provides for additions to the generic GAN, such as the Gaussian copula. We present the results from two synthetic data sets (CTGAN data and CTGAN with Gaussian Copula) generated by the SDV and report the findings. The results indicate that the ROC and AUC curves for the data generated by adding the layer of Gaussian copula are much higher than those for the data generated by CTGAN alone.
Keywords: synthetic data generation, generative adversarial networks, conditional tabular GAN, Gaussian copula
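Generating the two synthetic data sets compared above might look roughly like the sketch below. Class and method names follow the SDV 1.x single-table API as far as I know it (CTGANSynthesizer, CopulaGANSynthesizer, SingleTableMetadata) and may differ in other SDV versions; the CSV path is a placeholder.

```python
# Sketch of generating CTGAN and copula-augmented CTGAN samples with SDV.
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer, CopulaGANSynthesizer

real_data = pd.read_csv("real_data.csv")          # placeholder dataset

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real_data)

# Plain conditional tabular GAN.
ctgan = CTGANSynthesizer(metadata)
ctgan.fit(real_data)
ctgan_sample = ctgan.sample(num_rows=len(real_data))

# CTGAN combined with a Gaussian copula transformation of the marginals.
copula_gan = CopulaGANSynthesizer(metadata)
copula_gan.fit(real_data)
copula_sample = copula_gan.sample(num_rows=len(real_data))

# Both samples can then be fed to the same downstream classifier to compare
# ROC/AUC, as described in the abstract.
print(ctgan_sample.head(), copula_sample.head(), sep="\n")
```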
Procedia PDF Downloads 82
25190 The Application of Lesson Study Model in Writing Review Text in Junior High School
Authors: Sulastriningsih Djumingin
Abstract:
This study has several objectives. First, it aims at describing the ability of the second-grade students to write review text without applying the Lesson Study model at SMPN 18 Makassar. Second, it seeks to describe the ability of the second-grade students to write review text by applying the Lesson Study model at SMPN 18 Makassar. Third, it aims at testing the effectiveness of the Lesson Study model in writing review text at SMPN 18 Makassar. This research used a true experimental design with a posttest-only group design involving two groups, one control class and one experimental class. The research population comprised all the second-grade students at SMPN 18 Makassar, amounting to 250 students in 8 classes. The sampling technique was purposive sampling. The control class was VIII2, consisting of 30 students, while the experimental class was VIII8, consisting of 30 students. The research instruments were in the form of observation and tests. The collected data were analyzed using descriptive statistical techniques and inferential statistical techniques with t-tests, processed using SPSS 21 for Windows. The results show that: (1) of the 30 students in the control class, only 14 (47%) obtained a score of more than 7.5, categorized as inadequate; (2) in the experimental class, 26 (87%) students obtained a score of 7.5, categorized as adequate; (3) the Lesson Study model is effective to apply in writing review text. The comparison of the ability of the control class and the experimental class indicates that the value of t-count is greater than the value of t-table (2.411 > 1.667). This means that the alternative hypothesis (H1) proposed by the researcher is accepted.
Keywords: application, lesson study, review text, writing
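An independent-samples t-test of the kind reported above can be reproduced in a few lines; the study used SPSS 21, whereas the sketch below assumes SciPy and hypothetical score lists rather than the actual data.

```python
# Illustrative independent-samples t-test comparing experimental and control
# class writing scores; the lists below are hypothetical placeholders.
from scipy import stats

control_scores = [6.0, 7.0, 6.5, 7.5, 5.5, 7.0, 6.0, 6.5, 7.0, 5.0]
experimental_scores = [8.0, 7.5, 8.5, 9.0, 7.5, 8.0, 8.5, 7.0, 9.0, 8.0]

t_statistic, p_value = stats.ttest_ind(experimental_scores, control_scores)
print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")
# The study's decision rule: accept H1 when t-count exceeds t-table (2.411 > 1.667).
```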
Procedia PDF Downloads 202
25189 The Appropriate Number of Test Items That a Classroom-Based Reading Assessment Should Include: A Generalizability Analysis
Authors: Jui-Teng Liao
Abstract:
The selected-response (SR) format has been commonly adopted to assess academic reading in both formal and informal testing (i.e., standardized assessment and classroom assessment) because of its strengths in content validity, construct validity, as well as scoring objectivity and efficiency. When developing a second language (L2) reading test, researchers indicate that the longer the test (e.g., more test items) is, the higher the reliability and validity the test is likely to produce. However, previous studies have not provided specific guidelines regarding the optimal length of a test or the most suitable number of test items or reading passages. Additionally, reading tests often include different question types (e.g., factual, vocabulary, inferential) that require varying degrees of reading comprehension and cognitive processes. Therefore, it is important to investigate the impact of question types on the number of items in relation to the score reliability of L2 reading tests. Given the popularity of the SR question format and the impact of assessment results on teaching and learning, it is necessary to investigate the degree to which such a question format can reliably measure learners' L2 reading comprehension. The present study, therefore, adopted generalizability (G) theory to investigate the score reliability of the SR format in L2 reading tests, focusing on how many test items a reading test should include. Specifically, this study aimed to investigate the interaction between question types and the number of items, providing insights into the appropriate item count for different types of questions. G theory is a comprehensive statistical framework used for estimating the score reliability of tests and validating their results. Data were collected from 108 English as a second language students who completed an English reading test comprising factual, vocabulary, and inferential questions in the SR format. The computer program mGENOVA was utilized to analyze the data using multivariate designs (i.e., scenarios). Based on the results of the G theory analyses, the findings indicated that the number of test items had a critical impact on the score reliability of an L2 reading test. Furthermore, the findings revealed that different types of reading questions required varying numbers of test items for reliable assessment of learners' L2 reading proficiency. Further implications for teaching practice and classroom-based assessments are discussed.
Keywords: second language reading assessment, validity and reliability, generalizability theory, academic reading, question format
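The dependence of score reliability on item count is visible in the standard generalizability coefficient for a single-facet persons-by-items design, shown below as a hedged reference point; the study's own multivariate designs add question type as a further facet, so this formula is only the simplest case.

```latex
% Generalizability coefficient for a crossed persons-by-items (p x i) design:
% sigma^2_p is universe-score (person) variance, sigma^2_{pi,e} is the residual
% person-by-item variance, and n'_i is the number of items in the decision study.
E\rho^{2} \;=\; \frac{\sigma^{2}_{p}}{\sigma^{2}_{p} + \dfrac{\sigma^{2}_{pi,e}}{n'_{i}}}
```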
Procedia PDF Downloads 87
25188 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions
Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri
Abstract:
Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced by the MCNPX Monte Carlo code. Simulations were particularly accurate; even the generator's internal components were reproduced on the basis of ad hoc collected X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, comparing simulated and experimental data. In order to estimate the dose to the operator vs. the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent has been identified, taking into account both direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible - and in any case conservative - to convert the measured value of the count rate by means of the conversion factor corresponding to 14 MeV energy. This outcome has a general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in different situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.
Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers
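For reference, the conversion the abstract refers to takes the general form below; the numerical value of the coefficient at 14 MeV is not reproduced here and should be taken from the relevant tabulations.

```latex
% Fluence-to-ambient-dose-equivalent conversion: h*_Phi(E) is the
% energy-dependent conversion coefficient, here evaluated at 14 MeV as the
% abstract recommends when the local spectrum is unknown.
\dot{H}^{*}(10) \;=\; h^{*}_{\Phi}(E)\,\dot{\Phi}
```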
Procedia PDF Downloads 132
25187 A Privacy Protection Scheme Supporting Fuzzy Search for NDN Routing Cache Data Name
Authors: Feng Tao, Ma Jing, Guo Xian, Wang Jing
Abstract:
Named Data Networking (NDN) replaces the IP addresses of traditional networks with data names and adopts a dynamic cache mechanism. In the existing mechanism, however, only one-to-one search can be achieved because every data item has a unique name corresponding to it. There is a certain mapping relationship between data content and data name, so if the data name is intercepted by an adversary, the privacy of the data content and the user's interest can hardly be guaranteed. In order to solve this problem, this paper proposes a one-to-many fuzzy search scheme based on order-preserving encryption to reduce the query overhead by optimizing the caching strategy. In this scheme, we use hash values to keep the user's query safe from each node in the search process, as well as the privacy of the requested data content.
Keywords: NDN, order-preserving encryption, fuzzy search, privacy
Procedia PDF Downloads 484
25186 Healthcare Big Data Analytics Using Hadoop
Authors: Chellammal Surianarayanan
Abstract:
The healthcare industry is generating large amounts of data driven by various needs such as record keeping, physicians' prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratory, pharmacy, etc. Healthcare data are so big and complex that they cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data are accumulated and the different varieties, such as the structured, semi-structured and unstructured nature of the data. Despite the complexity of big data, if the trends and patterns that exist within the big data are uncovered and analyzed, higher quality healthcare at lower cost can be provided. Hadoop is an open source software framework for distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher level query language for MapReduce), HBase (a columnar data store), etc. In this paper, an analysis has been done of how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare
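A minimal MapReduce example over healthcare records might look like the Hadoop Streaming sketch below, which counts patient records per diagnosis code. The record layout, field positions and job invocation are illustrative assumptions, not taken from the paper.

```python
# Sketch of a Hadoop Streaming job in Python counting records per diagnosis code.
import sys
from itertools import groupby

def mapper(lines):
    """Emit (diagnosis_code, 1) for each record line 'patient_id,diagnosis,...'."""
    for line in lines:
        fields = line.rstrip("\n").split(",")
        if len(fields) >= 2:
            print(f"{fields[1]}\t1")

def reducer(lines):
    """Sum the counts for each diagnosis code (streaming input is sorted by key)."""
    parsed = (line.rstrip("\n").split("\t") for line in lines)
    for code, group in groupby(parsed, key=lambda kv: kv[0]):
        print(f"{code}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    # Used as the map and reduce stages of a streaming job, e.g. (illustrative):
    #   hadoop jar hadoop-streaming.jar -mapper "python3 job.py map" \
    #       -reducer "python3 job.py reduce" -input /ehr -output /counts
    mapper(sys.stdin) if sys.argv[1:] == ["map"] else reducer(sys.stdin)
```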
Procedia PDF Downloads 413
25185 The Theory of Number "0"
Authors: Iryna Shevchenko
Abstract:
The science of mathematics originated in the counting of objects and subsequently in the measurement of the size and quality of objects using logical or abstract means. The laws of mathematics are based on the study of absolute values. The number 0 or "nothing" is a purely logical (as opposed to absolute) value, as the "nothing" should always assume the space for the something that had existed there; otherwise the "something" would never come to existence. In this work we are going to prove that the number "0" is an abstract (logical) and not an absolute number and that it has the absolute value of "∞" (infinity). Therefore, the number "0" might not stand in the row of numbers that symbolically represents the absolute values, as that would be mathematically incorrect. The symbolic value of the number "0" in the row of numbers could be represented with the symbol "∞" (infinity). As a result, we have the mathematical row of numbers: epsilon, ...4, 3, 2, 1, ∞. As the conclusions of the theory of number "0", we present the statements that multiplication and division by fractions of numbers are illegal operations and that mathematical division by the number "0" is allowed.
Keywords: illegal operation of division and multiplication by fractions of number, infinity, mathematical row of numbers, theory of number "0"
Procedia PDF Downloads 552
25184 Composition, Abundance and Diversity of Zooplankton in Sarangani Bay, Sarangani Province, Philippines
Authors: Jeter Canete, Noreen Joyce Estrella, Yedda Sachi Patrice Madelo
Abstract:
Zooplankton play a crucial role in aquatic ecosystems, and a number of water parameters are involved in their dynamics. Despite their relevance, there is inadequate information about zooplankton communities in Sarangani Bay, Sarangani Province, one of the most essential waterbodies in Mindanao. The aim of the present study was to determine the composition, abundance, and diversity of zooplankton as well as to provide more recent data about the physico-chemical characteristics of Sarangani Bay. Zooplankton samples were collected by vertical hauls using a zooplankton net (mouth diameter: 0.5 m; mesh size opening: round, 350 μm) in three stations in the coastal waters of Alabel, Malapatan, and Maasim during November 2018. A total of 74 species of zooplankton belonging mainly to Kingdom Protozoa and Phyla Arthropoda, Chaetognatha, and Chordata were identified. Results showed a total zooplankton abundance of 1,984,166 ind/m³, with the highest count recorded at Malapatan (717,169 ind/m³) and the lowest at Maasim (624,411 ind/m³). Among the 22 zooplankton groups identified, the subclass Copepoda was found to be the most dominant (73.10%), followed by Appendicularia (12.18%) and Vertebrata (3.54%). Diversity analysis revealed an even distribution of species and a diverse ecosystem in all stations sampled. Correlation analysis indicated a strong relationship between zooplankton abundance and physico-chemical parameters. Overall, the physico-chemical profile of Sarangani Bay did not differ from the standards set by DENR, and analysis of the zooplankton communities revealed that Sarangani Bay favorably supports marine organisms to flourish. The findings of this study provide useful knowledge on zooplankton communities and can be used to create management strategies to protect the aquatic biodiversity in Sarangani Bay.
Keywords: aquatic biomonitoring, biodiversity, physicochemical analysis, population survey, Sarangani Bay, Sarangani Province, zooplankton
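Statements about even distribution and diversity typically rest on indices such as Shannon diversity and Pielou evenness, which the sketch below computes from a hypothetical station's count data; the counts are placeholders, not the study's values, and the specific indices used by the authors are not stated in the abstract.

```python
# Sketch of two common diversity measures computed from per-taxon counts:
# Shannon diversity (H') and Pielou evenness (J' = H' / ln(species richness)).
import math

def shannon_diversity(counts):
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

def pielou_evenness(counts):
    species_richness = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(species_richness)

station_counts = [5200, 3100, 880, 640, 400, 150, 90]   # individuals per taxon
print(f"H' = {shannon_diversity(station_counts):.3f}")
print(f"J' = {pielou_evenness(station_counts):.3f}")
```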
Procedia PDF Downloads 328
25183 Exploration of Various Metrics for Partitioning of Cellular Automata Units for Efficient Reconfiguration of Field Programmable Gate Arrays (FPGAs)
Authors: Peter Tabatt, Christian Siemers
Abstract:
Using FPGA devices to improve the behavior of time-critical parts of embedded systems has been a proven concept for years. With reconfigurable FPGA devices, the logical blocks can be partitioned and grouped into static and dynamic parts. The dynamic parts can be reloaded 'on demand' at runtime. This work uses cellular automata, which are constructed through compilation from (partially restricted) ANSI-C sources, to determine the suitability of various metrics for optimal partitioning. Significant metrics, in this case, are for example the area on the FPGA device for the partition, the pass count for loop constructs and the communication characteristics to other partitions. With successful partitioning, it is possible to use smaller FPGA devices for the same requirements as with non-reconfigurable FPGA devices or – vice versa – to use the same FPGAs for larger programs.
Keywords: reconfigurable FPGA, cellular automata, partitioning, metrics, parallel computing
Procedia PDF Downloads 272
25182 Analysis of Shrinkage Effect during Mercerization on Himalayan Nettle, Cotton and Cotton/Nettle Yarn Blends
Authors: Reena Aggarwal, Neha Kestwal
Abstract:
The Himalayan Nettle (Girardinia diversifolia) has been used for centuries as a fibre and food source by Himalayan communities. Himalayan Nettle is a natural cellulosic fibre that can be handled in the same way as other cellulosic fibres. The Uttarakhand Bamboo and Fibre Development Board, based in Uttarakhand, India, is working extensively with the nettle fibre to explore the potential of nettle for textile production in the region. The fibre is a potential resource for rural enterprise development in some high-altitude pockets of the state, and traditionally the plant fibre is used for making domestic products like ropes and sacks. Himalayan Nettle is an unconventional natural fibre with functional characteristics of shrink resistance and a degree of pathogen and fire resistance, and it can blend nicely with other fibres. Most importantly, nettle fibres generate mainly organic wastes and leave residues that are 100% biodegradable. The fabrics may potentially be reused or re-manufactured and can also be used as a source of cellulose feedstock for regenerated cellulosic products. Being naturally biodegradable, the fibre can be composted if required. Though a lot of research activities and training, such as retting and degumming processes, are directed towards fibre extraction and processing techniques for villagers of different craft clusters of Uttarkashi, Chamoli and Bageshwar in Uttarakhand, very little has been done to analyse the crucial properties of nettle fibre like shrinkage and wash fastness. These properties are very crucial to obtaining the desired quality of fibre for further processing into yarn and woven fabric and for developing these fibres into fine saleable products. This research is therefore focused on various field experiments on the shrinkage properties of cotton, nettle and cotton/nettle blended yarn samples, with the objective of assessing the scope of the blended fibre for development into wearable fabrics. For the study, after conducting the initial fibre length and fineness testing, cotton and nettle fibres were mixed in a 60:40 ratio and five varieties of yarns were spun in an open-end spinning mill with yarn counts of 3s, 5s, 6s, 7s and 8s. Samples of 100% nettle and 100% cotton fibres in 8s count were also developed for the study. All six varieties of yarns were subjected to shrinkage testing and the results were critically analyzed as per ASTM method D2259. It was observed that 100% nettle has the least shrinkage of 3.36%, while pure cotton has a shrinkage of approximately 13.6%. Yarns made of 100% cotton exhibit four times more shrinkage than 100% nettle. The results also show that cotton and nettle blended yarns exhibit lower shrinkage than 100% cotton yarn. It was thus concluded that as the ratio of nettle in the samples increases, the shrinkage decreases. These results are very crucial for the people of Uttarakhand who want to commercially exploit the abundant nettle fibre for generating sustainable employment.
Keywords: Himalayan nettle, sustainable, shrinkage, blending
Procedia PDF Downloads 240
25181 Data Disorders in Healthcare Organizations: Symptoms, Diagnoses, and Treatments
Authors: Zakieh Piri, Shahla Damanabi, Peyman Rezaii Hachesoo
Abstract:
Introduction: Healthcare organizations, like other organizations, suffer from a number of disorders such as Business Sponsor Disorder, Business Acceptance Disorder, Cultural/Political Disorder, Data Disorder, etc. As quality in healthcare mostly depends on the quality of data, we aimed to identify data disorders and their symptoms in two teaching hospitals. Methods: Using a self-constructed questionnaire, we asked 20 questions related to the quality and usability of patient data stored in patient records. The research population consisted of 150 managers, physicians, nurses and medical record staff who were working at the time of the study. We also asked their views about the symptoms of and treatments for any data disorders they mentioned in the questionnaire. We analyzed the answers using qualitative methods. Results: After classifying the answers, we found six main data disorders: incomplete data, missed data, late data, blurred data, manipulated data and illegible data. The majority of participants believed in their own important roles in the treatment of data disorders, while others believed in health system problems. Discussion: As clinicians have important roles in producing data, they can easily identify symptoms and disorders of patient data. Health information managers can also play important roles in the early detection of data disorders through proactive monitoring and periodic check-ups of data.
Keywords: data disorders, quality, healthcare, treatment
Procedia PDF Downloads 433
25180 Clinical Profile and Outcome of Type I Diabetes Mellitus at a Tertiary Care-Centre in Eastern Nepal
Authors: Gauri Shankar Shah
Abstract:
Objectives: Type I diabetes mellitus in children is frequently a missed diagnosis, and children present in the emergency department with diabetic ketoacidosis, which carries significant morbidity and mortality. The present study was done to find out the clinical presentation and outcome at a tertiary-care centre. Methods: This was a retrospective analysis of data on Type I diabetes mellitus reported at our centre during the last one year (2012-2013). Results: There were 12 patients (8 males) and the age group was 4-14 years (mean ± 3.7). The presenting symptoms were fever, vomiting, altered sensorium and fast breathing in 8 (66.6%), 6 (50%), 4 (33.3%), and 4 (33.3%) cases, respectively. The classical triad of polyuria, polydipsia, and polyphagia was present in only two patients (33.2%). Seizures and epigastric pain were found in two cases each (33.2%). Four cases (33.3%) presented with diabetic ketoacidosis due to discontinuation of insulin doses, while 2 had hyperglycemia alone. The hemogram revealed a mean hemoglobin of 12.1 ± 1.6 g/dL, and the total leukocyte count was 22,883.3 ± 10,345.9 per mm³, with a polymorph percentage of 73.1 ± 9.0%. The mean blood sugar at presentation was 740 ± 277 mg/dL (544–1240). HbA1c ranged between 7.1 and 8.8, with a mean of 8.1 ± 0.6%. The mean sodium, potassium, blood pH, pCO2, pO2 and bicarbonate were 140.8 ± 6.9 mEq/L, 4.4 ± 1.8 mEq/L, 7.0 ± 0.2, 20.2 ± 10.8 mmHg, 112.6 ± 46.5 mmHg and 9.2 ± 8.8 mEq/L, respectively. All the patients were managed in the pediatric intensive care unit as per our protocol, recovered, and were discharged on intermediate-acting insulin given twice daily. Conclusions: These patients have uncontrolled hyperglycemia and often present in the emergency department with ketoacidosis and a deranged biochemical profile. Regular administration of insulin, frequent monitoring of blood sugar and health education are required to achieve better metabolic control and a good quality of life.
Keywords: type I diabetes mellitus, hyperglycemia, outcome, glycemic control
Procedia PDF Downloads 254
25179 Big Data and Analytics in Higher Education: An Assessment of Its Status, Relevance and Future in the Republic of the Philippines
Authors: Byron Joseph A. Hallar, Annjeannette Alain D. Galang, Maria Visitacion N. Gumabay
Abstract:
One of the unique challenges provided by the twenty-first century to Philippine higher education is the utilization of Big Data. The higher education system in the Philippines is generating burgeoning amounts of data that contain relevant information which can be used to generate the knowledge needed for accurate data-driven decision making. This study examines the status, relevance and future of Big Data and Analytics in Philippine higher education. The insights gained from the study may be relevant to other developing nations similarly situated to the Philippines.
Keywords: big data, data analytics, higher education, republic of the philippines, assessment
Procedia PDF Downloads 348