Search results for: big data interpretation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25130

24590 Voices of the Students From a Fully Inclusive Classroom

Authors: Ashwini Tiwari

Abstract:

Introduction: Inclusive education for all is a multifaceted approach that requires systems thinking and the promotion of a "Culture of Inclusion." Such a culture can only be achieved through the collaboration of multiple stakeholders at the community, regional, state, national, and international levels. Researchers have found that effective practices used in inclusive general classrooms benefit all students: students with disabilities, those who experience academic and social challenges, and students without disabilities alike. To date, however, no statistically significant effects on the academic performance of students without disabilities in the presence of students with disabilities have been found. Opponents of inclusive education practices, whose objections rest solely on beliefs about detrimental effects on students without disabilities, therefore appear to hold unfounded perceptions. This qualitative case study examines students' perspectives and beliefs about inclusive education in a middle school in South Texas. More specifically, it examined students' understanding of how inclusive education practices intersect with the classroom community. The data were collected from students attending fully inclusive classrooms through interviews and focus groups. The findings suggest that peer integration and the friendships built during classes are an essential part of schooling for both disabled and non-disabled students. Research Methodology: This qualitative case study used observations and focus group interviews with 12 middle school students attending an inclusive classroom at a public school located in South Texas. The participants include eight females and five males, all of whom attend a fully inclusive middle school alongside peers with special needs. Five of the students had disabilities.
The focus groups and interviews were conducted over the entire academic year, with an average of one focus group and one observation each month. The data were analyzed using the constant comparative method: data from the focus groups and observations were continuously compared for emerging codes during the data collection process, and codes were further refined and merged. Themes emerged from the interpretation at the end of the data analysis process. Findings and discussion: This study examined disabled and non-disabled students' perspectives on the inclusion of disabled students. It revealed that non-disabled students generally have positive attitudes toward their disabled peers. The students in the study did not perceive inclusion as a special provision; rather, they perceived it as a way of instructional practice. Most participants spoke about the multiple benefits of inclusion, emphasizing that peer integration and the friendships built during classes are an essential part of their schooling. Students believed it was part of their responsibility to assist their peers in whatever ways possible. This finding is in line with the literature showing that the personality of children with disabilities is determined not by their disability but by their social environment and its interaction with the child. Interactions with peers are among the most important socio-cultural conditions for the development of children with disabilities.

Keywords: inclusion, special education, k-12 education, student voices

Procedia PDF Downloads 64
24589 Investigating Salafism and Its Founder

Authors: Vahid Hosseinzadeh

Abstract:

Salafism is a religious-intellectual movement born within Sunni Islam and the Hanbali school. Although many groups with different attitudes call themselves Salafis, they all share common characteristics, chief among them a radical and retrograde interpretation of Islamic sources. Taqi ad-Din Ahmad ibn Taymiyyah was the first thinker in the Muslim world to establish these ideas. The authors of this article first clarify the meaning of Salafism and its appellation in order to focus on the beliefs and thoughts of Ibn Taymiyyah. Using a descriptive-analytical method, they then extract the intellectual foundations of Ibn Taymiyyah from his own literary and scholarly works. An extreme focus on the literal appearance of Quranic phrases, and opposition to anything new that did not exist in the Qur'an, the Sunnah, or the first three centuries of Islam, are among the central features of his thought.

Keywords: Salafism, Ibn Taymiyyah, radical literalism, monotheism, polytheism, takfir

Procedia PDF Downloads 604
24588 Data Quality as a Pillar of Data-Driven Organizations: Exploring the Benefits of Data Mesh

Authors: Marc Bachelet, Abhijit Kumar Chatterjee, José Manuel Avila

Abstract:

Data quality is a key component of any data-driven organization. Without it, organizations cannot effectively make data-driven decisions, which often leads to poor business performance. It is therefore important for an organization to ensure that the data it uses are of high quality. This is where the concept of data mesh comes in. Data mesh is a decentralized organizational and architectural approach to data management that can help organizations improve the quality of their data. The concept of data mesh was first introduced in 2020. Its purpose is to decentralize data ownership, making it easier for domain experts to manage the data. This can improve data quality by reducing reliance on centralized data teams and allowing domain experts to take charge of their own data. This paper discusses how a set of elements, including data mesh, can serve as tools for increasing data quality. One key benefit of data mesh is improved metadata management. In a traditional data architecture, metadata management is typically centralized, which can lead to data silos and poor data quality. With data mesh, metadata is managed in a decentralized manner, ensuring accurate and up-to-date metadata and thereby improving data quality. Another benefit is the clarification of roles and responsibilities. In a traditional data architecture, data teams are responsible for managing all aspects of data, which can lead to confusion and ambiguity about responsibilities. With data mesh, domain experts are responsible for managing their own data, which provides clarity in roles and responsibilities and improves data quality. Additionally, data mesh can contribute to a new form of organization that is more agile and adaptable.
By decentralizing data ownership, organizations can respond more quickly to changes in their business environment, which in turn can improve overall performance through better insights enabled by better reports and visualization tools. Monitoring and analytics are also important aspects of data quality. With data mesh, monitoring and analytics are decentralized, allowing domain experts to monitor and analyze their own data. This helps identify and address data quality problems quickly, leading to improved data quality. Data culture is another major aspect of data quality. With data mesh, domain experts are encouraged to take ownership of their data, which can help create a data-driven culture within the organization, leading to improved data quality and better business outcomes. Finally, the paper explores the contribution of AI in the coming years. AI can enhance data quality by automating many data-related tasks, such as data cleaning and data validation. By integrating AI into a data mesh, organizations can further enhance the quality of their data. The concepts above are illustrated by experience feedback from AEKIDEN, an international data-driven consultancy that has successfully implemented a data mesh approach. By sharing its experience, AEKIDEN can help other organizations understand the benefits and challenges of implementing data mesh and improving data quality.
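The domain-ownership idea described above can be loosely illustrated in code. The following Python sketch (all names hypothetical, and a deliberate simplification of what a real data-mesh platform provides) shows a domain team publishing into a data product that enforces its own schema and quality checks at the domain boundary:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A domain-owned data product: data plus its own metadata and quality checks."""
    domain: str
    schema: dict                                  # column -> expected type
    quality_checks: list = field(default_factory=list)
    rows: list = field(default_factory=list)

    def publish(self, row: dict) -> bool:
        # Schema conformance and quality rules are enforced by the owning domain,
        # not by a central data team.
        ok = (set(row) == set(self.schema)
              and all(isinstance(row[c], t) for c, t in self.schema.items())
              and all(check(row) for check in self.quality_checks))
        if ok:
            self.rows.append(row)
        return ok

# A hypothetical sales domain declares its own schema and quality rules.
orders = DataProduct(
    domain="sales",
    schema={"order_id": str, "amount": float},
    quality_checks=[lambda r: r["amount"] >= 0.0],
)

print(orders.publish({"order_id": "A-1", "amount": 19.9}))   # → True (accepted)
print(orders.publish({"order_id": "A-2", "amount": -5.0}))   # → False (fails quality check)
```

The point of the sketch is only that validation logic travels with the data product, so bad records are rejected where the domain knowledge lives.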

Keywords: data culture, data-driven organization, data mesh, data quality for business success

Procedia PDF Downloads 119
24587 Evaluation of the Notifiable Diseases Surveillance System, South, Haiti, 2022

Authors: Djeamsly Salomon

Abstract:

Background: Epidemiological surveillance is a dynamic national system used to observe all aspects of the evolution of priority health problems through the collection, analysis, and systematic interpretation of information, and the dissemination of results with the necessary recommendations. The study was conducted to assess the mandatory disease surveillance system in the Sud Department. Methods: A study was conducted from March to May 2021 with the key actors involved in surveillance at the level of health institutions in the department. The CDC's 2021 updated guideline was used to evaluate the system. We collected information about the operation, attributes, and usefulness of the surveillance system using interviewer-administered questionnaires. Epi-Info 7.2 and Excel 2016 were used to generate means, frequencies, and proportions. Results: Of 30 participants, 23 (77%) were women. The average age was 39 years [30-56]. Twenty-five (83%) had training in epidemiological surveillance. Half (50%) of the forms checked were signed by the supervisor. Collection tools were available in 80% of cases. Knowledge of at least 7 notifiable diseases was universal (100%). Among the respondents, 29 declared that the collection tools were simple, and 27 had already filled in a notification form. The maximum time taken to fill out a form was 10 minutes. Feedback between the different levels occurred in 60% of cases. Conclusion: The surveillance system is useful, simple, acceptable, representative, flexible, stable, and responsive, and the data generated were of high quality. However, it is threatened by the lack of supervision of sentinel sites, the lack of investigation, and weak feedback. This evaluation demonstrated the urgent need to improve supervision of the sites and the feedback of information in order to strengthen epidemiological surveillance.

Keywords: evaluation, notifiable diseases, surveillance, system

Procedia PDF Downloads 65
24586 Big Data Analysis with RHadoop

Authors: Ji Eun Shin, Byung Ho Jung, Dong Hoon Lim

Abstract:

It is almost impossible to store or analyze big data, which grows exponentially, with traditional technologies; Hadoop is a technology that makes this possible. The R programming language is by far the most popular statistical tool, and RHadoop integrates R with the Hadoop environment for big data analysis based on distributed processing. Using RHadoop, we implemented parallel multiple regression analysis with actual data of different sizes. Experimental results showed that our RHadoop system became much faster as the number of data nodes increased. We also compared the performance of our RHadoop system with the lm function and the biglm package running on a big-memory machine. The results showed that RHadoop was faster than the other packages, owing to parallel processing in which the number of map tasks increases with the size of the data.
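The partition-and-combine structure behind map-reduce regression can be sketched compactly. The following stand-in (Python rather than R/RHadoop, and plain list chunks standing in for Hadoop map tasks; the data are synthetic) shows mappers emitting partial sums of the normal equations for a simple regression y = b0 + b1·x, and a reducer combining them into the global estimates:

```python
import random

def map_partition(part):
    """Mapper: partial sums of the normal equations for y = b0 + b1*x."""
    n = sx = sy = sxx = sxy = 0.0
    for x, y in part:
        n += 1; sx += x; sy += y; sxx += x * x; sxy += x * y
    return n, sx, sy, sxx, sxy

def reduce_sums(partials):
    """Reducer: add the partial sums and solve the 2x2 normal equations."""
    n, sx, sy, sxx, sxy = (sum(t) for t in zip(*partials))
    det = n * sxx - sx * sx
    b1 = (n * sxy - sx * sy) / det
    b0 = (sy - b1 * sx) / n
    return b0, b1

# Synthetic data from y = 2 + 3x plus small noise, split across 4 "map tasks".
rng = random.Random(42)
data = [(x, 2.0 + 3.0 * x + rng.gauss(0, 0.1)) for x in [i / 100 for i in range(1000)]]
parts = [data[i::4] for i in range(4)]
b0, b1 = reduce_sums([map_partition(p) for p in parts])
print(b0, b1)   # close to the true coefficients 2 and 3
```

Because the sufficient statistics are additive, the mappers can run on different nodes over different data chunks, which is the property the abstract's speed-up with more data nodes relies on.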

Keywords: big data, Hadoop, parallel regression analysis, R, RHadoop

Procedia PDF Downloads 421
24585 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

To address memorization overfitting in the meta-learning MAML algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutually exclusive (mutex) task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate mutex tasks. The experiments show that the method of generating mutually exclusive tasks effectively alleviates memorization overfitting in the MAML algorithm.
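The core idea of mapping one feature to multiple labels across tasks can be shown in a toy sketch (the text snippets and label scheme below are hypothetical, not from the paper): each generated task relabels the same inputs at random, so no single global feature-to-label mapping can be memorized.

```python
import random

def make_mutex_task(examples, n_classes, rng):
    """Assign each input a fresh random label, breaking any fixed feature->label map.

    Two tasks generated this way can map the same feature to different labels,
    so a meta-learner cannot memorize one global mapping and must instead
    adapt from each task's support set.
    """
    return [(x, rng.randrange(n_classes)) for x, _ in examples]

rng = random.Random(0)
base = [("good movie", 1), ("bad movie", 0), ("great plot", 1)]
task_a = make_mutex_task(base, 2, rng)
task_b = make_mutex_task(base, 2, rng)
print(task_a)
print(task_b)
```

A key-data extraction step, as the abstract notes, would apply this only to a selected subset rather than every example.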

Keywords: data augmentation, mutex task generation, meta-learning, text classification

Procedia PDF Downloads 80
24584 Efficient Positioning of Data Aggregation Point for Wireless Sensor Network

Authors: Sifat Rahman Ahona, Rifat Tasnim, Naima Hassan

Abstract:

Data aggregation is a helpful technique for reducing the data communication overhead in wireless sensor networks. One of the important tasks in data aggregation is the positioning of the aggregator points. Although a great deal of work has been done on data aggregation itself, efficient positioning of the aggregator points has received much less attention. In this paper, the authors focus on the positioning, or placement, of the aggregation points in a wireless sensor network, and propose an algorithm that selects aggregator positions for a scenario in which aggregator nodes are more powerful than sensor nodes.
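The abstract does not spell out the proposed algorithm, but the placement objective can be illustrated with a naive baseline (a sketch, not the authors' method): among candidate positions, pick the one minimizing the total sensor-to-aggregator distance, a rough proxy for the transmission energy sensor nodes must spend.

```python
from math import hypot

def best_aggregator_position(sensors, candidates):
    """Pick the candidate position minimizing total sensor-to-aggregator distance."""
    def total_dist(c):
        return sum(hypot(sx - c[0], sy - c[1]) for sx, sy in sensors)
    return min(candidates, key=total_dist)

# Four sensor nodes at the corners of a square; three candidate positions.
sensors = [(0, 0), (4, 0), (0, 4), (4, 4)]
candidates = [(0, 0), (2, 2), (4, 4)]
print(best_aggregator_position(sensors, candidates))  # → (2, 2)
```

A real placement scheme would also weigh residual energy, hop counts, and radio range, but the centrally located candidate winning here captures the intuition.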

Keywords: aggregation point, data communication, data aggregation, wireless sensor network

Procedia PDF Downloads 144
24583 Spatial Econometric Approaches for Count Data: An Overview and New Directions

Authors: Paula Simões, Isabel Natário

Abstract:

This paper reviews a number of theoretical aspects of implementing an explicit spatial perspective in econometrics for modelling non-continuous data in general, and count data in particular. It provides an overview of the several spatial econometric approaches available to model data that are collected with reference to location in space, from classical spatial econometrics to recent developments for modelling count data in a Bayesian hierarchical setting. Considerable attention is paid to the inferential framework necessary for structurally consistent spatial econometric count models incorporating spatial lag autocorrelation, to the corresponding estimation and testing procedures under different assumptions, and to the constraints and implications embedded in the various specifications in the literature. This review combines insights from the classical spatial econometrics literature with hierarchical modelling and analysis of spatial data, in order to identify possible new directions for the processing of count data in a spatial hierarchical Bayesian econometric context.
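The spatial count models surveyed above extend the plain (non-spatial) Poisson regression baseline. As a minimal sketch of that baseline only, the following fits y ~ Poisson(exp(b0 + b1·x)) by Newton-Raphson on toy deterministic counts (no spatial lag term; adding one is precisely where the structural-consistency issues discussed in the paper arise):

```python
from math import exp, log

def poisson_fit(xs, ys, iters=25):
    """Fit y ~ Poisson(exp(b0 + b1*x)) by Newton-Raphson on the log-likelihood."""
    b0, b1 = log(sum(ys) / len(ys)), 0.0       # start at the intercept-only MLE
    for _ in range(iters):
        # Score vector and 2x2 Fisher information for (b0, b1).
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = exp(b0 + b1 * x)
            g0 += y - mu
            g1 += (y - mu) * x
            h00 += mu
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Toy counts generated deterministically from b0 = 0.3, b1 = 0.5 (rounded means).
xs = [i / 10 for i in range(21)]
ys = [round(exp(0.3 + 0.5 * x)) for x in xs]
b0, b1 = poisson_fit(xs, ys)
print(b0, b1)   # close to 0.3 and 0.5
```

In the Bayesian hierarchical setting reviewed, this likelihood would sit at the bottom level, with spatially structured priors on the linear predictor above it.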

Keywords: spatial data analysis, spatial econometrics, Bayesian hierarchical models, count data

Procedia PDF Downloads 576
24582 A NoSQL-Based Approach for Real-Time Management of Robotics Data

Authors: Gueidi Afef, Gharsellaoui Hamza, Ben Ahmed Samir

Abstract:

This paper deals with the continual growth of data, out of which new data management solutions have emerged: NoSQL databases. These have spread across several areas, such as personalization, profile management, real-time big data, content management, catalogs, customer views, mobile applications, the Internet of Things, digital communication, and fraud detection. Such database management systems are increasingly widespread. They store data very well, but with the trend of big data, new storage challenges demand new structures and methods for managing enterprise data. Intelligent machines, such as those in the e-learning sector, thrive on more data: the more data available, the more and faster they can learn. Robotics is the use case on which we focus our tests. Implementing NoSQL for robotics wrestles all the acquired data into usable form, because with ordinary approaches to robotics we face severe limits in managing and finding the exact information in real time. Our proposed approach is demonstrated through experimental studies and a running example used as a use case.
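Why a document-oriented (NoSQL-style) store suits heterogeneous robot data can be shown with a toy in-memory sketch (illustrative only, not a real NoSQL engine and not the system from the paper): readings from different sensors need not share a schema, yet can still be queried by field.

```python
import time

class DocumentStore:
    """A tiny in-memory, schema-less store in the spirit of NoSQL document databases."""
    def __init__(self):
        self.docs = []

    def insert(self, doc: dict):
        doc.setdefault("ts", time.time())   # heterogeneous docs share only a timestamp
        self.docs.append(doc)

    def find(self, **conditions):
        """Return documents whose fields match all given equality conditions."""
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in conditions.items())]

store = DocumentStore()
# Robot readings need not share a schema: lidar and battery documents differ.
store.insert({"robot": "r1", "sensor": "lidar", "ranges": [1.2, 0.8]})
store.insert({"robot": "r1", "sensor": "battery", "level": 0.74})
store.insert({"robot": "r2", "sensor": "battery", "level": 0.51})

print([d["robot"] for d in store.find(sensor="battery")])  # → ['r1', 'r2']
```

In a relational design, each new sensor type would force a schema change; here, new document shapes can be inserted and queried immediately, which is the flexibility the abstract appeals to for real-time robotics data.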

Keywords: NoSQL databases, database management systems, robotics, big data

Procedia PDF Downloads 333
24581 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different information about the data; therefore, linking multi-source data is essential to improve clustering performance. In practice, however, multi-source data are often heterogeneous, uncertain, and large, which constitutes a major challenge of multi-source analysis. Ensembles are versatile machine learning models in which learning techniques can work in parallel on big data, and clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy-optimized multi-objective clustering ensemble method, called FOMOCE. Firstly, a clustering ensemble mathematical model is introduced, based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge. Then, rules for extracting dark knowledge from the input data, the clustering algorithms, and the base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.
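FOMOCE itself is fuzzy and multi-objective, which is beyond a short sketch; but the basic clustering-ensemble mechanism it builds on can be illustrated with the classical co-association consensus (a simpler, crisp technique, shown here with synthetic base clusterings, e.g. one per data source):

```python
from itertools import combinations

def co_association(clusterings, n):
    """Fraction of base clusterings that put each pair of points in the same cluster."""
    counts = {}
    for labels in clusterings:
        for i, j in combinations(range(n), 2):
            if labels[i] == labels[j]:
                counts[(i, j)] = counts.get((i, j), 0) + 1
    m = len(clusterings)
    return {pair: c / m for pair, c in counts.items()}

def consensus(clusterings, n, threshold=0.5):
    """Union-find merge of points that co-occur in more than `threshold` of clusterings."""
    ca = co_association(clusterings, n)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (i, j), w in ca.items():
        if w > threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Three base clusterings of 6 points, disagreeing slightly on point 2.
base = [[0, 0, 0, 1, 1, 1],
        [0, 0, 0, 1, 1, 1],
        [0, 0, 1, 1, 1, 1]]
labels = consensus(base, 6)
print(labels)   # two consensus groups: {0, 1, 2} and {3, 4, 5}
```

A fuzzy multi-objective version would replace the hard majority vote with membership degrees and optimize several clustering criteria simultaneously, as the paper describes.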

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 167
24580 Emotional Awareness and Working Memory as Predictive Factors for the Habitual Use of Cognitive Reappraisal among Adolescents

Authors: Yuri Kitahara

Abstract:

Background: Cognitive reappraisal refers to an emotion regulation strategy in which one changes the interpretation of emotion-eliciting events. Numerous studies show that cognitive reappraisal is associated with mental health and better social functioning. However, the examination of the predictive factors of adaptive emotion regulation remains an open issue. The present study examined the factors contributing to the habitual use of cognitive reappraisal, with a focus on emotional awareness and working memory. Methods: Data were collected from 30 junior high school students, using a Japanese version of the Emotion Regulation Questionnaire (ERQ), the Levels of Emotional Awareness Scale for Children (LEAS-C), and an N-back task. Results: A positive correlation between emotional awareness and cognitive reappraisal was observed in the high-working-memory group (r = .54, p < .05), whereas no significant relationship was found in the low-working-memory group. In addition, an analysis of variance (ANOVA) showed a significant interaction between emotional awareness and working memory capacity (F(1, 26) = 7.74, p < .05). Subsequent analysis of simple main effects confirmed that high working memory capacity significantly increases the use of cognitive reappraisal for high-emotional-awareness subjects and significantly decreases it for low-emotional-awareness subjects. Discussion: These results indicate that when one has adequate capacity for the simultaneous processing of information, an explicit understanding of emotion contributes to adaptive cognitive emotion regulation. The findings are discussed alongside neuroscientific claims.

Keywords: cognitive reappraisal, emotional awareness, emotion regulation, working memory

Procedia PDF Downloads 213
24579 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data

Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim

Abstract:

Smart-card data are expected to provide information on activity patterns as an alternative to conventional person-trip surveys. The focus of this study is to propose a method that trains on person-trip survey data to supplement smart-card data, which do not contain the purpose of each trip. We selected only features available from smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, to learn from the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution with good performance. The validation results showed that the proposed method efficiently estimated the trip purpose; GIS data on the station and the duration of stay at the destination were significant features in modelling trip purpose.
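XGBoost itself is a large library, but the boosted-ensemble-of-trees idea it refines can be sketched with a much simpler stand-in: AdaBoost over decision stumps (explicitly not XGBoost's regularized gradient boosting), applied here to hypothetical trip features of the kind the study uses, duration of stay and arrival hour, to infer a binary trip purpose.

```python
from math import log, exp

def train_stumps(X, y, rounds=5):
    """AdaBoost with one-split decision stumps: a minimal stand-in for tree boosting."""
    n, d = len(X), len(X[0])
    w = [1.0 / n] * n
    ensemble = []                                 # (feature, threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for f in range(d):                        # exhaustive stump search
            for t in sorted({x[f] for x in X}):
                for pol in (1, -1):
                    preds = [pol if x[f] > t else -pol for x in X]
                    err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                    if best is None or err < best[0]:
                        best = (err, f, t, pol, preds)
        err, f, t, pol, preds = best
        alpha = 0.5 * log((1 - err + 1e-10) / (err + 1e-10))
        ensemble.append((f, t, pol, alpha))
        w = [wi * exp(-alpha * p * yi) for wi, p, yi in zip(w, preds, y)]
        s = sum(w)
        w = [wi / s for wi in w]                  # reweight toward misclassified trips
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x[f] > t else -pol) for f, t, pol, a in ensemble)
    return 1 if score > 0 else -1

# Hypothetical trips: (stay duration in hours, arrival hour); +1 = commute, -1 = other.
X = [(8.5, 8), (9.0, 9), (7.5, 8), (1.0, 14), (2.0, 19), (0.5, 12)]
y = [1, 1, 1, -1, -1, -1]
model = train_stumps(X, y, rounds=3)
print([predict(model, x) for x in X])  # → [1, 1, 1, -1, -1, -1]
```

Notice that the ensemble latches onto duration of stay, mirroring the study's finding that it is a significant feature for trip purpose; XGBoost adds regularized, gradient-based tree growing on top of this additive-ensemble idea.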

Keywords: activity pattern, data fusion, smart-card, XGBoost

Procedia PDF Downloads 227
24578 Emulation Model in Architectural Education

Authors: Ö. Şenyiğit, A. Çolak

Abstract:

In architectural education, it is of great importance for students to know the parameters through which they can conduct their designs and make them effective. Therefore, an empirical application study was carried out on the design activity, using the emulation model to support the designs and design approaches of architecture students. During the investigation period, studies were done on the basic design elements and principles of the fall semester, and the emulation model, one of the design methods that constitute the subject of the study, was structured in three phases: recognition, interpretation, and application. As a result of the study, it was observed that when students were given a key method during the design process, their awareness increased and their designs improved as well.

Keywords: basic design, design education, design methods, emulation

Procedia PDF Downloads 217
24577 A Mutually Exclusive Task Generation Method Based on Data Augmentation

Authors: Haojie Wang, Xun Li, Rui Yin

Abstract:

To address memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutually exclusive (mutex) task by mapping one feature of the data to multiple labels, so that the generated mutex task is inconsistent with the data distribution of the initial dataset. Because generating mutex tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate mutex tasks. The experiments show that the method of generating mutually exclusive tasks effectively alleviates memorization overfitting in the MAML algorithm.

Keywords: mutex task generation, data augmentation, meta-learning, text classification

Procedia PDF Downloads 120
24576 Revolutionizing Traditional Farming Using Big Data/Cloud Computing: A Review on Vertical Farming

Authors: Milind Chaudhari, Suhail Balasinor

Abstract:

Due to massive deforestation and an ever-increasing population, the organic content of the soil is depleting at a much faster rate. As a result, there is a real risk that worldwide food production will drop by 40% in the next two decades. Vertical farming can help sustain food production by leveraging big data and cloud computing to ensure plants are grown naturally, providing the optimum nutrients and sunlight determined by analyzing millions of data points. This paper outlines the most important parameters in vertical farming and how a combination of big data and AI helps in calculating and analyzing these millions of data points. Finally, the paper outlines how different organizations are controlling the indoor environment by leveraging big data to enhance food quantity and quality.

Keywords: big data, IoT, vertical farming, indoor farming

Procedia PDF Downloads 161
24575 A Comparative Study of Medical Image Segmentation Methods for Tumor Detection

Authors: Mayssa Bensalah, Atef Boujelben, Mouna Baklouti, Mohamed Abid

Abstract:

Image segmentation plays a fundamental role in analysis and interpretation for many applications, and the automated segmentation of organs and tissues throughout the body using computed imaging has been increasing rapidly. Indeed, it represents one of the most important parts of clinical diagnostic tools. In this paper, we present a thorough literature review of recent methods for tumor segmentation from medical images, briefly explaining the recent contributions of various researchers. We then compare these methods in order to define new directions for developing and improving the performance of tumor-area segmentation in medical images.
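Among the families of methods such a review covers, intensity thresholding is the classical baseline. As a self-contained illustration (a sketch on a synthetic 1-D pixel list, not a clinical method), Otsu's algorithm picks the threshold that maximizes the between-class variance of the intensity histogram:

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(256):
        w0 += hist[t]                 # pixels at or below t (class 0)
        cum += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = cum / w0                 # class means on each side of t
        m1 = (total_sum - cum) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Synthetic "scan": dark background (~30) with a small bright "tumor" region (~200).
pixels = [30] * 900 + [35] * 50 + [200] * 40 + [210] * 10
t = otsu_threshold(pixels)
mask = [p > t for p in pixels]
print(t, sum(mask))  # → 35 50  (threshold between the modes; 50 pixels flagged)
```

Real tumor segmentation pipelines layer region growing, deformable models, or learned features on top of (or in place of) such global thresholds, which is exactly the progression a comparative review traces.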

Keywords: features extraction, image segmentation, medical images, tumor detection

Procedia PDF Downloads 153
24574 Data Challenges Facing Implementation of Road Safety Management Systems in Egypt

Authors: A. Anis, W. Bekheet, A. El Hakim

Abstract:

Implementing a Road Safety Management System (SMS) in a crowded developing country such as Egypt is a necessity. Establishing a sustainable SMS requires a comprehensive, reliable data system for all information pertinent to road crashes. This paper surveys the data available in Egypt and validates them for use in an SMS. The research provides some missing data and points to the data that remain unavailable in Egypt, looking forward to the contribution of the scientific community, the authorities, and the public in solving the problem of missing or unreliable crash data. The data required for implementing an SMS in Egypt fall into three categories. The first is available data, such as fatality and injury rates, which this research shows may be inconsistent and unreliable. The second is data that are not available but may be estimated; an example of estimating vehicle cost is given in this research. The third is data that are not available and must be measured case by case, such as the functional and geometric properties of a facility. Some inquiries are posed to the scientific community, such as how to improve the links among road safety stakeholders in order to obtain a consistent, unbiased, and reliable data system.

Keywords: road safety management system, road crash, road fatality, road injury

Procedia PDF Downloads 112
24573 Hot Forging Process Simulation of Outer Tie Rod to Reduce Forming Load

Authors: Kyo Jin An, Bukyo Seo, Young-Chul Park

Abstract:

The current trend in the car market is an increase in the number of automobile parts and in vehicle weight, driven by improvements in vehicle performance. The outer tie rod is a component of the steering system, and although it is lighter than other parts, further weight reduction is still required to improve fuel economy. We have therefore presented a model of an aluminum outer tie rod, but its fabrication process must be verified before the product can be manufactured. In this study, we predicted the forming load, die stress, and abrasion in the hot forging process of the outer tie rod using forging simulation software. We also carried out a design of experiments using orthogonal array tables to reduce the forming load.
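The orthogonal-array screening step can be sketched generically. The following uses the standard Taguchi L9(3^4) array with a purely hypothetical forming-load function standing in for the forging simulations (the paper's actual factors and responses are not given here); for each factor, the level with the lowest mean response is selected:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each (coded 1..3).
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

def forming_load(run):
    """Hypothetical simulated forming load for one run (stand-in for FE results)."""
    a, b, c, d = run
    return 100 - 8 * a + 5 * b - 2 * c + 1 * d

def best_levels(array, response, n_factors=4, levels=(1, 2, 3)):
    """Per factor, pick the level with the lowest mean response (smaller-is-better)."""
    loads = [response(run) for run in array]
    choice = []
    for f in range(n_factors):
        mean = {lvl: sum(ld for run, ld in zip(array, loads) if run[f] == lvl) / 3
                for lvl in levels}
        choice.append(min(mean, key=mean.get))
    return choice

print(best_levels(L9, forming_load))  # → [3, 1, 3, 1]
```

Because the array is orthogonal, each factor level appears equally often against balanced settings of the other factors, so 9 runs suffice to screen 3^4 = 81 combinations, which is why such tables are attractive when each run is an expensive forging simulation.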

Keywords: forming load, hot forging, orthogonal array, outer tie rod (OTR), multi–step forging

Procedia PDF Downloads 421
24572 Big Data-Driven Smart Policing: Big Data-Based Patrol Car Dispatching in Abu Dhabi, UAE

Authors: Oualid Walid Ben Ali

Abstract:

Big Data has become one of the buzzwords of today. The recent explosion of digital data has led organizations, private and public alike, into a new era of more efficient decision making. At some point, businesses decided to use the concept to learn what makes their clients tick, with phrases like ‘sales funnel’ analysis, ‘actionable insights’, and ‘positive business impact’. So, it stands to reason that Big Data was viewed through green (read: money) colored lenses. Somewhere along the line, however, someone realized that collecting and processing data doesn’t have to serve business purposes only; it can also assist law enforcement, improve policing, or enhance road safety. This paper presents, briefly, how Big Data has been used in the field of policing to improve decision making in daily police operations. As an example, we present a big-data-driven system which is used to dispatch patrol cars accurately in a geographic environment. The system is also used to allocate, in real time, the nearest patrol car to the location of an incident. This system has been implemented and applied in the Emirate of Abu Dhabi in the UAE.
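The nearest-car allocation step can be sketched in a few lines (a toy sketch with made-up coordinates, not the deployed system): compute the great-circle distance from each available car to the incident and dispatch the minimizer.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def dispatch(incident, cars):
    """Allocate the nearest available patrol car to the incident location."""
    return min(cars, key=lambda c: haversine_km(cars[c], incident))

# Hypothetical patrol car positions around Abu Dhabi (lat, lon).
cars = {"P-101": (24.45, 54.38), "P-102": (24.50, 54.40), "P-103": (24.35, 54.55)}
incident = (24.49, 54.41)
print(dispatch(incident, cars))  # → P-102
```

A production GIS-backed system would use road-network travel times rather than straight-line distance and track car availability in real time, but the straight-line version already captures the allocation logic.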

Keywords: big data, big data analytics, patrol car allocation, dispatching, GIS, intelligent, Abu Dhabi, police, UAE

Procedia PDF Downloads 475
24571 Potential Contribution of Blue Oceans for Growth of Universities: Case of Faculties of Agriculture in Public Universities in Zimbabwe

Authors: Wonder Ngezimana, Benjamin Alex Madzivire

Abstract:

As new public universities are being applauded for their promulgation in Zimbabwe, there is a need for a comprehensive plan to ensure sustainable competitive advantage in their mandated niche areas. Unhealthy competition between university faculties for enrolment hinders the growth of newly established university faculties, especially in agriculture-related disciplines. The blue ocean metaphor is based on the creation of competitor-free markets, unlike 'red oceans', which are well explored and crowded with competitors. This study explores the potential contribution of blue ocean strategy (BOS) to the growth of universities, with a bias towards faculties of agriculture in public universities in Zimbabwe. Case studies with agriculture-related disciplines were selected across three universities for interviewing. Data were collected through 10 open-ended questions put to academics in different management positions within university faculties of agriculture. Summative analysis was then used for the coding and interpretation of the data. The study findings show that there are several important elements for making offerings more comprehensible and thereby fostering faculty growth and performance, particularly in student enrolment. The results point towards BOS-style value innovation, with various elements to consider in faculty offerings. To create value innovation beyond the red oceans, the cases in this study have to be modelled to foster changes in enrolment, modes of delivery, and certification, and to be research oriented with excellence in teaching, ethics, service to the community, and entrepreneurship. There is, therefore, a need to rethink strategy towards reshaping inclusive enrolment, industry relevance, affiliations, lifelong learning, sustainable student welfare, ubuntu, exchange programmes, research excellence, alumni support, and entrepreneurship.
Innovative strategic collaborations and partnerships, anchored on technology, boost the strategic offerings, hence leveraging the various offerings in this study. Areas of further study include the amplitude of blue oceans shown in university faculty offerings and implementation strategies for BOS.

Keywords: blue oceans strategy, collaborations, faculty offerings, value innovations

Procedia PDF Downloads 128
24570 Evaluating the Impact of Judicial Review of 2003 “Radical Surgery” Purging Corrupt Officials from Kenyan Courts

Authors: Charles A. Khamala

Abstract:

In 2003, constrained by the absence of a "rule of law culture" and negative economic growth, the new Kenyan government chose to pursue incremental judicial reforms rather than comprehensive constitutional reforms. President Mwai Kibaki's first administration's judicial reform strategy was two-pronged. First, to implement unprecedented "radical surgery," he appointed a new Chief Justice, who instrumentally recommended that half of the purportedly corrupt judiciary be removed by presidential tribunals of inquiry. Second, the replacement High Court judges initially gave instrumental endorsement to the "radical surgery's" administrative decisions removing their corrupt predecessors. Meanwhile, retention of the welfare-reducing Constitution perpetuated declining public confidence in judicial institutions, culminating in the dissatisfied opposition party's refusal to petition the disputed 2007 presidential election results before courts it alleged were biased and corrupt. Fatefully, widespread post-election violence ensued. Consequently, the international community prompted the second Kibaki administration to concede to a new Constitution. The High Court then adopted a non-instrumental interpretation and rejected the 2003 "radical surgery." This paper therefore critically analyzes whether the Kenyan courts' inconsistent interpretations, pertaining to the constitutionality of the 2003 "radical surgery" removing corruption from Kenya's courts, were predicated on political expediency or on human rights principles. If justice "must also be seen to be done," then pursuit of the Chief Justice's, the Judicial Service Commission's, and the president's political or economic interests must be limited by respect for the suspected judges' and magistrates' due process rights. The separation of powers doctrine demands that dismissed judges have a right of appeal, entailing impartial review by a special independent oversight mechanism.
Instead, ignoring fundamental rights, Kenya's new Supreme Court's interpretation of another round of vetting under the new 2010 Constitution ousts the High Court's judicial review jurisdiction altogether, since removal of judicial corruption is "a constitutional imperative, akin to a national duty upon every judicial officer to pave way for judicial realignment and reformulation."

Keywords: administrative decisions, corruption, fair hearing, judicial review, (non) instrumental

Procedia PDF Downloads 460
24569 Mining Multicity Urban Data for Sustainable Population Relocation

Authors: Xu Du, Aparna S. Varde

Abstract:

In this research, we conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends, as land use change patterns, from a variety of data sources. The results are treated as part of urban big data, together with other information such as population change and economic conditions. Multiple data mining methods are deployed on these data to analyze nonlinear relationships between parameters. The results determine the driving forces of population relocation with respect to urban sprawl, urban sustainability, and their related parameters. Experiments so far reveal that the data mining methods discover useful knowledge from the multicity urban data. This work sets the stage for developing a comprehensive urban simulation model that caters to specific questions from targeted users, contributing towards achieving sustainability as a whole.

Keywords: data mining, environmental modeling, sustainability, urban planning

Procedia PDF Downloads 283
24568 Role of Geohydrology in Groundwater Management: Case Study of Pachod Village, Maharashtra, India

Authors: Ashok Tejankar, Rohan K. Pathrikar

Abstract:

Maharashtra is covered by heterogeneous flows of the Deccan basaltic terrain, of Upper Cretaceous to Lower Eocene age, consisting mainly of different types of basalt flows with heterogeneous geohydrological characteristics. The study area, Aurangabad district, lies in the central part of Maharashtra and is typically covered by Deccan Trap formations, mainly basalt, an igneous volcanic rock. The area falls in Survey of India toposheet No. 47M, between 19° and 20° north latitude and 74° and 76° east longitude. Groundwater is the primary source of fresh water in the study area, and there has been growing demand for fresh water in the domestic and agriculture sectors. Overexploitation and rainfall failure have created irrecoverable stress on groundwater in the study area. In an effort to keep water table conditions in balance, artificial recharge is being implemented; the selection of sites for artificial recharge is a very important task in basaltic terrain. The present study aims at siting artificial recharge structures at Pachod village, in the basaltic terrain of the Godavari-Purna river basin in Aurangabad district of Maharashtra, India, where the average annual rainfall is 650 mm. In this investigation, integrated remote sensing and GIS techniques were used, and parameters such as lithology, structure, drainage basin aspects, and landforms were extracted by visual interpretation of IRS P6 satellite data and Survey of India (SOI) topographical sheets, aided by field checks through a well inventory survey. The depth of weathered material, water table conditions, and rainfall data were also considered. All the thematic information layers were digitized and analyzed in an ArcGIS environment, and the composite maps produced show suitable sites and the depth of bedrock flows for successful artificial recharge at Pachod village, to increase the groundwater potential of low-lying areas.
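The composite-map step described above amounts to a weighted overlay of scored thematic layers. The sketch below is a hypothetical, framework-free illustration of that idea: the layer grids, per-cell scores, weights, and suitability threshold are all invented; the study's actual layers come from IRS P6 imagery and SOI toposheets processed in ArcGIS.

```python
# Hypothetical weighted-overlay sketch for recharge site suitability.
# Each 3x3 grid holds per-cell suitability scores (1 = poor, 5 = excellent)
# for one thematic layer; values and weights are invented for illustration.

lithology  = [[4, 3, 2], [5, 4, 3], [3, 2, 1]]
drainage   = [[3, 4, 2], [4, 5, 3], [2, 3, 2]]
weathering = [[5, 3, 1], [4, 4, 2], [3, 2, 1]]

weights = {"lithology": 0.4, "drainage": 0.3, "weathering": 0.3}

def composite(cell):
    """Weighted sum of the layer scores at one grid cell."""
    r, c = cell
    return (weights["lithology"] * lithology[r][c]
            + weights["drainage"] * drainage[r][c]
            + weights["weathering"] * weathering[r][c])

# Cells whose composite score clears an illustrative threshold are
# flagged as candidate artificial-recharge sites.
suitable = [(r, c) for r in range(3) for c in range(3)
            if composite((r, c)) >= 3.95]
print(suitable)  # [(0, 0), (1, 0), (1, 1)]
```

A real workflow would derive the scores from field-checked reclassified rasters and validate the flagged sites against well-inventory data.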

Keywords: hard rock, artificial recharge, remote sensing, GIS

Procedia PDF Downloads 280
24567 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate linear and nonlinear aspects of experimental data. The method yields an offline reduced-order model that approximates the experimental data and can track the data, and the order of the system, matching the experimental data at certain frequency ratios. The method is compared across different experimental data sets, and the influence of the chosen reduction order on obtaining a sufficient matching condition is investigated in terms of the imaginary and real parts of the frequency response curve. Finally, the effect of the reduction order, as the key parameter for nonlinear experimental data, is discussed.
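As an illustrative sketch (not the paper's specific method), modal truncation is a simple form of model order reduction that makes the matching-condition effect visible: a "full" model built from four first-order modes is truncated to its strongest modes, and the worst-case frequency-response mismatch shrinks as the retained order grows. The mode gains, poles, and frequency grid below are invented for illustration.

```python
# Modal truncation sketch: error vs. retained order over a frequency grid.

modes = [(10.0, 1.0), (5.0, 3.0), (0.5, 10.0), (0.1, 30.0)]  # (gain, pole), strongest first

def H(model, w):
    """Frequency response H(jw) of a sum of first-order terms gain/(jw + pole)."""
    return sum(g / (1j * w + p) for g, p in model)

freqs = [0.1 * k for k in range(1, 200)]   # evaluation grid, 0.1 .. 19.9 rad/s

def max_error(order):
    """Worst-case |H_full - H_reduced| over the grid for a given reduced order."""
    reduced = modes[:order]
    return max(abs(H(modes, w) - H(reduced, w)) for w in freqs)

errors = [max_error(n) for n in (1, 2, 3, 4)]
print([round(e, 4) for e in errors])   # mismatch falls monotonically with order
```

The same order-versus-error trade-off drives the choice of reduction order when fitting reduced models to measured frequency responses.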

Keywords: frequency response, order of model reduction, frequency matching condition, nonlinear experimental data

Procedia PDF Downloads 382
24566 An Empirical Study of the Impacts of Big Data on Firm Performance

Authors: Thuan Nguyen

Abstract:

In the present time, data is to the data-driven, knowledge-based economy what oil was to the industrial age hundreds of years ago. Data is everywhere in vast volumes! Big data analytics is expected to help firms not only improve performance efficiently but also transform how they run their business. However, employing the emergent technology successfully is not easy, and assessing the role of big data in improving firm performance is even harder. Few studies have examined the impact of big data analytics on organizational performance; this study aimed to fill that gap. It proposed firms' intellectual capital as a proxy for big data in evaluating its impact on organizational performance. The study employed the Value Added Intellectual Coefficient method to measure firm intellectual capital via its three main components, human capital efficiency, structural capital efficiency, and capital employed efficiency, and then used structural equation modeling to model the data and test the models. Financial fundamental and market data of 100 randomly selected publicly listed firms were collected. The tests showed that only human capital efficiency had a significant positive impact on firm profitability, highlighting the prominent human role in the impact of big data technology.
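The three components named above follow Pulic's standard Value Added Intellectual Coefficient formulation, which the sketch below computes directly. The financial figures are invented for illustration; only the formulas reflect the method.

```python
# Minimal VAIC sketch per Pulic's standard formulation.

def vaic(value_added, human_capital, capital_employed):
    """Return (HCE, SCE, CEE, VAIC).

    value_added      : output minus input (e.g., operating profit +
                       employee costs + depreciation + amortization)
    human_capital    : total employee expenditure
    capital_employed : book value of net assets
    """
    hce = value_added / human_capital                  # human capital efficiency
    sce = (value_added - human_capital) / value_added  # structural capital efficiency
    cee = value_added / capital_employed               # capital employed efficiency
    return hce, sce, cee, hce + sce + cee

# Invented illustrative figures (in, say, millions of USD)
hce, sce, cee, total = vaic(value_added=200.0, human_capital=80.0,
                            capital_employed=500.0)
print(round(hce, 2), round(sce, 2), round(cee, 2), round(total, 2))
# 2.5 0.6 0.4 3.5
```

In the study's setup, such per-firm component scores would feed a structural equation model relating them to profitability measures.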

Keywords: big data, big data analytics, intellectual capital, organizational performance, value added intellectual coefficient

Procedia PDF Downloads 225
24565 Automated Test Data Generation for Some Types of Algorithms

Authors: Hitesh Tahbildar

Abstract:

The cost of test data generation for a program is computationally very high. In the general case, no algorithm has been found that generates test data for all types of algorithms, and the cost of generating test data differs from one type of algorithm to another. To date, work has emphasized generating test data for different types of programming constructs rather than for different types of algorithms. Test data generation methods have been implemented to find heuristics for different types of algorithms. Algorithms including divide and conquer, backtracking, the greedy approach, and dynamic programming have been tested to find the minimum cost of test data generation. Our experimental results suggest that the algorithm type can serve as a necessary condition for selecting heuristics, while programming constructs are a sufficient condition for selecting our heuristics. Finally, we recommend which test data generation heuristics should be selected for different types of algorithms.
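One concrete way to see generation cost, sketched below under invented assumptions (a toy program under test and random generation as the baseline heuristic; not the paper's method), is to count how many random inputs are needed before a target branch is exercised: branches guarded by narrow predicates make that count, and hence the cost, grow.

```python
# Hypothetical sketch: measuring test data generation cost as the number
# of random inputs drawn before a target branch is covered.

import random

def program_under_test(a, b):
    """Toy program with a hard-to-hit target branch (invented example)."""
    if a > b and (a - b) % 7 == 0:   # roughly 1-in-14 of random pairs hit this
        return "target"
    return "other"

def generation_cost(seed, max_trials=100_000):
    """Draw random inputs until the target branch fires; return the count."""
    rng = random.Random(seed)
    for trial in range(1, max_trials + 1):
        a, b = rng.randint(0, 1000), rng.randint(0, 1000)
        if program_under_test(a, b) == "target":
            return trial             # cost = number of inputs generated
    return max_trials                # budget exhausted without coverage

print(generation_cost(seed=42))
```

Comparing such costs across programs built from different algorithmic patterns is the kind of measurement the abstract's heuristics aim to reduce.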

Keywords: longest path, saturation point, lmax, kL, kS

Procedia PDF Downloads 388
24564 The Perspective on Data Collection Instruments for Younger Learners

Authors: Hatice Kübra Koç

Abstract:

For academia, collecting reliable and valid data is one of the most significant issues for researchers, yet the procedure is not the same for all target groups. When collecting data from teenagers, young adults, or adults, researchers can use common instruments such as questionnaires, interviews, and semi-structured interviews; for young and very young learners, however, such reliable and valid instruments cannot be easily designed or applied. In this study, common data collection tools are first examined for 'very young' and 'young learner' participant groups, since the quality and efficiency of an academic study rests largely on valid and correct data collection and analysis procedures. Second, two data collection instruments for very young and young learners are presented and their efficacy discussed. Finally, a suggested instrument, a performance-based questionnaire developed specifically for 'very young' and 'young learner' participants in the field of teaching English to young learners as a foreign language, is presented. The design procedure and suggested items/factors for this instrument are given at the end of the study to help researchers who work with young and very young learners.

Keywords: data collection instruments, performance-based questionnaire, young learners, very young learners

Procedia PDF Downloads 72
24563 Performance of the Strong Stability Method in the Univariate Classical Risk Model

Authors: Safia Hocine, Zina Benouaret, Djamil Aïssani

Abstract:

In this paper, we study the performance of the strong stability method in the univariate classical risk model. We are interested in the stability bounds established using two approaches: the first is based on the strong stability method developed for general Markov chains, and the second on regenerative process theory. Adopting an algorithmic procedure, we study the performance of the stability method in the case of exponentially distributed claim amounts. After presenting the stability bounds numerically and graphically, the results are interpreted and compared.
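For readers unfamiliar with the model under study, the sketch below simulates the univariate classical (Cramér-Lundberg) risk model with exponential claim amounts and checks a Monte Carlo ruin-probability estimate against the known closed form for that case, psi(u) = (lam*mu/c) * exp(-(1/mu - lam/c) * u). This illustrates the model the stability bounds concern, not the strong stability method itself; parameter values are chosen for illustration.

```python
# Classical risk model: Poisson arrivals, exponential claims, linear premiums.

import math
import random

lam, mu, c, u0 = 1.0, 1.0, 1.5, 2.0   # claim rate, mean claim, premium rate, initial surplus

def ruined(rng, horizon=100.0):
    """Simulate one surplus path; ruin can only occur at claim instants."""
    t, claims = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)            # next Poisson claim arrival
        if t > horizon:
            return False                     # survived the horizon
        claims += rng.expovariate(1.0 / mu)  # exponential claim amount, mean mu
        if u0 + c * t - claims < 0.0:
            return True                      # surplus went negative: ruin

def psi_exact():
    """Infinite-horizon ruin probability for exponential claims (needs c > lam*mu)."""
    return (lam * mu / c) * math.exp(-(1.0 / mu - lam / c) * u0)

rng = random.Random(7)
n = 10_000
est = sum(ruined(rng) for _ in range(n)) / n
print(round(psi_exact(), 3), round(est, 3))
```

The agreement between the estimate and the exact value is what stability bounds generalize when the claim distribution is only approximately exponential.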

Keywords: Marcov chain, regenerative process, risk model, ruin probability, strong stability

Procedia PDF Downloads 304
24562 Generating Swarm Satellite Data Using Long Short-Term Memory and Generative Adversarial Networks for the Detection of Seismic Precursors

Authors: Yaxin Bi

Abstract:

Accurate prediction and understanding of the evolution mechanisms of earthquakes remain challenging in the fields of geology, geophysics, and seismology. This study leverages Long Short-Term Memory (LSTM) networks and Generative Adversarial Networks (GANs), generative models here tailored to time-series data, to generate synthetic time series from Swarm satellite data for use in detecting seismic anomalies. LSTMs demonstrated commendable predictive performance in generating synthetic data across multiple countries. In contrast, the GAN models struggled, often producing non-informative values, although they were able to capture the distribution of the time series. These findings highlight both the promise and the challenges of applying deep learning to synthetic data generation, underscoring its potential for generating synthetic electromagnetic satellite data.
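As a minimal, framework-free illustration of the gating that makes LSTMs suited to time series such as Swarm magnetic-field readings, the sketch below runs one scalar LSTM cell step by step. Real models (e.g., in TensorFlow or PyTorch) learn vector-valued weights; here tiny fixed weights are invented so the arithmetic is inspectable.

```python
# One scalar LSTM cell, rolled over a short synthetic signal.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One step of a scalar LSTM cell.

    W maps gate name -> (w_x, w_h, bias) for the input (i), forget (f),
    and output (o) gates and the candidate cell value (g).
    """
    def gate(name, act):
        wx, wh, b = W[name]
        return act(wx * x + wh * h_prev + b)

    i = gate("i", sigmoid)      # how much new information to write
    f = gate("f", sigmoid)      # how much old cell state to keep
    o = gate("o", sigmoid)      # how much of the cell state to expose
    g = gate("g", math.tanh)    # candidate value
    c = f * c_prev + i * g      # updated cell state
    h = o * math.tanh(c)        # new hidden state / output
    return h, c

# Invented illustrative weights
W = {"i": (1.0, 0.5, 0.0), "f": (0.5, 0.5, 1.0),
     "o": (1.0, 0.0, 0.0), "g": (1.0, 0.5, 0.0)}

h, c = 0.0, 0.0
for x in [0.1, 0.5, -0.2, 0.3]:   # short synthetic signal
    h, c = lstm_step(x, h, c, W)
print(round(h, 4), round(c, 4))
```

The cell state `c`, carried across steps through the forget gate, is what lets the network retain long-range temporal structure when generating synthetic sequences.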

Keywords: LSTM, GAN, earthquake, synthetic data, generative AI, seismic precursors

Procedia PDF Downloads 15
24561 Survey Study of Integrative and Instrumental Motivation in English Language Learning of First Year Students at Naresuan University International College (NUIC), Thailand

Authors: Don August G. Delgado

Abstract:

Foreign language acquisition without enough motivation is difficult, because motivation is the force that drives students' interest and enthusiasm for learning; it also serves as a beacon guiding students toward their goals, desires, dreams, and aspirations in life. Since motivation plays an integral role in language learning, this study focuses on the integrative and instrumental motivation levels of all first-year students of Naresuan University International College (NUIC). Identifying their motivation level and their inclination toward learning English will greatly help NUIC lecturers and administrators create projects and activities that students will truly enjoy and find worthwhile; conversely, the findings can also show lecturers and administrators how to help NUIC freshmen become motivated learners and enhance their English proficiency. All respondents received a questionnaire adapted and developed from earlier research in the same area. The questionnaire has 24 randomly arranged questions, 12 on integrative motivation and 12 on instrumental motivation, using a five-point Likert scale. The tabulated data were analyzed by their means and standard deviations, and the respondents' motivation levels were interpreted using an interpretation of mean scores. The study concludes that the majority of NUIC freshmen are neither integratively nor instrumentally motivated.
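The scoring step described above can be sketched as follows: compute the mean and standard deviation of five-point Likert responses and read the mean off an interpretation band. The equal-width cut-offs and labels below are a common convention assumed for illustration, not necessarily the scale this study used, and the responses are synthetic.

```python
# Likert mean-score interpretation sketch (invented data and bands).

import statistics

def interpret(mean):
    """Map a 1-5 mean score to a label via equal-width bands (assumed convention)."""
    bands = [(1.80, "very low"), (2.60, "low"), (3.40, "moderate"),
             (4.20, "high"), (5.00, "very high")]
    for upper, label in bands:
        if mean <= upper:
            return label
    return "very high"

responses = [3, 4, 2, 3, 3, 4, 3, 2, 3, 4]   # one item, 10 synthetic respondents
m = statistics.mean(responses)
sd = statistics.pstdev(responses)
print(round(m, 2), round(sd, 2), interpret(m))
# 3.1 0.7 moderate
```

Repeating this per item, and averaging within the 12 integrative and 12 instrumental items, yields the motivation-level classification the study reports.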

Keywords: motivation, integrative, foreign language acquisition, instrumental

Procedia PDF Downloads 216