Search results for: missing data estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26544


23004 Policy Effectiveness in the Situation of Economic Recession

Authors: S. K. Ashiquer Rahman

Abstract:

Proper policy handling may fail to attain its target in some recessions, e.g., pandemic-led crises, because of shocks to economic variables. In this situation, the central bank implements monetary policy, choosing to increase exogenous expenditure and the level of the money supply in order to boost economic growth. The question is whether monetary policy is relatively more effective than fiscal policy in altering the real output growth of a country, or whether both are relatively effective in driving it. The dispute regarding the relationship between monetary and fiscal policy centers on the inflationary penalty of shortfall financing by the fiscal authority. Facing the latest shocks to economic variables as well as the pandemic-led crises, central banks around the world confronted a general dilemma: increase rates to contain inflation or decrease rates to sustain economic activity. Although prices remained fundamentally unaffected, aggregate demand was also hit significantly by the outbreak of the COVID-19 pandemic. To empirically investigate the effects of the economic shocks associated with the COVID-19 pandemic, the paper considers the effectiveness of monetary and fiscal policy as linked to the adjustment mechanism of different economic variables. To examine the effects of these shocks on the effectiveness of monetary and fiscal policy in driving the output growth of a country, this paper uses a simultaneous equations model estimated by the Two-Stage Least Squares (2SLS) and Ordinary Least Squares (OLS) methods.
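As a concrete illustration of the estimation strategy, the sketch below implements 2SLS by hand for a single-equation model with one endogenous regressor and one instrument. The data-generating process, variable names, and coefficient values are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Estimate beta in y = X @ beta + u where X may be endogenous.

    Z holds the instruments (including any exogenous regressors).
    Stage 1: project X onto Z; Stage 2: regress y on the fitted X.
    """
    # Stage 1: fitted values X_hat = Z (Z'Z)^{-1} Z'X
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    X_hat = Pz @ X
    # Stage 2: OLS of y on X_hat
    beta = np.linalg.solve(X_hat.T @ X_hat, X_hat.T @ y)
    return beta

# Illustrative data: one endogenous regressor x, one instrument z, intercept
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogenous: correlated with u
y = 1.0 + 2.0 * x + u
X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta_2sls = two_stage_least_squares(y, X, Z)
```

Because the instrument is uncorrelated with the structural error, the second-stage slope is a consistent estimate of the true coefficient (2.0 here), whereas plain OLS of y on x would be biased upward by the endogeneity.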

Keywords: IS-LM framework, pandemic, economic variable shocks, simultaneous equations model, output growth

Procedia PDF Downloads 99
23003 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency monthly indicators with a mixed-frequency factor model, using both quarterly and monthly data. Nowcasting quarterly GDP in Indonesia is particularly relevant for the central bank of Indonesia, which sets the policy rate in the monthly Board of Governors Meeting; one of the important steps in that meeting is the assessment of the current state of the economy. Thus, an accurate and up-to-date quarterly GDP nowcast every time new monthly information becomes available would clearly be of interest to the central bank of Indonesia, since the initial assessment of the current state of the economy, including the nowcast, is used as input for longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates, so the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare the nowcasts with two other approaches: an autoregressive model, which is often difficult to beat when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. Both the mixed-frequency factor model and MIDAS nowcasts are produced from the same set of monthly indicators, so we compare the nowcast performance of the two approaches directly. To preview the results, we find that exploiting monthly indicators with the mixed-frequency factor model and MIDAS regression improves nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly data. However, it is not clear whether MIDAS or the mixed-frequency factor model is better: neither set of nowcasts encompasses the other, suggesting that both are valuable in nowcasting GDP but neither is sufficient. By combining the two individual nowcasts, we find that the combination not only increases accuracy relative to the individual nowcasts but also lowers the risk of the worst performance of the individual nowcasts.
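A standard way to combine two nowcasts is to weight them by the inverse of their historical mean squared errors. The sketch below is a minimal illustration with made-up forecast errors and nowcast values, not the paper's actual combination scheme.

```python
import numpy as np

def combine_nowcasts(errors_a, errors_b, nowcast_a, nowcast_b):
    """Weight two nowcasts by the inverse of their past mean squared errors."""
    mse_a = np.mean(np.square(errors_a))
    mse_b = np.mean(np.square(errors_b))
    w_a = (1.0 / mse_a) / (1.0 / mse_a + 1.0 / mse_b)
    return w_a * nowcast_a + (1.0 - w_a) * nowcast_b

# Example: the factor-model nowcast has been slightly more accurate than MIDAS,
# so the combination leans toward it (all numbers are illustrative)
past_err_factor = np.array([0.3, -0.2, 0.4, -0.1])
past_err_midas = np.array([0.5, -0.4, 0.6, -0.3])
combined = combine_nowcasts(past_err_factor, past_err_midas, 5.1, 4.7)
```

The combined value always lies between the two individual nowcasts, which is one way the combination bounds the worst-case performance.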

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 332
23002 Real-Time Image Encryption Using a 3D Discrete Dual Chaotic Cipher

Authors: M. F. Haroun, T. A. Gulliver

Abstract:

In this paper, an encryption algorithm is proposed for real-time image encryption. The scheme employs a dual chaotic generator based on a three dimensional (3D) discrete Lorenz attractor. Encryption is achieved using non-autonomous modulation where the data is injected into the dynamics of the master chaotic generator. The second generator is used to permute the dynamics of the master generator using the same approach. Since the data stream can be regarded as a random source, the resulting permutations of the generator dynamics greatly increase the security of the transmitted signal. In addition, a technique is proposed to mitigate the error propagation due to the finite precision arithmetic of digital hardware. In particular, truncation and rounding errors are eliminated by employing an integer representation of the data which can easily be implemented. The simple hardware architecture of the algorithm makes it suitable for secure real-time applications.
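To illustrate the general idea of integer-arithmetic chaotic keystream generation (this is a toy sketch, not the authors' actual cipher, whose equations are not given here), the code below iterates a discrete map loosely patterned on the Lorenz system entirely in modular integer arithmetic, so no floating-point truncation or rounding error can propagate, and XORs the resulting keystream with the data.

```python
def lorenz_keystream(length, x0=11, y0=23, z0=47, m=2**16):
    """Integer-arithmetic discrete map loosely patterned on the Lorenz system.

    All state updates stay in modular integer arithmetic, so there is no
    floating-point rounding or truncation error to propagate.
    """
    x, y, z = x0, y0, z0
    stream = []
    for _ in range(length):
        # Simultaneous update of the three state variables (toy parameters)
        x, y, z = ((y - x) * 10 + x) % m, (x * (28 - z)) % m, (x * y + z) % m
        stream.append((x ^ y ^ z) & 0xFF)  # one keystream byte per step
    return stream

def xor_cipher(data, stream):
    """XOR each data byte with the corresponding keystream byte."""
    return bytes(b ^ k for b, k in zip(data, stream))

plaintext = b"pixel data"
ks = lorenz_keystream(len(plaintext))
ciphertext = xor_cipher(plaintext, ks)
recovered = xor_cipher(ciphertext, ks)  # XOR is its own inverse
```

A real implementation would inject the data into the master generator's dynamics (non-autonomous modulation) rather than simply XORing, but the integer-state idea for eliminating precision errors is the same.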

Keywords: chaotic systems, image encryption, non-autonomous modulation, FPGA

Procedia PDF Downloads 509
23001 A Secure System for Handling Information from Heterogeneous Sources

Authors: Shoohira Aftab, Hammad Afzal

Abstract:

Information integration is a well-known procedure to provide a consolidated view of sets of heterogeneous information sources. It not only enables better statistical analysis of information but also allows users to query without any knowledge of the underlying heterogeneous information sources. The problem of providing a consolidated view of information can be handled using semantic data (information stored in such a way that it is understandable by machines and integrable without manual human intervention). However, integrating information using semantic web technology without enforcing any access management results in increased privacy and confidentiality concerns. In this research, we have designed and developed a framework that allows information from heterogeneous formats to be consolidated, thus resolving the issue of interoperability. We have also devised an access control system for defining explicit privacy constraints. We designed and applied our framework to both semantic and non-semantic data from heterogeneous sources. Our approach is validated using scenario-based testing.
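A minimal sketch of the kind of explicit privacy constraint such an access control system might enforce, with hypothetical roles and attribute names (the paper's actual policy language is not specified):

```python
# Minimal policy model: each resource attribute lists the roles allowed to read it.
# Roles and attribute names below are hypothetical examples.
POLICY = {
    "patient/name": {"doctor", "nurse"},
    "patient/diagnosis": {"doctor"},
}

def can_read(role, attribute):
    """Return True when the role is explicitly permitted; deny by default."""
    return role in POLICY.get(attribute, set())

def filter_view(role, record):
    """Project a consolidated record down to the attributes the role may see."""
    return {k: v for k, v in record.items() if can_read(role, k)}

record = {"patient/name": "A. Smith", "patient/diagnosis": "flu"}
nurse_view = filter_view("nurse", record)
```

Deny-by-default means that attributes absent from the policy are never exposed, which is the safer failure mode when new heterogeneous sources are consolidated.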

Keywords: information integration, semantic data, interoperability, security, access control system

Procedia PDF Downloads 359
23000 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to achieve a result. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of seismic analysis makes the optimization algorithms more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of a main structure. The seismic demand of the sampled structure is estimated from the modal displacement of a basic structure (for which the modal displacement has already been calculated). Sampled steel shear structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by application of three types of earthquakes (distinguished by the time of peak ground acceleration).
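For reference, the SRSS combination mentioned above can be sketched in a few lines; the peak modal values below are illustrative, not results from the paper.

```python
import numpy as np

def srss(peak_modal_responses):
    """Square Root of the Sum of Squares combination of peak modal responses."""
    r = np.asarray(peak_modal_responses, dtype=float)
    return np.sqrt(np.sum(r ** 2))

# Illustrative peak roof displacements (cm) from the first three modes
peaks = [4.0, 3.0, 0.0]
estimate = srss(peaks)
```

SRSS assumes well-separated modal frequencies; CQC generalizes it with cross-modal correlation coefficients for closely spaced modes.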

Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures

Procedia PDF Downloads 357
22999 Refractive Index, Excess Molar Volume and Viscometric Study of Binary Liquid Mixture of Morpholine with Cumene at 298.15 K, 303.15 K, and 308.15 K

Authors: B. K. Gill, Himani Sharma, V. K. Rattan

Abstract:

Experimental data for the refractive index, excess molar volume, and viscosity of the binary mixture of morpholine with cumene over the whole composition range at 298.15 K, 303.15 K, and 308.15 K and normal atmospheric pressure have been measured. The experimental data were used to compute the density, deviation in molar refraction, deviation in viscosity, and excess Gibbs free energy of activation as a function of composition. The experimental viscosity data have been correlated with empirical equations such as the Grunberg-Nissan equation, the Heric correlation, and the three-body McAllister equation. The excess thermodynamic properties were fitted to the Redlich-Kister polynomial equation. The variation of these properties with the composition and temperature of the binary mixtures is discussed in terms of intermolecular interactions.
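Fitting the Redlich-Kister polynomial is a linear least-squares problem once the design matrix is formed. The sketch below recovers assumed coefficients from synthetic noiseless data; it does not use the measured morpholine-cumene values.

```python
import numpy as np

def fit_redlich_kister(x1, y_excess, order=2):
    """Fit Y^E = x1*x2 * sum_k A_k (x1 - x2)^k by linear least squares."""
    x1 = np.asarray(x1, dtype=float)
    x2 = 1.0 - x1
    # Design matrix: column k is x1*x2*(x1 - x2)^k
    cols = [x1 * x2 * (x1 - x2) ** k for k in range(order + 1)]
    M = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(M, np.asarray(y_excess, dtype=float), rcond=None)
    return coeffs

# Synthetic excess-property data generated from known (assumed) coefficients
x1 = np.linspace(0.05, 0.95, 19)
true_A = np.array([-1.2, 0.4, 0.1])
yE = x1 * (1 - x1) * (true_A[0] + true_A[1] * (2 * x1 - 1) + true_A[2] * (2 * x1 - 1) ** 2)
A_hat = fit_redlich_kister(x1, yE, order=2)
```

With noiseless data the fit recovers the generating coefficients exactly; with real measurements, the polynomial order is usually chosen by the standard deviation of the residuals.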

Keywords: cumene, excess Gibbs free energy, excess molar volume, morpholine

Procedia PDF Downloads 331
22998 Anthropometric Data Variation within Gari-Frying Population

Authors: T. M. Samuel, O. O. Aremu, I. O. Ismaila, L. I. Onu, B. O. Adetifa, S. E. Adegbite, O. O. Olokoshe

Abstract:

The imperative of anthropometry in designing to fit cannot be overemphasized. Of essence is the variability of measurements among the population for which data are collected. In this paper, anthropometric data were collected for the design of a gari-frying facility, such that the work system would be designed to fit the gari-frying population in the Southwestern states of Nigeria, comprising Lagos, Ogun, Oyo, Osun, Ondo, and Ekiti. Twenty-seven body dimensions were measured among 120 gari-frying processors. Statistical analysis was performed using the SPSS package to determine the mean, standard deviation, minimum value, maximum value, and percentiles (2nd, 5th, 25th, 50th, 75th, 95th, and 98th) of the different anthropometric parameters. A one-sample t-test was conducted to determine the variation within the population. The 50th percentiles of some of the anthropometric parameters were compared with those from other populations in the literature. The correlation between the workers' age and body anthropometry was also investigated. The mean weight, height, shoulder height (sitting), eye height (standing), and eye height (sitting) are 63.37 kg, 1.57 m, 0.55 m, 1.45 m, and 0.67 m, respectively. Results also show a high correlation with other populations and a statistically significant difference in variability of data within the population in all the body dimensions measured. With a mean age of 42.36 years, the results show that age would be a poor indicator for estimating the anthropometry of the population.
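The percentile computation described above can be reproduced with standard tools. The stature sample below is simulated around the reported mean, purely to illustrate the calculation; it is not the actual survey data.

```python
import numpy as np

# Simulated stature (m) for a sample of 120 processors, centered on the
# reported mean height of 1.57 m (spread is an assumption)
rng = np.random.default_rng(42)
stature = rng.normal(loc=1.57, scale=0.06, size=120)

# Design percentiles commonly used for workstation dimensioning
levels = [2, 5, 25, 50, 75, 95, 98]
percentiles = {p: float(np.percentile(stature, p)) for p in levels}
```

In design-to-fit practice, clearances are typically sized to the upper percentiles (95th/98th) and reaches to the lower ones (2nd/5th), so that the workstation accommodates most of the population.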

Keywords: anthropometry, cassava processing, design to fit, gari-frying, workstation design

Procedia PDF Downloads 257
22997 Discovering Event Outliers for Drug as Commercial Products

Authors: Arunas Burinskas, Aurelija Burinskiene

Abstract:

On average, ten percent of drugs (commercial products) are unavailable in pharmacies due to shortage. A shortage event unbalances sales and requires a recovery period that is often too long. A critical issue, therefore, is that pharmacies do not record potential sales transactions during shortage and recovery periods. The authors suggest estimating outliers during these periods. To shorten the recovery period, the authors suggest predicting average sales per sales day, which helps protect the data from downward or upward bias. The authors use an outlier visualization method across different drugs and apply the Grubbs test for significance evaluation. The researched sample is 100 drugs in a one-month time frame. The authors detected that products with high demand variability had outliers. Among the analyzed drugs, which are commercial products: i) high demand variability drugs have a one-week shortage period, and the probability of facing a shortage is 69.23%; ii) mid demand variability drugs have a three-day shortage period, and the likelihood of falling into deficit is 34.62%. To avoid shortage events and minimize the recovery period, real data must be set up. Even though some outlier detection methods exist for drug data cleaning, they have not been used to minimize the recovery period once a shortage has occurred. The authors use Grubbs' test, a real-life data-cleaning method, for outlier adjustment; in this paper it is applied with a confidence level of 99%. In practice, Grubbs' test has been used to detect outliers for cancer drugs, with positive results reported. The test detects outliers that exceed the boundaries of a normal distribution, and the result is a probability that indicates the core data of actual sales. The test expresses the difference between the sample mean and the most extreme data point in units of the standard deviation, and detects one outlier at a time, with different probabilities, from a data set with an assumed normal distribution. Based on approximation data, the authors constructed a framework for scaling potential sales and estimating outliers with Grubbs' test. The suggested framework is applicable during shortage and recovery periods, has practical value, and could be used to minimize the recovery period required after a shortage event occurs.
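A minimal implementation of the single-outlier Grubbs' test, with the critical value taken from the t distribution; the sales series is an illustrative example (a shortage-recovery spike), not the paper's drug data.

```python
import numpy as np
from scipy import stats

def grubbs_test(values, alpha=0.01):
    """Grubbs' test for a single outlier (largest deviation from the mean).

    Returns (is_outlier, index) at significance level alpha, assuming the
    data are approximately normally distributed.
    """
    x = np.asarray(values, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    deviations = np.abs(x - mean)
    idx = int(np.argmax(deviations))
    G = deviations[idx] / sd
    # Critical value via the t distribution (two-sided form)
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2
    G_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t2 / (n - 2 + t2))
    return G > G_crit, idx

# Daily sales with one recovery-period spike at the end (illustrative)
sales = [12, 14, 11, 13, 12, 15, 13, 12, 14, 60]
flag, pos = grubbs_test(sales, alpha=0.01)
```

Because the test flags one point at a time, it is applied iteratively in practice: remove the detected outlier and re-test until no further point exceeds the critical value.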

Keywords: drugs, Grubbs' test, outlier, shortage event

Procedia PDF Downloads 136
22996 The Development of Research Based Model to Enhance Critical Thinking, Cognitive Skills and Culture and Local Wisdom Knowledge of Undergraduate Students

Authors: Nithipattara Balsiri

Abstract:

The purpose of this research was to develop an instructional model using research-based learning to enhance the critical thinking, cognitive skills, and culture and local wisdom knowledge of undergraduate students. The sample consisted of 307 undergraduate students. Critical thinking and cognitive skills tests were employed for data collection. Second-order confirmatory factor analysis, t-tests, and one-way analysis of variance were employed for data analysis using the SPSS and LISREL programs. The major research results were as follows: 1) the instructional model using research-based learning to enhance critical thinking, cognitive skills, and culture and local wisdom knowledge should consist of six sequential steps, namely (1) setting the research problem, (2) setting the research hypothesis, (3) data collection, (4) data analysis, (5) conclusion of the research results, and (6) application for problem solving; and 2) after the treatment, undergraduate students obtained higher scores in critical thinking and cognitive skills than before the treatment, at the 0.05 level of significance.
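The before/after comparison reported in the second result corresponds to a paired t-test on each student's pre- and post-treatment scores. The sketch below uses hypothetical scores, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post critical-thinking scores for the same ten students
pre = np.array([62, 58, 71, 65, 60, 68, 63, 70, 59, 66], dtype=float)
post = pre + np.array([5, 7, 3, 6, 8, 2, 5, 4, 6, 5], dtype=float)

# Paired t-test: H0 is that the mean pre/post difference is zero
t_stat, p_value = stats.ttest_rel(post, pre)
```

A significant positive t statistic with p below 0.05 is what the study's second result reports: higher scores after the treatment than before.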

Keywords: critical thinking, cognitive skills, culture and local wisdom knowledge

Procedia PDF Downloads 369
22995 A Case Study of Control of Blast-Induced Ground Vibration on Adjacent Structures

Authors: H. Mahdavinezhad, M. Labbaf, H. R. Tavakoli

Abstract:

In recent decades, the study and control of the destructive effects of blast-induced vibration in construction projects have received more attention, and several experimental equations for vibration prediction, as well as allowable vibration limits for various structures, have been presented. Researchers have developed a number of experimental equations to estimate the peak particle velocity (PPV), in which the experimental constants must be obtained at the site of the explosion by fitting data from experimental blasts. In this study, the most important of these equations were evaluated for the strong massive conglomerates around Dez Dam by collecting blast data, including 30 particle velocities, 27 displacements, 27 vibration frequencies, and 27 ground-vibration accelerations at different distances, recorded for two types of detonation systems, NONEL and electric. Analysis showed that the blast data had the best correlation with the cube root of the explosive charge (R² = 0.8636), but overall the correlation coefficients are not very different. To estimate the vibration in this project, the data were also regressed in other forms, which resulted in a new equation with a correlation coefficient of R² = 0.904. Finally, given the importance of the studied structures, and in order to ensure no damage to adjacent structures, a range of application was defined for each diagram: for distances of 0 to 70 meters from the blast site, the exponent n = 0.33 was suggested, and for distances greater than 70 m, n = 0.66.
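The scaled-distance fitting described above amounts to a linear regression in log space for a law of the form PPV = K · (D / Q^(1/3))^(-n). The sketch below recovers assumed site constants from synthetic noiseless records; the values of K and n here are illustrative, not the Dez Dam results.

```python
import numpy as np

def fit_ppv_law(distance_m, charge_kg, ppv_mm_s):
    """Fit PPV = K * (D / Q^(1/3))^(-n) by linear regression in log space."""
    sd = np.asarray(distance_m, dtype=float) / np.cbrt(np.asarray(charge_kg, dtype=float))
    A = np.column_stack([np.ones_like(sd), np.log(sd)])
    (log_K, neg_n), *_ = np.linalg.lstsq(A, np.log(np.asarray(ppv_mm_s, dtype=float)),
                                         rcond=None)
    return np.exp(log_K), -neg_n

# Synthetic monitoring records generated from assumed K=400, n=1.6 (no noise)
D = np.array([20.0, 40.0, 70.0, 120.0, 200.0])   # distance, m
Q = np.array([50.0, 50.0, 100.0, 100.0, 150.0])  # charge per delay, kg
ppv = 400.0 * (D / np.cbrt(Q)) ** -1.6           # mm/s
K_hat, n_hat = fit_ppv_law(D, Q, ppv)
```

With real field data the fit is not exact, and the reported R² quantifies how well the cube-root scaled distance explains the measured PPV.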

Keywords: blasting, blast-induced vibration, empirical equations, PPV, tunnel

Procedia PDF Downloads 133
22994 Possible Number of Dwelling Units Using Waste Plastic Bottle for Construction

Authors: Dibya Jivan Pati, Kazuhisa Iki, Riken Homma

Abstract:

Unlike other metro cities of India, Bhubaneswar, the capital city of Odisha, is expected to have reached the 1-million population mark by now. The demand for dwelling units, mostly among the urban poor belonging to the Economically Weaker Section (EWS) and Low Income Group (LIG), is becoming a challenge due to high housing costs and rents. It is also noted that, as the population increases, solid waste generation increases as well, affecting the environment due to inefficiency in waste collection by local government bodies. Methods of utilizing solid waste, especially in the form of plastic bottles, glass bottles, and metal cans (PGM), are now widely used as alternative materials for the construction of low-cost buildings by Non-Governmental Organizations (NGOs) in developing countries like India to help the urban poor afford shelter. The application of disposed plastic bottles in the construction of a single dwelling significantly reduces the overall cost of construction, by as much as 14% compared to traditional construction materials. Therefore, considering its cost-benefit result, it is possible to provide housing to EWS and LIG households at an affordable price. In this paper, we estimated the quantity of plastic bottles generated in Bhubaneswar, which further helped to estimate the possible number of single dwelling units that can be constructed on a yearly basis so as to relieve further housing shortage. The estimation results will be of practical use for planning and managing low-cost housing programs by local government and NGOs.
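The estimation logic reduces to simple arithmetic: bottles generated per year, discounted by a collection rate, divided by the bottles needed per dwelling. Every figure below is an illustrative assumption, not the paper's measured data for Bhubaneswar.

```python
# Back-of-envelope estimate; all figures are illustrative assumptions,
# not the paper's measured values for Bhubaneswar.
population = 1_000_000             # assumed city population
bottles_per_person_per_year = 100  # assumed disposal rate
recovery_rate = 0.4                # assumed fraction actually collected
bottles_per_dwelling = 16_000      # assumed bottles needed per single unit

collected = population * bottles_per_person_per_year * recovery_rate
dwellings_per_year = int(collected // bottles_per_dwelling)
```

Under these assumptions the city's waste stream would supply material for a few thousand units per year; the paper's contribution is replacing such guesses with measured generation rates.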

Keywords: construction, dwelling unit, plastic bottle, solid waste generation, groups

Procedia PDF Downloads 477
22993 Development of a System for Fitting Clothes and Accessories Using Augmented Reality

Authors: Dinmukhamed T., Vassiliy S.

Abstract:

This article suggests the idea of fitting clothes and accessories based on augmented reality. A logical data model has been developed, taking into account the decision-making module (colors, style, type, material, popularity, etc.) based on personal data (age, gender, weight, height, leg size, hoist length, geolocation, photogrammetry, number of purchases of certain types of clothing, etc.) and statistical data of the purchase history (number of items, price, size, color, style, etc.). Also, in order to provide information to the user, it is planned to develop an augmented reality system using a QR code. This system of selection and fitting of clothing and accessories based on augmented reality will be used in stores to reduce the time for the buyer to make a decision on the choice of clothes.

Keywords: augmented reality, online store, decision-making module, QR code, clothing store, queue

Procedia PDF Downloads 161
22992 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI

Authors: Genady Grabarnik, Serge Yaskolko

Abstract:

Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology that promises to automate some areas that were very difficult to automate before. The paper describes the introduction of generative AI into introductory computer and data science courses and an analysis of the effects of this introduction. Generative AI is incorporated into the educational process in two ways. For instructors, we create prompt templates for the generation of tasks and the grading of students' work, including feedback on submitted assignments. For students, we introduce basic prompt engineering, which in turn is used for generating test cases from problem descriptions, generating code snippets for single-block programming problems, and partitioning average-complexity programs into such blocks. The classes are run using Large Language Models, and feedback from instructors and students, as well as course outcomes, is collected. The analysis shows a statistically significant positive effect and preference from both groups of stakeholders.
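One of the instructor-side templates described above, for generating test cases from a problem description, might look like the following. The wording and names are a hypothetical example, not the authors' actual template.

```python
# Hypothetical prompt template of the kind instructors might use to
# generate test cases from a problem description.
TEST_CASE_PROMPT = (
    "You are a teaching assistant for an introductory programming course.\n"
    "Problem description:\n{description}\n\n"
    "Write {n} input/output test cases covering normal, edge, and error "
    "inputs. Respond as a numbered list."
)

def build_prompt(description, n=5):
    """Fill the template with a concrete problem and test-case count."""
    return TEST_CASE_PROMPT.format(description=description, n=n)

prompt = build_prompt("Return the largest of three integers.", n=3)
```

Keeping the template separate from the problem text lets the same prompt be reused across assignments and tuned once for the whole course.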

Keywords: introductory computer and data science education, generative AI, large language models, application of LLMs to computer and data science education

Procedia PDF Downloads 61
22991 Study of a Few Additional Posterior Projection Data to 180° Acquisition for Myocardial SPECT

Authors: Yasuyuki Takahashi, Hirotaka Shimada, Takao Kanzaki

Abstract:

Dual-detector SPECT systems are widely used for myocardial SPECT studies. With 180-degree (180°) acquisition, reconstructed images are distorted in the posterior wall of the myocardium due to the lack of sufficient posterior projection data. We hypothesized that the quality of myocardial SPECT images can be improved by adding the acquisition of only a few posterior projections to ordinary 180° acquisition. The proposed method (the 180°-plus acquisition method) uses a dual-detector SPECT system with the two detectors arranged perpendicularly at 90°. The sampling angle was 5°, and the acquisition range was 180°, from 45° right anterior oblique to 45° left posterior oblique. After the 180° acquisition, the detector moved to additional acquisition positions on the reverse side: once for 2 projections, twice for 4 projections, or three times for 6 projections. Since these acquisition methods cannot be performed on the present system, actual data acquisition was done over 360° with a sampling angle of 5°, and the projection data corresponding to the above acquisition positions were extracted for reconstruction. We performed phantom studies and a clinical study. SPECT images were compared by profile curve analysis and also quantitatively by contrast ratio. The distortion was improved by the 180°-plus method, and profile curve analysis showed increased contrast of the cardiac cavity. Analysis of the contrast ratio revealed that the SPECT images of the phantoms and the clinical study were improved over 180° acquisition by the present methods. The difference in contrast between 180° plus 2 projections, 180° plus 4 projections, and 180° plus 6 projections was not clearly recognized. The 180° plus 2 projections method may therefore be feasible for myocardial SPECT, because both the image distortion and the contrast were improved.

Keywords: 180° plus acquisition method, a few posterior projections, dual-detector SPECT system, myocardial SPECT

Procedia PDF Downloads 297
22990 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case

Authors: Besma Khalfoun

Abstract:

In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. 
Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
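The core of a local-DP mechanism like SAFER is calibrated noise addition to the shared weights before they leave the device. The sketch below shows plain (non-adaptive) Laplace perturbation under an assumed sensitivity, omitting SAFER's re-identification risk assessment step, which adapts the noise level.

```python
import numpy as np

def add_laplace_noise(weights, sensitivity, epsilon, rng=None):
    """Perturb model weights with Laplace noise for epsilon-local DP.

    Smaller epsilon means a larger noise scale: stronger privacy,
    lower utility for the aggregated HAR model.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return weights + rng.laplace(loc=0.0, scale=scale, size=weights.shape)

# Toy layer weights standing in for a local HAR model update
rng = np.random.default_rng(7)
layer_weights = rng.normal(size=(8, 4))
private_weights = add_laplace_noise(layer_weights, sensitivity=1.0, epsilon=0.5,
                                    rng=np.random.default_rng(1))
```

An adaptive scheme such as SAFER would choose epsilon per update based on the assessed re-identification risk, rather than fixing it in advance as done here.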

Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition

Procedia PDF Downloads 16
22989 Blockchain for Transport: Performance Simulations of Blockchain Network for Emission Monitoring Scenario

Authors: Dermot O'Brien, Vasileios Christaras, Georgios Fontaras, Igor Nai Fovino, Ioannis Kounelis

Abstract:

With the rise of the Internet of Things (IoT), 5G, and blockchain (BC) technologies, vehicles are becoming increasingly connected and already transmit substantial amounts of data to the original equipment manufacturers' (OEMs) servers. This data could be used to help detect mileage fraud and enable more accurate vehicle emissions monitoring. This would not only help regulators but could also enable applications such as permitting efficient drivers to pay less tax, geofencing for air quality improvement, as well as pollution tolling and trading platforms for transport-related businesses and EU citizens. Other applications could include traffic management and shared mobility systems. BC enables the transmission of data with additional security and removes single points of failure while maintaining data provenance and identity ownership, with the possibility of retaining varying levels of privacy depending on the requirements of the applied use case. This research performs simulations of vehicles interacting with European member state authorities and European Commission BC nodes running Hyperledger Fabric, and explores whether the technology is currently feasible for transport applications such as the emission monitoring use case.

Keywords: future transportation systems, technological innovations, policy approaches for transportation future, economic and regulatory trends, blockchain

Procedia PDF Downloads 179
22988 DURAFILE: A Collaborative Tool for Preserving Digital Media Files

Authors: Santiago Macho, Miquel Montaner, Raivo Ruusalepp, Ferran Candela, Xavier Tarres, Rando Rostok

Abstract:

During our lives, we generate a lot of personal information such as photos, music, text documents, and videos that link us with our past. This data, which used to be tangible, is now digital information stored in our computers, which implies a software dependency to make it accessible in the future. Technology, however, constantly evolves and goes through regular shifts, quickly rendering various file formats obsolete. The need to access data in the future affects not only personal users but also organizations. In a digital environment, a reliable preservation plan and the ability to adapt to fast-changing technology are essential for maintaining data collections in the long term. We present in this paper the European FP7 project DURAFILE, which provides the technology to preserve media files for personal users and organizations while maintaining their quality.

Keywords: artificial intelligence, digital preservation, social search, digital preservation plans

Procedia PDF Downloads 447
22987 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made networks ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases (KDD) process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After selection, the data were pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation activities such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used as a testing set. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L, and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested toward an applicable system in the area of study.
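As a rough sketch of the classification setup, the code below uses scikit-learn's CART decision tree as a stand-in for J48/C4.5 (which scikit-learn does not implement) with 10-fold cross validation; the connection features and labels are toy values, not the Lincoln Laboratory data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Toy connection records: [duration, bytes_sent, failed_logins]
# (feature names and distributions are illustrative assumptions)
normal = rng.normal([10, 500, 0], [3, 100, 0.2], size=(200, 3))
attack = rng.normal([1, 5000, 4], [0.5, 800, 1.0], size=(200, 3))
X = np.vstack([normal, attack])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = intrusion

clf = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross validation
```

On real intrusion data the classes are far less separable than in this toy example, which is why the study reports accuracies in the 93-96% range rather than near-perfect scores.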

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 299
22986 Academic Leadership Succession Planning Practice in Nigerian Higher Education Institutions: A Case Study of Colleges of Education

Authors: Adie, Julius Undiukeye

Abstract:

This research investigated the practice of academic leadership succession planning in Nigerian higher education institutions, drawing on the lived experiences of the academic staff of the case study institutions. It is a multi-case study that adopts a qualitative research method. Ten participants (mainly academic staff) were used as the study sample. The study was guided by four research questions. Semi-structured interviews and archival information from official documents formed the sources of data. The data collected were analyzed using the Constant Comparative Technique (CCT) to generate empirical insights and facts on the subject of this paper. The following findings emerged from the data analysis: firstly, there was no formalized leadership succession plan in place in the institutions sampled for this study; secondly, despite the absence of a formal succession plan, the data indicate that academics believe succession planning is very significant for institutional survival; thirdly, existing practices of succession planning in the sampled institutions take the form of job seniority ranking, political processes and executive fiat, ad-hoc arrangements, and external hiring; and finally, the data revealed some barriers to the practice of succession planning, such as traditional higher education institutions’ characteristics (e.g. external talent search, shared governance, diversity, and equality in leadership appointment) and a lack of interest in leadership positions. Based on the research findings, some far-reaching recommendations were made, including the urgent need for the ‘formalization’ of leadership succession planning by the higher education institutions concerned, through the design of an official policy framework.

Keywords: academic leadership, succession, planning, higher education

Procedia PDF Downloads 148
22985 Native Language Identification with Cross-Corpus Evaluation Using Social Media Data: 'Reddit'

Authors: Yasmeen Bassas, Sandra Kuebler, Allen Riddell

Abstract:

Native language identification (NLI) is one of the growing subfields in natural language processing (NLP). The task is mainly concerned with predicting the native language of an author from their writing in a second language. In this paper, we investigate the performance of two types of features, content-based features vs. content-independent features, when they are evaluated on a different corpus (the social media corpus Reddit). In this NLI task, the models are trained on one corpus (TOEFL), and the trained models are then evaluated on different data from an external corpus (Reddit). Three classifiers are used in this task: a baseline, a linear SVM, and logistic regression. Results show that content-based features are more accurate and robust than content-independent ones when tested both within-corpus and cross-corpus.
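A minimal sketch of this train-on-one-corpus, test-on-another setup might look as follows, with word TF-IDF standing in for the content-based features and logistic regression as one of the three classifiers; the tiny in-line texts and L1 labels are illustrative placeholders, not TOEFL or Reddit data.

```python
# Hedged sketch: train an NLI classifier on one "corpus", predict on
# texts from another. All texts and labels below are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["I am agree with this opinion strongly",
               "the informations in this essay is useful",
               "I have went to the library yesterday evening",
               "she explain me the problem very clear"]
train_langs = ["es", "fr", "de", "es"]  # hypothetical L1 labels

test_texts = ["I am agree that the weather is nice",
              "he explain me the rules very clear"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_texts, train_langs)        # "TOEFL" side
preds = model.predict(test_texts)          # "Reddit" side
```

A real cross-corpus evaluation would, of course, control for topic and genre differences between the two corpora, which is exactly what makes content-independent features interesting to compare.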

Keywords: NLI, NLP, content-based features, content independent features, social media corpus, ML

Procedia PDF Downloads 140
22984 Integration of Internet-Accessible Resources in the Field of Mobile Robots

Authors: B. Madhevan, R. Sakkaravarthi, R. Diya

Abstract:

The number and variety of mobile robot applications are increasing day by day, both in industry and in our daily lives. First developed as tools, mobile robots can nowadays be integrated as entities in Internet-accessible resources. The present work is organized around four such resources: cloud computing, the Internet of Things, big data analysis, and co-simulation. The focus lies on integrating, analyzing, and discussing the need for Internet-accessible resources, the challenges deriving from such integration, and how these issues have been tackled. Hence, the research investigates the concepts of Internet-accessible resources (IaR) from the perspective of autonomous mobile robots, with an overview of the performance of currently available database systems. IaR, a worldwide network of interconnected objects, can be considered an evolutionary step for mobile robots. It constitutes an integral part of the future Internet, combining data analysis with both physical and virtual things.

Keywords: internet-accessible resources, cloud computing, big data analysis, internet of things, mobile robot

Procedia PDF Downloads 392
22983 Assessment of Forage Utilization for Pasture-Based Livestock Production in Udubo Grazing Reserve, Bauchi State

Authors: Mustapha Saidu, Bilyaminu Mohammed

Abstract:

The study was conducted in the Udubo Grazing Reserve between July 2019 and October 2019 to assess forage utilization for pasture-based livestock production in the reserve. The grazing land was divided into grids at one-kilometer intervals, from which 15 coordinates were selected as sample points; grids were systematically selected one grid after every seven grids. A 1 × 1-meter quadrat was laid at the coordinate of each selected grid for measurement, estimation, and sample collection. The results indicated that Zornia glochidiata had the highest percentage of species composition (42%), while Mitracarpus hirtus had the lowest (0.1%). Urochloa mosambicensis had 48 percent of height removed and 27 percent used by weight; Zornia glochidiata, 60 percent of height removed and 57 percent used by weight; Alysicarpus vaginalis, 55 percent of height removed and 40 percent used by weight; and Cenchrus biflorus, 40 percent of height removed and 28 percent used by weight. The target is 50 percent utilization of forage by weight during a grazing period as well as at the end of the grazing season. The study found that Urochloa mosambicensis, Alysicarpus vaginalis, and Cenchrus biflorus had utilization by weight below this target, which is normal, while Zornia glochidiata had utilization above it, which is a warning sign. The study recommends that identification of key plant species in pasture and rangeland is critical to implementing a successful grazing management plan. There should be collective action and promotion of historically generated grazing knowledge through public and private advocacy.

Keywords: forage, grazing reserve, livestock, pasture, plant species

Procedia PDF Downloads 92
22982 The Application of Lean-Kaizen in Course Plan and Delivery in Malaysian Higher Education Sector

Authors: Nur Aishah Binti Awi, Zulfiqar Khan

Abstract:

Lean-kaizen has been applied in the manufacturing sector for many years. What about the education sector? This paper discusses how lean-kaizen can also be applied in the education sector, specifically in the academic area of Malaysia's higher education sector. The purpose of this paper is to describe the application of lean-kaizen in course plan and delivery. Lean-kaizen techniques were used to identify waste in course plan and delivery. A field study was conducted to obtain the data, using both quantitative and qualitative data. The researcher interviewed the chosen lecturers regarding the problems of course plan and delivery that they encountered. Secondary data from students' feedback at the end of the semester were also used to improve course plan and delivery. The results empirically show that lean-kaizen helps to improve course plan and delivery by reducing waste. Thus, this study demonstrates that lean-kaizen can also help the education sector improve its services, as the manufacturing sector has done.

Keywords: course delivery, education, Kaizen, lean

Procedia PDF Downloads 370
22981 Study on the Process of Detumbling Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, for almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. So it is necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, for it is a contactless technique and can work at a safe distance. In existing research, a laser rotational control strategy based on estimation of the instantaneous angular velocity of the target has been presented. But the calculation of the control torque produced by the laser, which is very important in detumbling operations, is not accurate enough, for the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shape and spot size. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation by the laser, and we verify its accuracy by theoretical formula calculation and impulse measurement experiments. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1×10⁵ kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.
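The per-facet bookkeeping behind a triangulated-surface impulse calculation can be sketched as follows; the momentum-coupling coefficient, fluence, and geometry are all assumed illustrative values, and the simple Cm × fluence × projected-area model is one common approximation, not necessarily the paper's formulation.

```python
# Hedged sketch: ablation impulse and detumbling torque contribution of
# one triangular facet of a reconstructed target surface. All constants
# below are assumptions for illustration.
import numpy as np

Cm = 5e-5        # momentum coupling coefficient, N*s/J (assumed)
fluence = 10.0   # laser fluence on target, J/m^2 (assumed)
beam_dir = np.array([0.0, 0.0, -1.0])  # laser propagation direction

def facet_impulse_and_torque(v0, v1, v2, centroid):
    """Impulse vector and torque impulse about the centroid for one facet."""
    normal = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(normal)
    n_hat = normal / np.linalg.norm(normal)
    # Only facets facing the beam are irradiated (self-shadowing ignored)
    cos_inc = max(0.0, -np.dot(n_hat, beam_dir))
    # Ablation recoil acts along the inward surface normal
    impulse = Cm * fluence * area * cos_inc * (-n_hat)
    r = (v0 + v1 + v2) / 3.0 - centroid  # lever arm from target centroid
    return impulse, np.cross(r, impulse)

v0 = np.array([0.0, 0.0, 1.0])
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
J, T = facet_impulse_and_torque(v0, v1, v2, centroid=np.zeros(3))
```

Summing such contributions over all irradiated facets, for either a covering beam or a small spot, gives the net torque used to plan the detumbling sequence.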

Keywords: detumbling, laser ablation drive, space target, space debris removal

Procedia PDF Downloads 87
22980 An ANN Approach for Detection and Localization of Fatigue Damage in Aircraft Structures

Authors: Reza Rezaeipour Honarmandzad

Abstract:

In this paper, we propose an ANN for detection and localization of fatigue damage in aircraft structures. We used a network of piezoelectric transducers for Lamb-wave measurements in order to calculate damage indices. Data gathered by the sensors were given to a neural network classifier. A set of neural network electors of different architectures cooperates to achieve consensus concerning the state of each monitored path. Sensed signal variations in the region of interest (ROI), detected by the networks on each path, were used to assess the state of the structure as well as to localize detected damage and to filter out ambient changes. The classifier has been extensively tested on large data sets acquired in tests of specimens with artificially introduced notches, as well as on the results of numerous fatigue experiments. The effect of the classifier structure and of the test data used for training on the results was evaluated.
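One simple form of the damage index fed to such classifiers is a normalized residual between a measured Lamb-wave signal and its pristine baseline on the same path; the synthetic tone-burst signals below, and this particular index definition, are illustrative assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of a baseline-comparison damage index for one sensor path.
# Signals are synthetic decaying tone bursts standing in for Lamb waves.
import numpy as np

def damage_index(baseline, measured):
    """Normalized residual energy of the measured signal vs. the baseline."""
    baseline = np.asarray(baseline, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.linalg.norm(measured - baseline) / np.linalg.norm(baseline)

t = np.linspace(0.0, 1e-4, 500)
baseline = np.sin(2 * np.pi * 2e5 * t) * np.exp(-t / 3e-5)
# Damage alters amplitude and phase of the transmitted wave
damaged = 0.8 * np.sin(2 * np.pi * 2e5 * t + 0.3) * np.exp(-t / 3e-5)

di_intact = damage_index(baseline, baseline)
di_damaged = damage_index(baseline, damaged)
```

A vector of such indices, one per monitored path, is the kind of input on which an ensemble of voting networks can localize damage while rejecting ambient changes that affect all paths similarly.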

Keywords: ANN, fatigue damage, aircraft structures, piezoelectric transducers, lamb-wave measurements

Procedia PDF Downloads 422
22979 Public Libraries as Social Spaces for Vulnerable Populations

Authors: Natalie Malone

Abstract:

This study explores the role of a public library in the creation of social spaces for vulnerable populations. The data stems from a longitudinal ethnographic study of the Anderson Library community, which included field notes, artifacts, and interview data. Thematic analysis revealed multiple meanings and thematic relationships within and among the data sources -interviews, field notes, and artifacts. Initial analysis suggests the Anderson Library serves as a space for vulnerable populations, with the sub-themes of fostering interpersonal communication to create a social space for children and fostering interpersonal communication to create a social space for parents and adults. These findings are important as they illustrate the potential of public libraries to serve as community empowering institutions.

Keywords: capital, immigrant families, public libraries, space, vulnerable

Procedia PDF Downloads 156
22978 A Decadal Flood Assessment Using Time-Series Satellite Data in Cambodia

Authors: Nguyen-Thanh Son

Abstract:

Flood is among the most frequent and costliest natural hazards. Flood disasters especially affect poor people in rural areas, who are heavily dependent on agriculture and have lower incomes. Cambodia is identified as one of the most climate-vulnerable countries in the world, ranked 13th out of 181 countries most affected by the impacts of climate change. Flood monitoring is thus a strategic priority at the national and regional levels, because policymakers need reliable spatial and temporal information on flood-prone areas to form successful monitoring programs to reduce possible impacts on the country’s economy and people’s livelihoods. This study aims to develop methods for flood mapping and assessment from MODIS data in Cambodia. We processed the data for the period from 2000 to 2017, following three main steps: (1) data pre-processing to construct smooth time-series vegetation and water surface indices, (2) delineation of flood-prone areas, and (3) accuracy assessment. The flood mapping results were verified against ground reference data, indicating an overall accuracy of 88.7% and a Kappa coefficient of 0.77. These results were reaffirmed by close agreement between the mapped flood area and the ground reference data, with a coefficient of determination (R²) of 0.94. The seasonally flooded areas observed for 2010, 2015, and 2016 were remarkably smaller than in other years, mainly attributed to the El Niño weather phenomenon exacerbated by the impacts of climate change. Although several sources potentially lowered the mapping accuracy of flood-prone areas, including image cloud contamination, mixed-pixel issues, and low-resolution bias between the mapping results and ground reference data, our methods yielded satisfactory results for delineating the spatiotemporal evolution of floods.
The results, in the form of quantitative information on spatiotemporal flood distributions, could be beneficial to policymakers in evaluating management strategies for mitigating the negative effects of floods on agriculture and people’s livelihoods in the country.
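The accuracy-assessment step reported above (overall accuracy and Kappa coefficient) reduces to simple arithmetic on a confusion matrix of mapped versus reference classes; the 2×2 flood / non-flood counts below are illustrative numbers chosen to reproduce an overall accuracy near the reported 88.7%, not the study's actual validation data.

```python
# Sketch: overall accuracy and Cohen's kappa from a confusion matrix.
# Rows are reference classes, columns are mapped classes (made-up counts).
import numpy as np

cm = np.array([[420, 45],
               [35, 208]])

n = cm.sum()
overall_accuracy = np.trace(cm) / n
# Chance agreement from row and column marginals
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (overall_accuracy - p_e) / (1 - p_e)
print(f"OA = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```

Kappa corrects the raw accuracy for agreement that would occur by chance, which is why it is reported alongside overall accuracy in map validation.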

Keywords: MODIS, flood, mapping, Cambodia

Procedia PDF Downloads 129
22977 Optimizing Stormwater Sampling Design for Estimation of Pollutant Loads

Authors: Raja Umer Sajjad, Chang Hee Lee

Abstract:

Stormwater runoff is a leading contributor to the pollution of receiving waters. In response, an efficient stormwater monitoring program is required to quantify and eventually reduce stormwater pollution. The overall goals of stormwater monitoring programs primarily include the identification of high-risk dischargers and the development of total maximum daily loads (TMDLs). The challenge in developing a better monitoring program is to reduce the variability in flux estimates due to sampling errors; the success of a monitoring program mainly depends on the accuracy of the estimates. Apart from sampling errors, manpower and budgetary constraints also influence the quality of the estimates. This study attempted to develop an optimum stormwater monitoring design considering both cost and the quality of the estimated pollutant flux. Three years of stormwater monitoring data (2012–2014) from a mixed land use area located within the Geumhak watershed, South Korea, were evaluated. The regional climate is humid, and precipitation is usually well distributed throughout the year. The investigation of a large number of water quality parameters is time-consuming and resource intensive. In order to identify a suite of easy-to-measure parameters to act as surrogates, Principal Component Analysis (PCA) was applied. Means, standard deviations, coefficients of variation (CV), and other simple statistics were computed using the multivariate statistical analysis software SPSS 22.0. The implications of sampling time for monitoring results, the number of samples required during a storm event, and the impact of seasonal first flush were also identified. Based on observations derived from the PCA biplot and the correlation matrix, total suspended solids (TSS) was identified as a potential surrogate for turbidity, total phosphorus, and heavy metals such as lead, chromium, and copper, whereas chemical oxygen demand (COD) was identified as a surrogate for organic matter.
The CV among the monitored water quality parameters was found to be high (ranging from 3.8 to 15.5), suggesting that using a grab sampling design to estimate mass emission rates in the study area can lead to errors due to large variability. The TSS discharge load calculation error was found to be only 2% with two different sample-size approaches, i.e., 17 samples per storm event and 6 equally distributed samples per storm event. Both seasonal first flush and event first flush phenomena were observed for most water quality parameters in the study area. Samples taken at the initial stage of a storm event generally overestimate the mass emissions; however, it was found that collecting a grab sample after the initial hour of a storm event more closely approximates the mean concentration of the event. It was concluded that site- and regional-climate-specific interventions can be made to optimize the stormwater monitoring program in order to make it more effective and economical.
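Two of the statistics underpinning this analysis, the coefficient of variation of a parameter across events and a flow-weighted event mean concentration from within-event samples, can be sketched as follows; all concentrations, flows, and loads below are invented numbers for illustration.

```python
# Hedged sketch of two stormwater-monitoring statistics on made-up data.
import numpy as np

def coefficient_of_variation(x):
    """Sample standard deviation divided by the mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

def event_mean_concentration(conc, flow):
    """Flow-weighted mean concentration over one storm event."""
    conc = np.asarray(conc, dtype=float)
    flow = np.asarray(flow, dtype=float)
    return (conc * flow).sum() / flow.sum()

tss_event_loads = [120.0, 40.0, 300.0, 15.0, 80.0]  # kg per event (invented)
cv_tss = coefficient_of_variation(tss_event_loads)

conc = [450.0, 300.0, 180.0, 120.0, 90.0, 70.0]  # mg/L through a storm
flow = [0.2, 0.8, 1.5, 1.2, 0.6, 0.3]            # m^3/s at sample times
emc = event_mean_concentration(conc, flow)
```

A high CV across events is what makes a single grab sample a poor estimator of mass emission rates, whereas the flow-weighted EMC is the quantity a well-timed set of within-event samples is trying to approximate.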

Keywords: first flush, pollutant load, stormwater monitoring, surrogate parameters

Procedia PDF Downloads 242
22976 Identity and Mental Adaptation of Deaf and Hard-of-Hearing Students

Authors: N. F. Mikhailova, M. E. Fattakhova, M. A. Mironova, E. V. Vyacheslavova

Abstract:

For the mental and social adaptation of deaf and hard-of-hearing people, cultural and social aspects - the formation of identity (acculturation) and educational conditions - are highly significant. We studied 137 deaf and hard-of-hearing students in different educational situations, using these methods: the Big Five (Costa & McCrae, 1997), TRF (Becker, 1989), WCQ (Lazarus & Folkman, 1988), self-esteem and coping strategies (Jambor & Elliott, 2005), and a self-stigma scale (Mikhailov, 2008). The type of self-identification of students depended on the degree of deafness, type of education, and method of communication in the family: greater hearing loss, education in schools for the deaf, and gesture communication increased the likelihood of a 'deaf' acculturation. Less hearing loss, inclusive education in a public school or a school for the hearing-impaired, and mixed communication in the family contributed to the formation of a 'hearing' acculturation. The choice of specific coping strategies depended on the degree of deafness: greater hearing loss increased 'withdrawal into the deaf world' coping and decreased 'bicultural skills' coping. People with mild hearing loss tended to cover it up. In the context of this ongoing discussion, we researched personality characteristics of deaf and hard-of-hearing students, coping, and other deafness-associated factors depending on their acculturation type. Students who identified themselves with the 'hearing world' had high self-esteem, a higher level of extraversion, self-awareness, personal resources, willingness to cooperate, better psychological health, emotional stability, a higher capacity for empathy, a life richer in feelings and meaning, and a high sense of self-worth. They also actively used the strategies of problem-solving, acceptance of responsibility, and positive reappraisal. Students who limited themselves to the culture of deaf people had more severe hearing loss and, accordingly, more communication barriers.
Lack of use, or seldom use, of coping strategies by these students points to a decreased level of stress in their lives. Their self-esteem had not been challenged in the specific social environment of students with the same severity of defect, and thus this environment provided a sense of comfort (as we can assume from the high scores on psychological health, personality resources, and emotional stability). Students with bicultural acculturation had a higher level of psychological resources: they used Positive Reappraisal coping more often and had a higher level of psychological health. Lack of belonging to a certain culture (marginality) leads to personality disintegration and social and psychological disadaptation: deaf and hard-of-hearing students with marginal identification had lower self-esteem, worse psychological health and personal resources, and lower levels of extroversion, self-confidence, and life satisfaction. They, in fact, become a 'risk group' (many of them dropped out of university, divorced, and one even ended up in the ranks of ISIS). All these data argue for the importance of a cultural 'anchor' for people with hearing deprivation. Supported by RFBR grant No. 19-013-00406.

Keywords: acculturation, coping, deafness, marginality

Procedia PDF Downloads 207
22975 Data Mining of Students' Performance Using Artificial Neural Network: Turkish Students as a Case Study

Authors: Samuel Nii Tackie, Oyebade K. Oyedotun, Ebenezer O. Olaniyi, Adnan Khashman

Abstract:

Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks were used in this work to map the relationship between attributes related to students’ course assessment and the number of times a student will possibly repeat a course before passing it. It is hoped that the ability to predict students’ performance from such complex relationships can help facilitate the fine-tuning of academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students’ performance database was used; feedforward and radial basis function networks were trained for this task, and the performance of these networks was evaluated in terms of achieved recognition rates and training time.
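The feedforward part of this setup can be sketched with scikit-learn's MLPClassifier mapping course-assessment attributes to a repeat count treated as a class label; the attribute names and the synthetic data-generating rule below are illustrative assumptions, not the Turkish students' database.

```python
# Hedged sketch: a feedforward network predicting how many times a
# student repeats a course (0-3) from synthetic assessment attributes.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Invented attributes: attendance, midterm score, instructor rating, difficulty
X = rng.uniform(0.0, 1.0, size=(600, 4))
# Invented rule: repeats rise with difficulty, fall with attendance/scores
raw = 2 * X[:, 3] - X[:, 0] - X[:, 1] + rng.normal(0, 0.2, 600) + 1
y = np.clip(raw.round(), 0, 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
acc = net.score(X_te, y_te)
```

A radial basis function network, the study's second architecture, would replace the hidden layer's sigmoidal units with distance-based kernels but be trained and evaluated against the same targets.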

Keywords: artificial neural network, data mining, classification, students’ evaluation

Procedia PDF Downloads 617