Search results for: small data sets
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29040

27690 Efficacy of a Social-Emotional Learning Curriculum for Kindergarten and First Grade Students to Improve Social Adjustment within the School Culture

Authors: Ann P. Daunic, Nancy Corbett

Abstract:

Background and Significance: Researchers emphasize the role that motivation, self-esteem, and self-regulation play in children’s early adjustment to the school culture, including skills such as identifying their own feelings and understanding the feelings of others. As social-emotional growth, academic learning, and successful integration within culture and society are inextricably connected, the Social-Emotional Learning Foundations (SELF) curriculum was designed to integrate social-emotional learning (SEL) instruction within early literacy instruction (specifically, reading) for Kindergarten and first-grade students at risk for emotional and behavioral difficulties. Storybook reading is a typically occurring activity in the primary grades; thus SELF provides an intervention that is both theoretically and practically sound. Methodology: The researchers will report on findings from the first two years of a three-year study funded by the US Department of Education’s Institute of Education Sciences to evaluate the effects of the SELF curriculum versus “business as usual” (BAU). SELF promotes the development of self-regulation by incorporating instructional strategies that support children’s use of SEL related vocabulary, self-talk, and critical thinking. The curriculum consists of a carefully coordinated set of materials and pedagogy designed specifically for primary grade children at early risk for emotional and behavioral difficulties. SELF lessons (approximately 50 at each grade level) are organized around 17 SEL topics within five critical competencies. SELF combines whole-group (the first in each topic) and small-group lessons (the 2nd and 3rd in each topic) to maximize opportunities for teacher modeling and language interactions. The researchers hypothesize that SELF offers a feasible and substantial opportunity within the classroom setting to provide a small-group social-emotional learning intervention integrated with K-1 literacy-related instruction. 
Participating target students (N = 876) were identified by their teachers as potentially at risk for emotional or behavioral issues. These students were selected from 122 Kindergarten and 100 first grade classrooms across diverse school districts in a southern state in the US. To measure the effectiveness of the SELF intervention, the researchers asked teachers to complete assessments related to social-emotional learning and adjustment to the school culture. A social-emotional learning related vocabulary assessment was administered directly to target students receiving small-group instruction. Data were analyzed using a 3-level MANOVA model with full information maximum likelihood to estimate coefficients and test hypotheses. Major Findings: SELF had significant positive effects on vocabulary, knowledge, and skills associated with social-emotional competencies, as evidenced by results from the measures administered. Effect sizes ranged from 0.41 for group (SELF vs. BAU) differences in vocabulary development to 0.68 for group differences in SEL related knowledge. Conclusion: Findings from two years of data collection indicate that SELF improved outcomes related to social-emotional learning and adjustment to the school culture. This study thus supports the integration of SEL with literacy instruction as a feasible and effective strategy to improve outcomes for K-1 students at risk for emotional and behavioral difficulties.
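The effect sizes reported above (0.41 and 0.68) are standardized group differences. As a minimal illustration of how such a statistic is computed, the sketch below calculates Cohen's d from two sets of scores; the numbers are entirely hypothetical and not taken from the study.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using the pooled sample standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical vocabulary scores for a SELF group and a BAU group
self_scores = [14, 16, 15, 18, 17, 15]
bau_scores = [12, 14, 13, 15, 13, 14]
d = cohens_d(self_scores, bau_scores)
```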

Keywords: Socio-cultural context for learning, social-emotional learning, social skills, vocabulary development

Procedia PDF Downloads 124
27689 Teaching of Entrepreneurship and Innovation in Brazilian Universities

Authors: Marcelo T. Okano, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi

Abstract:

Teaching of entrepreneurship and innovation in Brazilian universities has increased in recent years due to several factors, such as the emergence of disciplines like biotechnology, increased globalization, reduced basic funding, and new perspectives on the role of the university in the system of knowledge production. Innovation is increasingly seen as an evolutionary process that involves different institutional spheres or sectors in society. Entrepreneurship is a milestone on the road towards economic progress and makes a huge contribution to the quality and future prospects of a sector, an economy, or even a country. Entrepreneurship is as important in small and medium-sized enterprises (SMEs) and local markets as in large companies and national and international markets, and is just as key a consideration for public companies as for private organizations. Entrepreneurship helps to encourage competition in the current environment shaped by globalization. There is an increasing tendency for government policy to promote entrepreneurship for its apparent economic benefit. Accordingly, governments seek to employ entrepreneurship education as a means to stimulate increased levels of economic activity. Entrepreneurship education and training (EET) is growing rapidly in universities and colleges throughout the world, and governments are supporting it both directly and through funding major investments in advice provision to would-be entrepreneurs and existing small businesses. The Triple Helix of university–industry–government relations is compared with alternative models for explaining the current research system in its social contexts. Communications and negotiations between institutional partners generate an overlay that increasingly reorganizes the underlying arrangements. To achieve the objective of this research, a survey of the literature on entrepreneurship and innovation was carried out, followed by field research with 100 students of Fatec.
To collect the data needed for analysis, we used exploratory research of a qualitative nature. We asked respondents to rate their degree of knowledge of ten topics related to entrepreneurship and innovation; responses were given on a four-level Likert scale: none, small, medium, and large. We can conclude that terms such as entrepreneurship and innovation are known by most students because the university propagates them across disciplines, lectures, and innovation institutes. More specific items, such as the canvas and design thinking models, are unknown by most respondents. This highlights the importance of the university in teaching innovation and entrepreneurship and in transmitting this knowledge to students in order to equalize their level of knowledge. As a future project, these items will be re-evaluated to create indicators for measuring the knowledge level.

Keywords: Brazilian universities, entrepreneurship, innovation, globalization

Procedia PDF Downloads 507
27688 Improving the Statistics Nature in Research Information System

Authors: Rajbir Cheema

Abstract:

An integrated research information system provides scientific institutions with the necessary information on research activities and research results in assured quality. Since problems such as duplication, missing values, incorrect formatting, and inconsistencies can arise when research data are collected in different research information systems, and these can have a wide range of negative effects on data quality, the subject of data quality should be treated systematically. This paper examines data quality problems in research information systems and presents new techniques that enable organizations to improve the quality of their research information.
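As a minimal illustration of the kind of rule-based cleansing discussed above (duplicates, missing values, inconsistent formatting), the following sketch normalizes and deduplicates publication records. The field names (doi, title, year) are illustrative assumptions, not a specific RIS schema.

```python
def cleanse(records):
    """Deduplicate and normalize research-publication records.

    Each record is a dict; the field names used here (doi, title, year)
    are hypothetical, not taken from any particular RIS standard.
    """
    seen, cleaned = set(), []
    for rec in records:
        # Normalize formatting: trim stray whitespace on all string fields
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        doi = rec.get("doi", "").lower()
        # Drop exact duplicates (same DOI, case-insensitive)
        if doi and doi in seen:
            continue
        seen.add(doi)
        # Flag records with missing values instead of silently keeping them
        rec["complete"] = all(rec.get(f) for f in ("doi", "title", "year"))
        cleaned.append(rec)
    return cleaned

records = [
    {"doi": "10.1000/X1 ", "title": "A", "year": "2020"},
    {"doi": "10.1000/x1", "title": "A", "year": "2020"},   # duplicate entry
    {"doi": "10.1000/x2", "title": "", "year": "2021"},    # missing title
]
out = cleanse(records)
```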

Keywords: Research information systems (RIS), research information, heterogeneous sources, data quality, data cleansing, science system, standardization

Procedia PDF Downloads 155
27687 Cumulus-Oocyte Complexes and Follicular Fluid Proteins of Pig during Folliculogenesis

Authors: Panomporn Wisuthseriwong, Hatairuk Tungkasen, Siyaporn Namsongsan, Chanikarn Srinark, Mayuva Youngsabanant-Areekijseree

Abstract:

The objective of the present study was to evaluate the morphology of porcine cumulus-oocyte complexes (pCOCs) and follicular fluid during follicular development. The samples were obtained from local slaughterhouses in Nakorn Pathom Province, Thailand. Pigs were classified as either in the follicular phase or the luteal phase. Porcine follicles (n = 3,510) were categorized as small (1-3 mm in diameter; n = 2,910), medium (4-6 mm in diameter; n = 530), and large (7-8 mm in diameter; n = 70). Then pCOCs and follicular fluid were collected. We found that the oocytes can be categorized into intact cumulus cell layer oocytes, multi-cumulus cell layer oocytes, partial cumulus cell layer oocytes, completely denuded oocytes, and degenerated oocytes. A high percentage of intact and multi-cumulus cell layer oocytes was found in small follicles (54.68%), medium follicles (69.06%), and large follicles (68.57%); these have high potential to develop into matured oocytes in vitro. The protein composition of the follicular fluid was separated by the SDS-PAGE technique. The results show that the protein molecular weights in the small and medium follicles are 23, 50, 66, 75, 92, 100, 132, 163, 225 and >225 kDa, while the protein molecular weights in large follicles are 12, 16, 23, 50, 66, 75, 92, 100, 132, 163, 225 and >225 kDa. All these proteins play an important role in promoting and regulating the development and maturation of oocytes and in the regulation of ovulation. We conclude that these porcine secretion proteins can be used as supplements in IVM/IVF technology. Acknowledgements: The project was funded by a grant from the Silpakorn University Research and Development Institute (SURDI) and the Faculty of Science, Silpakorn University, Thailand.

Keywords: porcine follicles, porcine oocyte, follicular fluid, SDS-PAGE

Procedia PDF Downloads 256
27686 A Predictive Machine Learning Model of the Survival of Female-led and Co-Led Small and Medium Enterprises in the UK

Authors: Mais Khader, Xingjie Wei

Abstract:

This research sheds light on female entrepreneurs by providing new insights into the survival predictions of companies led by females in the UK. The study builds a predictive machine learning model of the survival of female-led and co-led small and medium enterprises (SMEs) in the UK over the period 2000-2020. The predictive model utilised a combination of financial and non-financial features related to both the companies and their directors to predict SMEs' survival, and these features were studied in terms of their contribution to the resulting model. Five machine learning models were used: decision tree, AdaBoost, Naïve Bayes, logistic regression, and SVM. The AdaBoost model had the highest performance of the five, with an accuracy of 73% and an AUC of 80%. The results show high feature importance in predicting companies' survival for company size, management experience, financial performance, industry, region, and the percentage of females in management.
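The two reported evaluation metrics, accuracy and AUC, can be computed from scratch as follows. The survival labels and model scores below are hypothetical stand-ins, not the study's data.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation:
    the probability that a random positive outscores a random negative."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical survival labels (1 = survived) and one model's scores
y = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.4, 0.7, 0.6, 0.3, 0.65, 0.8, 0.2]
preds = [1 if s >= 0.5 else 0 for s in scores]
acc = accuracy(y, preds)
roc_auc = auc(y, scores)
```

In practice a library such as scikit-learn would supply both metrics; the point here is only what the two numbers measure.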

Keywords: company survival, entrepreneurship, females, machine learning, SMEs

Procedia PDF Downloads 99
27685 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis

Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar

Abstract:

Natural Language Processing (NLP) has gained significant attention lately. It has proved its ability to analyze and extract insights from unstructured text data in various languages. One of the most popular NLP applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, in multiple languages. While several multilingual NLP models are available for sentiment analysis, there is a need to investigate their effectiveness in different contexts and applications. In this study, we investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. The performance of several NLP models, including Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, is compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach, in which the dataset is randomly divided into training and testing sets and the process is repeated multiple times. A grid search was applied to optimize the hyperparameters of each model and select the best-performing model for each category of product reviews and language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications.
The strengths and limitations of each model were identified, and recommendations for selecting the most performant model based on the specific requirements of a project were provided. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
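A minimal sketch of the cross-validation-with-grid-search procedure described above, using a toy word-count sentiment model in place of the large multilingual models. The corpus, lexicon, and the single hyperparameter are invented for illustration.

```python
import random

# Tiny stand-in lexicon; real multilingual models would replace word counting
POSITIVE = {"good", "great", "excellent", "bueno", "parfait"}

def predict(text, threshold):
    """Label a review positive (1) if it contains enough lexicon hits."""
    hits = sum(w in POSITIVE for w in text.lower().split())
    return 1 if hits >= threshold else 0

def kfold_accuracy(data, threshold, k=3, seed=0):
    """Randomized k-fold cross-validation, as described in the abstract."""
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    accs = []
    for fold in folds:  # each fold serves once as the held-out test set
        accs.append(sum(predict(t, threshold) == y for t, y in fold) / len(fold))
    return sum(accs) / k

data = [("good great product", 1), ("muy bueno", 1), ("parfait vraiment", 1),
        ("terrible waste", 0), ("broke fast", 0), ("awful service", 0)]
# Grid search over the only hyperparameter of this toy model
best = max([1, 2], key=lambda th: kfold_accuracy(data, th))
```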

Keywords: NLP, multilingual, sentiment analysis, texts

Procedia PDF Downloads 99
27684 Effect of the Aluminum Fraction “X” on the Laser Wavelengths in GaAs/AlxGa1-xAs Superlattices

Authors: F. Bendahma, S. Bentata

Abstract:

In this paper, we study numerically the eigenstates of a GaAs/AlxGa1-xAs superlattice with structural disorder in the trimer height barrier (THB). The aluminium concentration x takes at random two different values; one of them appears only in triplets (trimers) and remains lower than the second throughout the studied structure. In spite of the presence of disorder, the system exhibits two kinds of sets of propagating states lying below the barrier, due to the characteristic structure of the superlattice. This result allows us to note the existence of a single laser emission associated with the trimer; the wavelengths obtained lie in the mid-infrared.

Keywords: infrared (IR), laser emission, superlattice, trimer

Procedia PDF Downloads 447
27683 A Low Phase Noise CMOS LC Oscillator with Tail Current-Shaping

Authors: Amir Mahdavi

Abstract:

In this paper, a circuit topology of voltage-controlled oscillators (VCOs) suitable for ultra-low-phase-noise operation is introduced. To this end, a new low-phase-noise cross-coupled oscillator is designed by taking the general cross-coupled oscillator topology and adding a differential stage for tail current shaping. In addition, a tail current-shaping technique to improve phase noise in differential LC VCOs is presented. The tail current becomes large when the oscillator output voltage reaches its maximum or minimum value, where the sensitivity of the output phase to noise is smallest, and becomes small when the phase noise sensitivity is large. The proposed circuit uses neither extra power nor extra noisy active devices. Furthermore, this topology occupies a small area. Simulation results show an improvement in phase noise of 2.5 dB under the same conditions at a carrier frequency of 1 GHz for GSM applications. The power consumption of the proposed circuit is 2.44 mW, and a figure of merit (FOM) of -192.2 dBc/Hz is achieved for the new oscillator.
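The reported figure of merit relates phase noise, carrier frequency, and power through the standard oscillator FOM formula, sketched below. The carrier (1 GHz) and power (2.44 mW) are the reported values; the -136.1 dBc/Hz phase noise at an assumed 1 MHz offset is a back-calculated figure chosen to reproduce the reported FOM, not a number from the paper.

```python
import math

def vco_fom(phase_noise_dbc, f0, offset, p_mw):
    """Standard oscillator figure of merit (more negative is better):
    FOM = L(df) - 20*log10(f0/df) + 10*log10(P / 1 mW)."""
    return (phase_noise_dbc
            - 20 * math.log10(f0 / offset)
            + 10 * math.log10(p_mw))

# Assumed phase noise of -136.1 dBc/Hz at a 1 MHz offset from a 1 GHz carrier
fom = vco_fom(-136.1, f0=1e9, offset=1e6, p_mw=2.44)
```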

Keywords: LC oscillator, low phase noise, current shaping, diff mode

Procedia PDF Downloads 598
27682 A Value-Oriented Metamodel for Small and Medium Enterprises’ Decision Making

Authors: Romain Ben Taleb, Aurélie Montarnal, Matthieu Lauras, Mathieu Dahan, Romain Miclo

Abstract:

To be competitive and sustainable, any company has to maximize its value. However, unlike listed companies that can assess their value based on market shares, most Small and Medium Enterprises (SMEs), which are non-listed, cannot have direct and live access to this critical information. Traditional accounting reports give SME decision-makers only limited insight into the real impact of their day-to-day decisions on the company's performance and value. Most of the time, an SME's financial valuation is made once a year, as the associated process is time- and resource-consuming, requiring several months and external expertise to complete. To solve this issue, we propose in this paper a value-oriented metamodel that enables real-time and dynamic assessment of an SME's value based on a broad definition of its assets. These assets cover a wider scope of the company's resources and better account for immaterial assets. The proposal, illustrated in a case study, discusses the benefits of incorporating assets in SME valuation.

Keywords: SME, metamodel, decision support system, financial valuation, assets

Procedia PDF Downloads 91
27681 A Method of Detecting the Difference in Two States of Brain Using Statistical Analysis of EEG Raw Data

Authors: Digvijaysingh S. Bana, Kiran R. Trivedi

Abstract:

This paper introduces various alpha-wave-based methods to detect the difference between two states of the brain. One healthy subject participated in the experiment. EEG was measured on the forehead above the eye (FP1 position), with the reference and ground electrodes on an ear clip. The data samples were obtained in the form of EEG raw data, with a reading duration of one minute. Various tests were performed on the alpha-band EEG raw data. The readings were taken at different times throughout the day, and statistical analysis was carried out on the EEG sample data in the form of various tests.
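A minimal sketch of one such test: estimating alpha-band (8-13 Hz) power from a raw EEG epoch with a plain DFT. The sampling rate and the synthetic 10 Hz signal are assumptions for illustration; real raw data from the FP1 electrode would replace them.

```python
import math

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi] Hz band via a direct DFT (fine for short epochs)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of DFT bin k
        if lo <= f <= hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re * re + im * im) / n
    return total

fs = 128  # Hz; an assumed, typical EEG sampling rate
t = [i / fs for i in range(fs)]  # one second of samples
# Synthetic "relaxed" epoch: a dominant 10 Hz alpha rhythm
eeg = [math.sin(2 * math.pi * 10 * ti) for ti in t]
alpha = band_power(eeg, fs, 8, 13)   # large for this signal
beta = band_power(eeg, fs, 14, 30)   # near zero for this signal
```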

Keywords: electroencephalogram (EEG), biometrics, authentication, EEG raw data

Procedia PDF Downloads 461
27680 Linearization of Y-Force Equation of Rigid Body Equation of Motion and Behavior of Fighter Aircraft under Imbalance Weight on Wings during Combat

Authors: Jawad Zakir, Syed Irtiza Ali Shah, Rana Shaharyar, Sidra Mahmood

Abstract:

The Y-force equation comprises the aerodynamic drag and side force with sideslip angle β, together with the weight component and the coupled roll (φ) and pitch (θ) angles. This research deals with the linearization of the Y-force equation using small disturbance theory, assuming equilibrium flight conditions for the different state variables of the aircraft. By applying the assumptions of small disturbance theory to the non-linear Y-force equation, we arrive at the linearized lateral rigid-body equation of motion, which shows that the lateral acceleration depends on the other aerodynamic and propulsive forces: the vertical tail, the change in roll rate (Δp) from equilibrium, the change in yaw rate (Δr) from equilibrium, the change in lateral velocity due to side force, and the drag and side-force components due to sideslip; the lateral equation is also transformed from a coupled to a decoupled rotating frame. This paper describes the implementation of this linearized lateral equation for aircraft control systems. Another significant parameter on which the Y-force equation depends is 'c', which shows that any change brought about in the weight of an aircraft's wing will cause Δφ and produce a lateral force, Y_c. This simplification also leads to lateral static and dynamic stability. Linearization of the equations is required because much of the mathematics of control system design for aircraft is based on linear equations. This technique is simple and eases the linearization of the rigid-body equations of motion without using any high-speed computers.
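A standard small-disturbance form of the linearized lateral Y-force equation, consistent with the terms described above (this is the textbook form under the stated assumptions, not necessarily the exact notation of the paper):

```latex
% Linearized lateral (Y) force equation under small disturbance theory.
% Y_v, Y_p, Y_r are stability derivatives and Y_{\delta_r} a control
% derivative, all evaluated at trim; u_0, w_0, \theta_0 are trim values.
m\left(\Delta\dot{v} + u_0\,\Delta r - w_0\,\Delta p\right)
  = Y_v\,\Delta v + Y_p\,\Delta p + Y_r\,\Delta r
  + mg\cos\theta_0\,\Delta\phi + Y_{\delta_r}\,\Delta\delta_r
```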

Keywords: Y-force linearization, small disturbance theory, side slip, aerodynamic force drag, lateral rigid body equation of motion

Procedia PDF Downloads 493
27679 Innovate, Educate, and Transform, Tailoring Sustainable Waste Handling Solutions for Nepal’s Small Populated Municipalities: Insights From Chandragiri Municipality

Authors: Anil Kumar Baral

Abstract:

The research introduces a ground-breaking approach to waste management, emphasizing innovation, education, and transformation. Using Chandragiri Municipality as a case study, the study advocates a shift from traditional to progressive waste management strategies, contributing an inventive waste framework, sustainability advocacy, and a transformative blueprint. The waste composition analysis highlights Chandragiri's representative profile, leading to a comprehensive plan that addresses challenges and recommends a transition to a profitable waste treatment model, supported by relevant statistics. The data-driven approach uses official waste-composition data from Chandragiri Municipality as secondary data and primary data collected from Chandragiri households, ensuring a nuanced perspective. Discussions on implementation, viability, and environmental preservation underscore the dual benefit of sustainability. The study includes a comparative analysis, a monitoring and evaluation framework, an examination of international relevance and collaboration, and a social and environmental impact assessment. The results indicate the necessity for creative changes in Chandragiri's waste practices, recommending separate treatment centers at the ward level rather than the municipal level, composting machines, and a centralized waste treatment plant. Educational reforms involve revising school curricula and awareness campaigns. The transformation's success hinges on reducing waste size, efficient treatment center operation, and ongoing public literacy. The conclusion summarizes key findings, envisioning a future with sustainable waste management practices deeply embedded in the community fabric.

Keywords: innovate, educate, transform, municipality, method

Procedia PDF Downloads 45
27678 European Food Safety Authority (EFSA) Safety Assessment of Food Additives: Data and Methodology Used for the Assessment of Dietary Exposure for Different European Countries and Population Groups

Authors: Petra Gergelova, Sofia Ioannidou, Davide Arcella, Alexandra Tard, Polly E. Boon, Oliver Lindtner, Christina Tlustos, Jean-Charles Leblanc

Abstract:

Objectives: To assess chronic dietary exposure to food additives in different European countries and population groups. Method and Design: The European Food Safety Authority's (EFSA) Panel on Food Additives and Nutrient Sources added to Food (ANS) estimates chronic dietary exposure to food additives with the purpose of re-evaluating food additives that were previously authorized in Europe. For this, EFSA uses concentration values (usage and/or analytical occurrence data) reported through regular public calls for data by the food industry and European countries. These are combined, at individual level, with national food consumption data from the EFSA Comprehensive European Food Consumption Database, which includes data from 33 dietary surveys from 19 European countries and considers six different population groups (infants, toddlers, children, adolescents, adults, and the elderly). The EFSA ANS Panel estimates dietary exposure for each individual in the EFSA Comprehensive Database by combining the occurrence levels per food group with their corresponding consumption amount per kg body weight. An individual average exposure per day is calculated, resulting in distributions of individual exposures per survey and population group. Based on these distributions, the average and 95th percentile of exposure are calculated per survey and per population group. Dietary exposure is assessed based on two different sets of data: (a) maximum permitted levels (MPLs) of use set down in EU legislation (defined as the regulatory maximum level exposure assessment scenario) and (b) usage levels and/or analytical occurrence data (defined as the refined exposure assessment scenario). The refined exposure assessment scenario is sub-divided into the brand-loyal consumer scenario and the non-brand-loyal consumer scenario.
For the brand-loyal consumer scenario, the consumer is considered to be exposed on long-term basis to the highest reported usage/analytical level for one food group, and at the mean level for the remaining food groups. For the non-brand-loyal consumer scenario, the consumer is considered to be exposed on long-term basis to the mean reported usage/analytical level for all food groups. An additional exposure from sources other than direct addition of food additives (i.e. natural presence, contaminants, and carriers of food additives) is also estimated, as appropriate. Results: Since 2014, this methodology has been applied in about 30 food additive exposure assessments conducted as part of scientific opinions of the EFSA ANS Panel. For example, under the non-brand-loyal scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers up to 5.9 and 8.7 mg/kg body weight/day, respectively. The same estimates under the brand-loyal scenario in toddlers resulted in exposures of 8.1 and 20.7 mg/kg body weight/day, respectively. For the regulatory maximum level exposure assessment scenario, the highest 95th percentile of exposure to α-tocopherol (E 307) and ammonium phosphatides (E 442) was estimated in toddlers up to 11.9 and 30.3 mg/kg body weight/day, respectively. Conclusions: Detailed and up-to-date information on food additive concentration values (usage and/or analytical occurrence data) and food consumption data enable the assessment of chronic dietary exposure to food additives to more realistic levels.
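The core computation described above (combining occurrence levels with consumption amounts per kg body weight, then averaging per day for each individual) can be sketched as follows. The foods, levels, and food diary are hypothetical, not EFSA data.

```python
from statistics import mean

def exposure_per_day(diary, occurrence, body_weight_kg):
    """Chronic exposure for one individual, in mg additive / kg bw / day:
    sum over eaten foods of (amount [g] x occurrence level [mg/g]) / bw,
    accumulated per diary day and then averaged across days."""
    days = {}
    for day, food, grams in diary:
        days[day] = days.get(day, 0.0) + grams * occurrence[food] / body_weight_kg
    return mean(days.values())

# Hypothetical 2-day diary for a 12 kg toddler; levels in mg additive per g food
occurrence = {"biscuit": 0.05, "dessert": 0.10}
diary = [(1, "biscuit", 30), (1, "dessert", 50), (2, "biscuit", 20)]
exposure = exposure_per_day(diary, occurrence, body_weight_kg=12.0)
```

Repeating this for every individual in a survey yields the distribution from which the mean and 95th percentile are taken.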

Keywords: α-tocopherol, ammonium phosphatides, dietary exposure assessment, European Food Safety Authority, food additives, food consumption data

Procedia PDF Downloads 325
27677 Nitriding of Super-Ferritic Stainless Steel by Plasma Immersion Ion Implantation in Radio Frequency and Microwave Plasma System

Authors: H. Bhuyan, S. Mändl, M. Favre, M. Cisternas, A. Henriquez, E. Wyndham, M. Walczak, D. Manova

Abstract:

The 470 Li-24 Cr and 460 Li-21 Cr are two alloys belonging to the next generation of super-ferritic, nickel-free stainless steel grades, containing titanium (Ti), niobium (Nb), and small percentages of carbon (C) and nitrogen (N). The addition of Ti and Nb improves in general the corrosion resistance, while the low interstitial content of C and N assures finer precipitates and greater ductility compared to conventional ferritic grades. These grades are considered an economic alternative to AISI 316L and 304 due to comparable or superior corrosion resistance. However, since 316L and 304 can be nitrided to improve mechanical surface properties like hardness and wear resistance, it is hypothesized that the tribological properties of these super-ferritic stainless steel grades can also be improved by plasma nitriding. Thus two sets of plasma immersion ion implantation experiments have been carried out, one with a high-pressure capacitively coupled radio frequency plasma at PUC Chile and the other using a low-pressure microwave plasma at IOM Leipzig, in order to explore further improvements in the mechanical properties of 470 Li-24 Cr and 460 Li-21 Cr steel. Nitrided and unnitrided substrates have subsequently been investigated using different surface characterization techniques, including secondary ion mass spectroscopy, scanning electron microscopy, energy dispersive x-ray analysis, Vickers hardness, wear resistance, and corrosion tests. In most of the characterizations no major differences have been observed between nitrided 470 Li-24 Cr and 460 Li-21 Cr. Due to the ion bombardment, an increase in the surface roughness is observed for higher treatment temperatures, independent of the steel type. The formation of chromium nitride compound takes place only at treatment temperatures around 400 °C-450 °C or above. However, corrosion properties deteriorate after treatment at higher temperatures.
The physical characterization results show up to 25 at.% of nitrogen for a diffusion zone of 4-6 μm, and a 4-5 times increase in hardness for different experimental conditions. The samples implanted at temperatures higher than 400 °C presented a wear resistance around two orders of magnitude higher than that of the untreated substrates. The hardness is apparently affected by the different roughness of the samples and their different nitrogen profiles.

Keywords: ion implantation, plasma, RF and microwave plasma, stainless steel

Procedia PDF Downloads 463
27676 Digital Immunity System for Healthcare Data Security

Authors: Nihar Bheda

Abstract:

Protecting digital assets such as networks, systems, and data from advanced cyber threats is the aim of Digital Immunity Systems (DIS), which are a subset of cybersecurity. With features like continuous monitoring, coordinated reactions, and long-term adaptation, DIS seeks to mimic biological immunity. This minimizes downtime by automatically identifying and eliminating threats. Traditional security measures, such as firewalls and antivirus software, are insufficient for enterprises, such as healthcare providers, given the rapid evolution of cyber threats. The number of medical record breaches that have occurred in recent years is proof that attackers are finding healthcare data to be an increasingly valuable target. However, obstacles to enhancing security include outdated systems, financial limitations, and a lack of knowledge. DIS is an advancement in cyber defenses designed specifically for healthcare settings. Protection akin to an "immune system" is produced by core capabilities such as anomaly detection, access controls, and policy enforcement. Coordination of responses across IT infrastructure to contain attacks is made possible by automation and orchestration. Massive amounts of data are analyzed by AI and machine learning to find new threats. After an incident, self-healing enables services to resume quickly. The implementation of DIS is consistent with the healthcare industry's urgent requirement for resilient data security in light of evolving risks and strict guidelines. With resilient systems, it can help organizations lower business risk, minimize the effects of breaches, and preserve patient care continuity. DIS will be essential for protecting a variety of environments, including cloud computing and the Internet of medical devices, as healthcare providers quickly adopt new technologies. DIS lowers traditional security overhead for IT departments and offers automated protection, even though it requires an initial investment. 
In the near future, DIS may prove to be essential for small clinics, blood banks, imaging centers, large hospitals, and other healthcare organizations. Cyber resilience can become attainable for the whole healthcare ecosystem with customized DIS implementations.

Keywords: digital immunity system, cybersecurity, healthcare data, emerging technology

Procedia PDF Downloads 66
27675 Estimating Knowledge Flow Patterns of Business Method Patents with a Hidden Markov Model

Authors: Yoonjung An, Yongtae Park

Abstract:

Knowledge flows are a critical source of faster technological progress and stronger economic growth. Knowledge flows have accelerated dramatically with the establishment of a patent system in which each patent is required by law to disclose sufficient technical information for the invention to be recreated. Patent analysis has thus been widely used to help investigate technological knowledge flows. However, existing research is limited in terms of both subject and approach. In particular, most previous studies did not cover business method (BM) patents, although they are as important a driver of knowledge flows as other patents. In addition, these studies usually focus on static analysis of knowledge flows. Some use approaches that incorporate the time dimension, yet they still fail to trace the true dynamic process of knowledge flows. Therefore, we investigate dynamic patterns of knowledge flows driven by BM patents using a Hidden Markov Model (HMM). An HMM is a popular statistical tool for modeling a wide range of time-series data, with no general theoretical limit with regard to statistical pattern classification. Accordingly, it enables characterizing knowledge patterns that may differ by patent, sector, country, and so on. We run the model on sets of backward citations and forward citations to compare the patterns of knowledge utilization and knowledge dissemination.
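The likelihood computation at the heart of an HMM can be sketched with the forward algorithm on a discretized citation-count sequence. The two hidden states and the transition and emission probabilities below are illustrative, not estimates from patent data.

```python
def forward_likelihood(obs, pi, A, B):
    """P(observation sequence) under a discrete HMM via the forward algorithm."""
    n = len(pi)
    # Initialization: alpha_1(s) = pi(s) * B[s][first observation]
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # Induction: propagate through transitions, weight by emissions
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two hidden knowledge-flow states (0 = low, 1 = high); observations are
# discretized citation counts per period (0 = few, 1 = many).
pi = [0.6, 0.4]                        # initial state distribution
A = [[0.7, 0.3], [0.4, 0.6]]           # state transition probabilities
B = [[0.8, 0.2], [0.3, 0.7]]           # emission probabilities
p = forward_likelihood([0, 1, 1], pi, A, B)
```

Fitting such a model to real forward- and backward-citation sequences (e.g. with Baum-Welch) is what allows the dynamic patterns described in the abstract to be compared.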

Keywords: business method patents, dynamic pattern, Hidden-Markov Model, knowledge flow

Procedia PDF Downloads 327
27674 Modeling of a Small Unmanned Aerial Vehicle

Authors: Ahmed Elsayed Ahmed, Ashraf Hafez, A. N. Ouda, Hossam Eldin Hussein Ahmed, Hala Mohamed ABD-Elkader

Abstract:

Unmanned Aircraft Systems (UAS) are playing increasingly prominent roles in defense programs and defense strategies around the world. Technology advancements have enabled them to perform missions such as reconnaissance, surveillance, combat, and communications relay. Simulating the dynamics of a small unmanned aerial vehicle (SUAV) and analyzing its behavior at the preflight stage is important and cost-efficient. The first step in UAV design is the mathematical modeling of the nonlinear equations of motion. In this paper, a standard method is used to obtain the full nonlinear equations of motion, after which the equations are linearized about a steady-state flight condition (trim). This modeling technique is applied to an Ultrastick-25e fixed-wing UAV to obtain the linear longitudinal and lateral models. Finally, the model is validated by comparing the state responses of the nonlinear UAV with those of the resulting linear model under doublet inputs at the control surfaces.
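The linearization step can be sketched numerically as small-perturbation Jacobians about a trim point. The toy pitch dynamics below are purely illustrative and are not the Ultrastick-25e model:

```python
import math

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference linearization of x_dot = f(x, u) about the trim
    point (x0, u0), returning the state matrix A and input matrix B."""
    def pert(vec, k, d):
        v = list(vec)
        v[k] += d
        return v
    n, m = len(x0), len(u0)
    A = [[(f(pert(x0, j, eps), u0)[i] - f(pert(x0, j, -eps), u0)[i]) / (2 * eps)
          for j in range(n)] for i in range(n)]
    B = [[(f(x0, pert(u0, j, eps))[i] - f(x0, pert(u0, j, -eps))[i]) / (2 * eps)
          for j in range(m)] for i in range(n)]
    return A, B

# Toy longitudinal (pitch) dynamics: states [theta, q], input [elevator].
def pitch_dynamics(x, u):
    theta, q = x
    return [q, -4.0 * math.sin(theta) - 0.7 * q + 2.5 * u[0]]

A, B = linearize(pitch_dynamics, [0.0, 0.0], [0.0])
print(A)  # ≈ [[0, 1], [-4.0, -0.7]]
print(B)  # ≈ [[0], [2.5]]
```

Checking the linear model against the nonlinear one, as the authors do with doublet inputs, amounts to integrating both systems from the same trim point and comparing state histories.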

Keywords: UAV, equations of motion, modeling, linearization

Procedia PDF Downloads 740
27673 Effect of Cloud Computing on Enterprises

Authors: Amir Rashid

Abstract:

Today's world is one of innovation, where everyone is looking for change. Organizations are now looking toward virtualization to minimize their computing cost. Cloud computing likewise promises to reduce computing cost: it offers a different approach that improves utilization and reduces infrastructure and administrative costs. Cloud computing is essentially the amalgamation of utility computing and SaaS (Software as a Service). It is still relatively new to organizations, as it remains in its deployment stage; for this reason, organizations are not confident about whether to adopt it. This study investigates organizations' concerns about security and cost. The benefits and drawbacks that organizations may gain or suffer by adopting cloud computing are highlighted. In conclusion, cloud computing is a better option for small and medium organizations than for large companies, in terms of both data security and cost.

Keywords: cloud computing, security, cost, elasticity, PaaS, IaaS, SaaS

Procedia PDF Downloads 339
27672 Computation of ΔV Requirements for Space Debris Removal Using Orbital Transfer

Authors: Sadhvi Gupta, Charulatha S.

Abstract:

Since the early 1950s, humans have launched numerous vehicles into space. From rockets to rovers, tremendous growth has been achieved in the technology sector. While this has been largely beneficial, the one major downside that can no longer be ignored is the junk it has produced in space, i.e., space debris. Space debris consists of objects launched from Earth that remain in orbit until they re-enter the atmosphere. It comes in various sizes: the larger pieces are mainly dead satellites floating in space, while smaller pieces include paint flecks, screwdrivers, bolts, etc. Tracking small space debris less than 10 cm in size is impossible, which can have vast implications. As the amount of debris in space increases, so do the chances of it hitting a functional satellite, and it is extremely costly to repair or recover a satellite once it is struck by orbiting debris. The proposed solution is to actively remove space debris while keeping space sustainability in mind. For this solution, a total of 8 modules will be launched into LEO and GEO and placed in their desired orbits through Hohmann transfers, for which calculating the ΔV values is crucial. The modules are then placed in their designated positions in STK software, and a thorough analysis is conducted.

Keywords: space debris, Hohmann transfer, STK, delta-V

Procedia PDF Downloads 85
27671 Airborne Molecular Contamination in Clean Room Environment

Authors: T. Rajamäki

Abstract:

In a clean room environment, molecular contamination in very small concentrations can cause significant harm to components and processes. This is commonly referred to as airborne molecular contamination (AMC). There is a shortage of high-sensitivity continuous measurement data on the existence and behavior of several of these contaminants. Accordingly, in most cases the correlation between the concentration of harmful molecules and their effect on processes is not known. In addition, the formation and distribution of contaminating molecules are unclear. In this work, sensitive optical techniques are applied in clean room facilities to investigate the concentrations, formation mechanisms, and effects of contaminating molecules. Special emphasis is placed on the reactive acid and base gases ammonia (NH3) and hydrogen fluoride (HF), which are key chemicals in several operations taking place in clean room processes.

Keywords: AMC, clean room, concentration, reactive gas

Procedia PDF Downloads 281
27670 Framework for Integrating Big Data and Thick Data: Understanding Customers Better

Authors: Nikita Valluri, Vatcharaporn Esichaikul

Abstract:

With the popularity of data-driven decision-making on the rise, this study provides an alternative outlook on the decision-making process. Combining quantitative methods with qualitative methods rooted in the social sciences, an integrated framework is presented that delivers a more robust and efficient approach to data-driven decision-making with respect not only to Big data but also to 'Thick data', a new form of qualitative data. In support of this, an example from the retail sector illustrates how the framework is put into action to yield insights and leverage business intelligence. An interpretive approach is used to glean insights from both the quantitative and the qualitative data. Using traditional point-of-sale data as well as an understanding of customer psychographics and preferences, data mining techniques are applied alongside qualitative methods (such as grounded theory, ethnomethodology, etc.). The study's final goal is to establish the framework as the basis for a holistic solution encompassing both the Big and Thick aspects of any business need. The proposed framework is an enhancement of the traditional data-driven decision-making approach, which depends mainly on quantitative data.

Keywords: big data, customer behavior, customer experience, data mining, qualitative methods, quantitative methods, thick data

Procedia PDF Downloads 161
27669 Managing the Transition from Voluntary to Mandatory Climate Reporting: The Role of Carbon Accounting

Authors: Qingliang Tang

Abstract:

The transition from voluntary to mandatory carbon reporting (also referred to as climate reporting) poses serious challenges for accounting professionals aiming to support firms in achieving net-zero goals. The accounting literature addresses the questions currently bewildering accounting academics and professional accountants: how to make accounting a useful tool for management to achieve a carbon-neutral business model. This paper explores the evolving role of carbon accounting within corporate financial reporting systems, emphasizing its integration as a crucial component. Key challenges addressed include data availability, climate risk assessment, defining reporting boundaries, selecting appropriate greenhouse gas (GHG) accounting methodologies, and integrating climate-related events into traditional financial statements. A dynamic, integrated carbon accounting framework is proposed to facilitate this transformative process effectively. Furthermore, the paper identifies critical knowledge gaps and sets forth a research agenda aimed at enhancing the transparency and relevance of carbon accounting and reporting systems, thereby empowering informed decision-making. The purpose of the paper is to succinctly capture the essence of carbon accounting practice in the transitional period, focusing on the challenges, proposed solutions, and future research directions in carbon accounting and mandatory climate reporting.

Keywords: mandatory carbon reporting, carbon management, net zero target, sustainability, climate risks

Procedia PDF Downloads 15
27668 Some Results on the Generalized Higher Rank Numerical Ranges

Authors: Mohsen Zahraei

Abstract:

In this paper, the notion of the rank-k numerical range of rectangular complex matrix polynomials is introduced. Some algebraic and geometrical properties are investigated. Moreover, for ε > 0, the notion of Birkhoff-James approximate orthogonality sets for ε-higher rank numerical ranges of rectangular matrix polynomials is also introduced and studied. The proposed definitions yield a natural generalization of the standard higher rank numerical ranges.
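For reference, one common formulation of the standard rank-k numerical range that these definitions generalize, for a square matrix A, is:

```latex
\[
  \Lambda_k(A) \;=\; \bigl\{\, \lambda \in \mathbb{C} \;:\;
  P A P = \lambda P \ \text{for some rank-}k\ \text{orthogonal projection } P \,\bigr\},
\]
```

with $\Lambda_1(A)$ recovering the classical numerical range $\{x^* A x : \|x\| = 1\}$. The paper's contribution extends this object to rectangular matrix polynomials.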

Keywords: rank-k numerical range, isometry, numerical range, rectangular matrix polynomials

Procedia PDF Downloads 457
27667 Approach for Demonstrating Reliability Targets for Rail Transport during Low Mileage Accumulation in the Field: Methodology and Case Study

Authors: Nipun Manirajan, Heeralal Gargama, Sushil Guhe, Manoj Prabhakaran

Abstract:

In the railway industry, train sets are designed based on contractual requirements (the mission profile), where reliability targets are measured in terms of mean distance between failures (MDBF). However, at the beginning of revenue service, trains often do not achieve the designed mission-profile distance (mileage) within the expected timeframe, due to infrastructure constraints, scarcity of commuters, or other operational challenges, and thus do not match the original design inputs. Because the trains do not run enough to accumulate the designed mileage within the specified time, the car builder risks failing the contractual MDBF target. This paper proposes a constant-failure-rate-based model for situations where mileage accumulation falls short of the design mission profile. The model provides an appropriate MDBF target to be demonstrated based on the actual accumulated mileage. A case study of rolling stock running in the field is undertaken to analyze the failure data and the MDBF target demonstration during low mileage accumulation. The results of the case study show that, with the proposed method, reliability targets are achieved under low mileage accumulation.
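As a hedged illustration of the reliability bookkeeping involved (not the paper's exact appropriation model), the constant-failure-rate point estimate of MDBF and a target check can be sketched as follows; all fleet numbers are hypothetical:

```python
def observed_mdbf(total_mileage_km, n_failures):
    """Point estimate of MDBF under a constant failure rate:
    accumulated fleet mileage divided by observed failures."""
    if n_failures == 0:
        return float('inf')
    return total_mileage_km / n_failures

def target_met(total_mileage_km, n_failures, mdbf_target_km):
    """Has the fleet's observed MDBF reached the (appropriated)
    target at this level of mileage accumulation?"""
    return observed_mdbf(total_mileage_km, n_failures) >= mdbf_target_km

# Hypothetical fleet: 1.2 million km accumulated, 8 service-affecting
# failures, appropriated target of 140,000 km between failures.
print(observed_mdbf(1_200_000, 8))        # → 150000.0
print(target_met(1_200_000, 8, 140_000))  # → True
```

The paper's contribution is in how the target itself is scaled down for low mileage; a full demonstration would also attach statistical confidence (e.g. chi-square bounds) to the point estimate.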

Keywords: mean distance between failures, mileage-based reliability, reliability target appropriations, rolling stock reliability

Procedia PDF Downloads 265
27666 Resistance of Haemonchus spp. to Albendazole, Fenbendazole and Levamisole in 4 Goat Farms of Antioquia, Colombia

Authors: Jose D. Zapata-Torres, Esteban Naranjo-Gutiérrez, Angela M. Martínez-Valencia, Jenny J. Chaparro-Gutiérrez, David Villar-Argaiz

Abstract:

Reports of drug resistance have been made in every livestock host and for every anthelmintic class. In some regions of the world, the extremely high prevalence of multi-drug resistance in nematodes of sheep and goats threatens the viability of small-ruminant industries. In the region of Antioquia, Colombia, no reports of nematode resistance have been documented, owing to a lack of veterinary diagnostic laboratories. The objective of this study was to evaluate the efficacy of albendazole, fenbendazole, and levamisole in controlling gastrointestinal nematodes on goat farms of Antioquia using fecal egg count reduction tests. A total of 139 crossbreed goats from four separate farms were sampled for feces prior to, and 14 days following, anthelmintic treatment. Individual fecal egg counts were performed using the modified three-chamber McMaster technique. The anthelmintics administered at day 0 were albendazole (farm 1, n=63), fenbendazole (farm 2, n=20), and levamisole (farms 3 and 4, n=37 and 19). Larval cultures were used to identify the genus of nematodes using Baermann's technique and the morphological keys for identifying L3 larvae in small ruminants. There was no difference in fecal egg counts between days 0 and 14, with means (±SD) of 1681.5 ± 2121.5 and 1715.12 ± 1895.4 epg (eggs per gram), respectively. The egg count reductions for each anthelmintic and farm were 25.86% for albendazole (farm 1), 0% for fenbendazole (farm 2), and 0% (farm 3) and 5.5% (farm 4) for levamisole. The nematode genus identified was predominantly Haemonchus spp., at 70.27% and 82.81% of samples from days 0 and 14, respectively. These results provide evidence of widespread resistance to three common anthelmintics. Further research is needed to design integrated management programs to control nematodes in small ruminants in Colombia.
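The fecal egg count reduction test reduces to simple arithmetic on pre- and post-treatment means; a sketch with hypothetical egg counts (not the study's raw data), noting that reductions well below the commonly cited ~95% efficacy threshold suggest resistance:

```python
from statistics import mean

def fecrt(pre_counts, post_counts):
    """Fecal egg count reduction (%) = 100 * (1 - mean(post) / mean(pre)),
    floored at 0 when post-treatment counts rise instead of falling."""
    reduction = 100.0 * (1.0 - mean(post_counts) / mean(pre_counts))
    return max(reduction, 0.0)

# Hypothetical eggs-per-gram counts for one treated group.
pre = [1800, 1500, 2100, 1600]
post = [1400, 1200, 1500, 1100]
print(round(fecrt(pre, post), 2))  # → 25.71, far below an effective drug
```

Reductions like the study's 0-25.86% sit nowhere near an efficacious result, which is what supports the resistance conclusion.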

Keywords: anthelmintics, goat, haemonchus, resistance

Procedia PDF Downloads 528
27665 Incremental Learning of Independent Topic Analysis

Authors: Takahiro Nishigaki, Katsumi Nitta, Takashi Onoda

Abstract:

In this paper, we present a method for applying Independent Topic Analysis (ITA) to a growing number of documents. The number of documents has been increasing since the spread of the Internet, and ITA was presented as one method for analyzing such document data. ITA extracts independent topics from document data using Independent Component Analysis (ICA), a technique from signal processing. However, it is difficult to apply ITA to a growing document collection, because ITA must process all documents at once, and its temporal and spatial costs are therefore very high. We therefore present Incremental ITA, which extracts independent topics from a growing number of documents by updating the topics extracted from the previous data whenever new documents are added. We show the results of applying Incremental ITA to benchmark datasets.

Keywords: text mining, topic extraction, independent, incremental, independent component analysis

Procedia PDF Downloads 307
27664 Contemporary Terrorism: Root Causes and Misconceptions

Authors: Thomas Slunecko Karat

Abstract:

The years since September 11, 2001 have produced a plethora of research papers with the word 'terrorism' in the title. Yet only a small subset of these papers has produced new data, which explains why more than 20 years of research since 9/11 have done little to increase our understanding of the mechanisms that lead to terrorism. Specifically, terrorism scholars are divided by political, temporal, geographical, and financial demarcation lines that prevent a clear definition of terrorism. As a consequence, the true root causes of terrorism remain unexamined. Instead, the psychopathological conditions of the individual have been emphasized, despite ample empirical evidence pointing in a different direction. This paper examines the underlying reasons and motives that prevent open discourse about the root causes of terrorism and proposes that terrorism is linked to the current international system of resource allocation and to systematic violations of human rights.

Keywords: terrorism, root causes of terrorism, prevention of terrorism, racism, human rights violations

Procedia PDF Downloads 90
27663 Open Data for e-Governance: Case Study of Bangladesh

Authors: Sami Kabir, Sadek Hossain Khoka

Abstract:

Open Government Data (OGD) refers to all data produced by a government that is accessible to the public in reusable form, free of cost, by anyone with Internet access. In line with the "Digital Bangladesh" vision of the Bangladesh government, the concept of open data has been gaining momentum in the country. Opening all government data in a digital, customizable format on a single platform can enhance e-governance and make government more transparent to the people. This paper presents an in-progress case study of the OGD portal being built by the Bangladesh government to link decentralized data. The initiative is intended to facilitate e-services for citizens through this one-stop web portal. The paper further discusses ways of collecting data in digital format from the relevant agencies with a view to making it publicly available through this single point of access. Finally, a possible layout for the web portal is presented.

Keywords: e-governance, one-stop web portal, open government data, reusable data, web of data

Procedia PDF Downloads 354
27662 Multiresolution Mesh Blending for Surface Detail Reconstruction

Authors: Honorio Salmeron Valdivieso, Andy Keane, David Toal

Abstract:

In mechanical reverse engineering, processes often encounter difficulties capturing small, highly localized surface information. This could be the case if a physical turbine were 3D scanned for lifecycle management or robust design purposes, with interest in eroded areas or scratched coating. The limitation is partly due to a lack of automated frameworks for handling localized surface information during the reverse engineering pipeline. We have developed a tool for blending surface patches with arbitrary irregularities into a base body (e.g., a CAD solid). The approach aims to transfer small surface features while preserving their shape and relative placement by using a multiresolution scheme and rigid deformations. Automating this process enables the inclusion of outsourced surface information in CAD models, including samples prepared in mesh handling software or raw scan information that would otherwise be discarded in the early stages of reverse engineering reconstruction.

Keywords: application lifecycle management, multiresolution deformation, reverse engineering, robust design, surface blending

Procedia PDF Downloads 138
27661 Improved Small-Signal Characteristics of Infrared 850 nm Top-Emitting Vertical-Cavity Lasers

Authors: Ahmad Al-Omari, Osama Khreis, Ahmad M. K. Dagamseh, Abdullah Ababneh, Kevin Lear

Abstract:

High-speed infrared vertical-cavity surface-emitting laser diodes (VCSELs) with Cu-plated heat sinks were fabricated and tested. VCSELs with a 10 μm aperture diameter and 4 μm of electroplated copper demonstrated a -3 dB modulation bandwidth (f-3dB) of 14 GHz and a resonance frequency (fR) of 9.5 GHz at a bias current density (Jbias) of only 4.3 kA/cm², corresponding to an improved f-3dB²/Jbias ratio of 44 GHz²/(kA/cm²). At higher and lower bias current densities, the f-3dB²/Jbias ratio decreased to about 30 GHz²/(kA/cm²) and 18 GHz²/(kA/cm²), respectively. Examination of the analogue modulation response showed that the presented VCSELs maintained a steady f-3dB/fR ratio of 1.41 ± 10% over the whole bias current range (1.3Ith to 6.2Ith). The devices also demonstrated a maximum modulation bandwidth (f-3dB,max) of more than 16 GHz at a bias current 25% below the industry-standard bias current for reliability.
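The bandwidth figure of merit quoted in the abstract is straightforward to recompute; with the reported 14 GHz at 4.3 kA/cm², the ratio works out to ≈45.6 GHz²/(kA/cm²), consistent with the quoted ~44 to within rounding of the reported inputs:

```python
def mbw_figure_of_merit(f3db_ghz, j_bias_ka_cm2):
    """Modulation-bandwidth figure of merit f_3dB^2 / J_bias,
    in GHz^2 per kA/cm^2 -- higher means more bandwidth per
    unit of bias current density (a proxy for device stress)."""
    return f3db_ghz ** 2 / j_bias_ka_cm2

# Values reported in the abstract: 14 GHz bandwidth at 4.3 kA/cm^2.
print(round(mbw_figure_of_merit(14.0, 4.3), 1))  # → 45.6
```

Normalizing f_3dB² by bias current density, rather than reporting raw bandwidth, is what allows fair comparison across devices with different apertures and drive conditions.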

Keywords: current density, high-speed VCSELs, modulation bandwidth, small-signal characteristics, thermal impedance, vertical-cavity surface-emitting lasers

Procedia PDF Downloads 568