Search results for: real excess portfolio returns
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6240

1680 Development of a Web-Based Application for Intelligent Fertilizer Management in Rice Cultivation

Authors: Hao-Wei Fu, Chung-Feng Kao

Abstract:

In the era of rapid technological advancement, information technology (IT) has become integral to modern life, exerting significant influence across diverse sectors and serving as a catalyst for development in various industries. Within agriculture, the integration of IT offers substantial benefits, notably enhancing operational efficiency. Real-time monitoring systems, for instance, have been widely embraced in agriculture, effectively improving crop management practices. This study specifically addresses the management of rice panicle fertilizer, presenting the development of a web application tailored to handle data associated with rice panicle fertilizer management. Leveraging the normalized difference red edge index, this application optimizes the quantity of rice panicle fertilizer used, providing recommendations to agricultural stakeholders and service providers in the agricultural information sector. The overarching objective is to minimize costs while maximizing yields. Furthermore, a robust database system has been established to store and manage relevant data for future reference in rice cultivation management. Additionally, the study utilizes the Representational State Transfer software architectural style to construct an application programming interface (API), facilitating data creation, retrieval, updating, and deletion for users via the HyperText Transfer Protocol methods. Future plans involve integrating this API with third-party services to incorporate it into larger frameworks, thus catering to the diverse requirements of various third-party services.
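
As a concrete illustration of the CRUD-over-HTTP design the abstract describes, the following is a minimal sketch of such an API in Flask. The route names, the `plots` resource, and the in-memory store are illustrative assumptions; the paper does not specify its schema or framework.

```python
# Minimal REST CRUD sketch in the spirit of the API described above.
# Resource and field names are hypothetical, not the paper's.
from flask import Flask, request, jsonify

app = Flask(__name__)
records = {}          # in-memory stand-in for the fertilizer database
next_id = 1

@app.route("/plots", methods=["POST"])               # Create
def create_plot():
    global next_id
    records[next_id] = request.get_json()            # e.g. {"ndre": 0.41, "area_ha": 1.2}
    next_id += 1
    return jsonify(id=next_id - 1), 201

@app.route("/plots/<int:pid>", methods=["GET"])      # Read
def read_plot(pid):
    return jsonify(records.get(pid, {}))

@app.route("/plots/<int:pid>", methods=["PUT"])      # Update
def update_plot(pid):
    records[pid] = request.get_json()
    return jsonify(ok=True)

@app.route("/plots/<int:pid>", methods=["DELETE"])   # Delete
def delete_plot(pid):
    records.pop(pid, None)
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run(debug=True)
```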

Keywords: application programming interface, HyperText Transfer Protocol, nitrogen fertilizer intelligent management, web-based application

Procedia PDF Downloads 62
1679 Challenges of Management of Acute Pancreatitis in Low Resource Setting

Authors: Md. Shakhawat Hossain, Jimma Hossain, Md. Naushad Ali

Abstract:

Acute pancreatitis is a dangerous medical emergency in gastroenterology practice. Its management requires a multidisciplinary approach, with support extending from the emergency department to the ICU, so there is a risk of mismanagement at every step, especially in low-resource settings. Other factors, such as the patient’s financial condition, education, social customs, transport facilities, and the referral system from the periphery, may also challenge adherence to current guidelines. The present study was intended to determine the clinico-pathological profile, severity assessment, and challenges of managing acute pancreatitis in a government tertiary care hospital, to portray the real scenario of management in a low-resource setting. A total of 100 patients with acute pancreatitis were studied prospectively in the Department of Gastroenterology, Rangpur Medical College Hospital, Bangladesh, from July 2017 to July 2018. Regarding severity, 85% of patients had mild, 13% moderately severe, and 2% severe acute pancreatitis according to the revised Atlanta criteria. The most common etiologies in our study were gallstones (15%) and biliary sludge (15%), whereas 54% of cases were idiopathic. The most common challenges we faced were delay in hospital admission (59%) and delay in diagnosis (20%); others were non-adherence by patients’ attendants, lack of investigation facilities, and physicians’ poor knowledge of current guidelines. We were able to give early aggressive fluid therapy, as per the current guideline, to only 18% of patients. Conclusion: Managing acute pancreatitis according to guidelines is challenging where optimal facilities are lacking, so modified guidelines for the assessment and management of acute pancreatitis should be prepared for low-resource settings.

Keywords: acute pancreatitis, challenges of management, severity, prognosis

Procedia PDF Downloads 132
1678 Consolidation Behavior of Lebanese Soil and Its Correlation with the Soil Parameters

Authors: Robert G. Nini

Abstract:

Soil consolidation is one of the biggest problems facing engineers. The consolidation process plays an important role in settlement analysis for embankments and footings resting on clayey soils, and the amount of settlement is related to the compression and swelling indexes of the soil. Because the predominant upper soil layer in Lebanon consists mainly of clay, this layer is a real challenge for structural and highway engineering. To determine the effect of load and drainage on the engineering consolidation characteristics of Lebanese soil, a full experimental and synthesis study was conducted on soil samples collected from many locations. The study consists of two parts. In the first, experimental part, the Proctor test and the consolidation test were performed on the collected samples, followed by soil identification tests such as hydrometer analysis, specific gravity, and Atterberg limits. The consolidation test, the main test in this research, is done by loading the soil for several days and then applying an unloading cycle; a typical consolidation test takes two weeks to complete. For these reasons, in the second part of the research, based on analysis of the experimental results, correlations were sought between the main consolidation parameters, the compression and swelling indexes, and other soil parameters that are easy to determine. The results show that the compression and swelling indexes of Lebanese clays may be roughly estimated using a model involving one or two variables in the form of the natural void ratio and the Atterberg limits. These correlations are of increasing importance for site engineers, and the proposed model also seems applicable to a wide range of clays worldwide.
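
To make the kind of one- or two-variable correlation concrete, here is a sketch of fitting the compression index Cc from the natural void ratio e0 and the liquid limit LL by least squares. The sample values are made up for illustration, and the paper's own coefficients are not reproduced; the classic Terzaghi–Peck relation is shown only as a well-known single-variable benchmark.

```python
# Least-squares fit of Cc from e0 and LL, with illustrative data.
import numpy as np

e0 = np.array([0.62, 0.75, 0.88, 0.95, 1.10, 1.24])   # natural void ratio
LL = np.array([34.0, 41.0, 47.0, 52.0, 58.0, 63.0])   # liquid limit (%)
Cc = np.array([0.18, 0.24, 0.29, 0.33, 0.39, 0.45])   # measured compression index

# Two-variable linear model: Cc ~ a*e0 + b*LL + c
A = np.column_stack([e0, LL, np.ones_like(e0)])
(a, b, c), *_ = np.linalg.lstsq(A, Cc, rcond=None)
print(f"Cc ~ {a:.3f}*e0 + {b:.4f}*LL + {c:.3f}")

# Classic single-variable benchmark (Terzaghi & Peck): Cc ~ 0.009*(LL - 10)
print("Terzaghi-Peck estimate:", (0.009 * (LL - 10)).round(3))
```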

Keywords: atterberg limits, clay, compression and swelling indexes, settlement, soil consolidation

Procedia PDF Downloads 138
1677 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique whose aim is to extract the structure of data, grouping similar data objects in the same cluster and dissimilar objects in different clusters. Clustering methods are widely used in fields such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) is one of the best-known fuzzy clustering methods. It is based on solving an optimization problem that minimizes a given cost function, aiming to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree to which a data point belongs to a cluster is measured by a membership value in the interval [0, 1]. In FCM clustering, the membership degrees are constrained so that the sum of a data object’s memberships over all clusters equals one. This constraint can cause several problems, especially when the data lie in a noisy space. Regularization introduces additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where the optimization problem minimizes the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because appropriate membership degrees lead to accurate clustering results. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
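
The following is a minimal sketch of entropy-regularized fuzzy clustering with a reference distribution, assuming the common closed-form update u_ij ∝ π_j · exp(−‖x_i − v_j‖² / λ) that relative-entropy regularization yields; this is a generic formulation, not necessarily the authors' exact model.

```python
# Generic relative-entropy-regularized fuzzy c-means sketch.
import numpy as np

def entropy_fcm(X, k, lam=1.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    prior = np.full(k, 1.0 / k)                     # reference distribution pi
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        U = prior * np.exp(-d2 / lam)               # unnormalized memberships
        U /= U.sum(axis=1, keepdims=True)           # each row sums to one
        centers = (U.T @ X) / U.sum(axis=0)[:, None]
    return U, centers

# Two Gaussian blobs as a toy data set.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
U, centers = entropy_fcm(X, k=2, lam=2.0)
print(centers.round(2))   # should land near (0, 0) and (4, 4)
```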

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 259
1676 Competing Risks Modeling Using Within-Node Homogeneity Classification Tree

Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya

Abstract:

To design a tree that maximizes within-node homogeneity, a homogeneity measure appropriate for event-history data with multiple risks is needed. We consider the use of deviance and modified Cox-Snell residuals as impurity measures in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which the homogeneity measure was based on the martingale residual. A data-structure approach was used to validate the performance of the proposed techniques via simulation and real-life data. The univariate competing-risks results revealed that using deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance technique. Bone marrow transplant data and a double-blinded randomized clinical trial, conducted in order to compare two treatments for patients with prostate cancer, were used to demonstrate the efficiency of the proposed method vis-à-vis existing ones. Results from empirical studies of the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (deviance = 16.6498) performs better than both the martingale residual (deviance = 160.3592) and the deviance residual (deviance = 556.8822) for both the event of interest and the competing risks. Results from the prostate cancer data likewise show the proposed model outperforming the existing one for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the martingale residual (MSE = 0.1853148) and the deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
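
As a hedged sketch of the residual-as-response idea, the snippet below computes Cox-Snell, martingale, and deviance residuals under a simple censored exponential model (an assumption made here for brevity) and then grows a regression tree on them, so that splits maximize within-node homogeneity. It is an illustration of the general technique, not the authors' exact procedure.

```python
# Residual-based tree building: exponential-model residuals fed to a CART.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(0, 1, (n, 2))                        # covariates
event = rng.exponential(1.0 / (0.5 + x[:, 0]), n)    # latent event times
censor = rng.exponential(2.0, n)                     # independent censoring
t = np.minimum(event, censor)                        # observed time
delta = (event <= censor).astype(float)              # 1 = event observed

lam = delta.sum() / t.sum()          # exponential MLE of the hazard (censored)
cox_snell = lam * t                  # Cox-Snell residual = fitted cumulative hazard
martingale = delta - cox_snell
# Deviance residual: sign(m) * sqrt(-2*[m + delta*log(delta - m)]), delta - m = CS
deviance = np.sign(martingale) * np.sqrt(-2.0 * (martingale + delta * np.log(cox_snell)))

tree = DecisionTreeRegressor(max_depth=3).fit(x, cox_snell)  # or deviance
print("tree depth:", tree.get_depth())
```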

Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree

Procedia PDF Downloads 273
1675 Culture, Consumption, and Markets of Aesthetics: A 10-Year Literature Review

Authors: Chin-Hsiang Chu

Abstract:

This article reviews the literature at the intersection of marketing and aesthetics. In the current market, customer-oriented product sales have gradually shifted from practical functionality toward visual appearance and experiential substance, a trend often labeled the 'aesthetic economy'. How to introduce aesthetic concepts and differentiate products has therefore become an important part of marketing management for an organization. In previous studies, research related to marketing aesthetics has been rare. The purpose of this study is thus to explore the connection between aesthetics and marketing in the market economy and, by aggregating content through a literature review, to draw out research implications for the management of marketing aesthetics, market orientation, customer value, and product development. The review proceeds through the problem statement and background, the evolution of theory, and the methods and findings of the studies examined. The results show that: (1) the study of aesthetics helps deepen common understanding of shopping and service environments; (2) importing aesthetics into the perceived value of products increases consumers' willingness to buy, and premium products become more attractive; (3) marketing personnel show a high degree of identification with aesthetics in general marketing management; (4) within the connotation of marketing aesthetics, five aesthetic characteristics are greatly valued: immediacy, complexity, specificity, attractiveness, and richness; (5) through experiential processes that stimulate the senses, the mind, and thinking, consumers can form deeper links with the corporate brand. The results of this study can serve as a guide for new product development and design by businesses in competitive markets.

Keywords: marketing aesthetics, aesthetic economy, aesthetics, experiential marketing

Procedia PDF Downloads 260
1674 Laser Registration and Supervisory Control of neuroArm Robotic Surgical System

Authors: Hamidreza Hoshyarmanesh, Hosein Madieh, Sanju Lama, Yaser Maddahi, Garnette R. Sutherland, Kourosh Zareinia

Abstract:

This paper illustrates the concept of an algorithm to register specified markers on the neuroArm surgical manipulators, an image-guided MR-compatible tele-operated robot for microsurgery and stereotaxy. Two range-finding algorithms, namely time-of-flight and phase-shift, are evaluated for registration and supervisory control. The time-of-flight approach is implemented in a semi-field experiment to determine the precise position of a tiny retro-reflective moving object. The moving object simulates a surgical tool tip; the tool is a target that would be connected to the neuroArm end-effector during surgery inside the magnet bore of the MR imaging system. To apply the time-of-flight approach, a 905 nm pulsed laser diode and an avalanche photodiode are utilized as the transmitter and receiver, respectively. For the experiment, a high-frequency time-to-digital converter was designed using a field-programmable gate array (FPGA). In the phase-shift approach, a continuous green laser beam with a wavelength of 530 nm was used as the transmitter. Results showed a positioning error of 0.1 mm when the scanner-target distance was set in the range of 2.5 to 3 meters. The effectiveness of this non-contact approach showed that the method could be employed as an alternative to a conventional mechanical registration arm; furthermore, the approach is not limited by physical contact or by the extension of joint angles.
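
For orientation, here is a quick numeric sketch of the two range-finding principles named above: time-of-flight converts a round-trip time into distance, while phase-shift converts the phase delay of a modulated beam. The timing, phase, and modulation-frequency values are illustrative, not the paper's measurements.

```python
# Time-of-flight and phase-shift distance formulas on toy values.
import math

C = 299_792_458.0                      # speed of light (m/s)

# Time-of-flight: the pulse travels to the target and back.
t_round = 18.0e-9                      # measured round-trip time (s), assumed
d_tof = C * t_round / 2.0              # -> about 2.70 m
print(f"time-of-flight distance: {d_tof:.3f} m")

# Phase-shift: distance from the phase delay of a modulated CW beam.
f_mod = 30e6                           # modulation frequency (Hz), assumed
phi = 2.1                              # measured phase shift (rad), assumed
d_ps = (C / (2.0 * f_mod)) * (phi / (2.0 * math.pi))
print(f"phase-shift distance: {d_ps:.3f} m "
      f"(modulo the {C / (2 * f_mod):.2f} m ambiguity interval)")
```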

Keywords: 3D laser scanner, intraoperative MR imaging, neuroArm, real time registration, robot-assisted surgery, supervisory control

Procedia PDF Downloads 287
1673 Educational Sustainability: Teaching the Next Generation of Educators in Medical Simulation

Authors: Thomas Trouton, Sebastian Tanner, Manvir Sandher

Abstract:

The use of simulation in undergraduate and postgraduate medical curricula is ever-growing, is a useful addition to the traditional apprenticeship model of learning within medical education, and better prepares graduates for the team-based approach to healthcare seen in real-life clinical practice. As a learning tool, however, undergraduate medical students often have little understanding of the theory behind the use of medical simulation and have little experience in planning and delivering their own simulated teaching sessions. We designed and implemented a student-selected component (SSC) as part of the undergraduate medical curriculum at the University of Buckingham Medical School to introduce students to the concepts behind the use of medical simulation in education and allow them to plan and deliver their own simulated medical scenario to their peers. The SSC took place over a 2-week period in the 3rd year of the undergraduate course. There was a mix of lectures, seminars and interactive group work sessions, as well as hands-on experience in the simulation suite, to introduce key concepts related to medical simulation, including technical considerations in simulation, human factors, debriefing and troubleshooting scenarios. We evaluated the success of our SSC using Net Promoter Scores (NPS) to assess students’ confidence in planning and facilitating a simulation-based teaching session, as well as leading a debrief session. In all three domains, we showed an increase in the confidence of the students. We also showed an increase in confidence in the management of common medical emergencies as a result of the SSC. Overall, the students who chose our SSC had the opportunity to learn new skills in medical education, with a particular focus on the use of simulation-based teaching, and feedback highlighted that a number of students would take these skills forward in their own practice. We demonstrated an increase in confidence in several domains related to the use of medical simulation in education and have hopefully inspired a new generation of medical educators.

Keywords: simulation, SSC, teaching, medical students

Procedia PDF Downloads 125
1672 Critically Sampled Hybrid Trigonometry Generalized Discrete Fourier Transform for Multistandard Receiver Platform

Authors: Temidayo Otunniyi

Abstract:

This paper presents a low-computation channelization algorithm for multi-standard platforms using a polyphase implementation of a critically sampled hybrid trigonometry generalized discrete Fourier transform (HGDFT). The HGDFT channelization algorithm exploits the orthogonality of two trigonometric Fourier functions, together with the properties of the quadrature mirror filter bank (QMFB) and the exponentially modulated filter bank (EMFB), respectively. HGDFT shows improvement in implementation in terms of high reconfigurability, shorter filter length, parallelism, and moderate computational load. Type I and type III polyphase structures are derived for real-valued HGDFT modulation. The design specifications are critically decimated and oversampled for both single- and multi-standard receiver platforms. Evaluating the performance of oversampled single-standard receiver channels, the HGDFT algorithm achieved a 40% complexity reduction, compared to 34% and 38% reductions for the discrete Fourier transform (DFT) and tree quadrature mirror filter (TQMF) algorithms. The parallel generalized discrete Fourier transform (PGDFT) and recombined generalized discrete Fourier transform (RGDFT) achieved 41% complexity reduction, while HGDFT achieved 46% in oversampled multi-standard mode. In the critically sampled multi-standard receiver channels, HGDFT achieved a complexity reduction of 70%, while both PGDFT and RGDFT achieved 34%.
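
For readers unfamiliar with the polyphase machinery the paper builds on, below is a sketch of the standard critically sampled polyphase DFT channelizer: the prototype filter is split into M polyphase branches, a commutator feeds every M-th input sample to each branch, and an FFT across the branches extracts the channels. This is the plain DFT variant, not the authors' HGDFT.

```python
# Standard critically sampled polyphase DFT channelizer sketch.
import numpy as np
from scipy.signal import firwin

M = 8                                    # number of channels
h = firwin(M * 16, 1.0 / M)              # prototype low-pass, cutoff fs/(2M)
E = h.reshape(-1, M).T                   # polyphase branches: E[m] = h[m::M]

def channelize(x):
    x = x[: len(x) // M * M]
    branches = x.reshape(-1, M).T[::-1]  # commutator: branch m gets x[M-1-m::M]
    filtered = np.array([np.convolve(b, e)[: branches.shape[1]]
                         for b, e in zip(branches, E)])
    return np.fft.ifft(filtered, axis=0) * M   # FFT across branches -> channels

fs = 8000.0
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 2000.0 * t)       # real tone at +/-2000 Hz
y = channelize(x)
# Energy concentrates in the bins covering +/-2000 Hz (channels 2 and 6 here).
print(np.abs(y).mean(axis=1).round(2))
```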

Keywords: software defined radio, channelization, critical sample rate, over-sample rate

Procedia PDF Downloads 150
1671 Trends in Domestic Terms of Trade of Agricultural Sector of Pakistan

Authors: Anwar Hussain, Muhammad Iqbal

Abstract:

Changes in the prices of agricultural commodities, combined with changes in population and agricultural productivity, affect farmers’ profitability and standard of living. This study estimates various domestic terms of trade for the agriculture sector and assesses the volatility in farmers’ standard of living and profitability. The terms of trade were estimated for Pakistan and its provinces using producer price indices, consumer price indices, input price indices, and quantity indices for the period 1990-91 to 2008-09. The domestic terms of trade of the agriculture sector improved under both approaches, i.e., the ratio of producer price indices to consumer price indices and the real per capita income approach. The cross-province estimates indicated that the terms of trade also improved for Khyber Pakhtunkhwa, Sindh, and Punjab, while Balochistan’s domestic terms of trade deteriorated drastically. In other words, the standard of living of farmers in Pakistan and its provinces, except Balochistan, improved. Using input prices, however, the domestic terms of trade deteriorated for Pakistan as a whole and for its provinces, implying that farmers’ profitability declined over the study period: farmers pay more for their inputs than they receive for their produce. This further indicates that poverty at the grassroots level has increased. In sum, the standard of living of farmers improved but their profitability declined, which suggests that farmers do not rely entirely on farm income but also draw on other sources for their livelihood. The study supports subsidies on farm inputs so as to improve the profitability of farmers.
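
To make the two measures concrete, here is a tiny sketch of the terms-of-trade ratios described above, computed on made-up index numbers (base year = 100); these are not the study's data.

```python
# Two domestic terms-of-trade measures on illustrative index numbers.
import numpy as np

ppi = np.array([100, 108, 117, 131, 150], dtype=float)  # producer price index
cpi = np.array([100, 110, 122, 135, 149], dtype=float)  # consumer price index
ipi = np.array([100, 114, 128, 146, 170], dtype=float)  # input price index

barter_tot = 100 * ppi / cpi   # output vs. consumption prices (living standard)
input_tot = 100 * ppi / ipi    # output vs. input prices (profitability)

print("barter ToT:", barter_tot.round(1))   # > 100: living-standard gains
print("input  ToT:", input_tot.round(1))    # < 100: squeezed profitability
```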

Keywords: agricultural terms of trade, farmers’ profitability, farmers’ standard of living, consumer and producer price indices, quantity indices

Procedia PDF Downloads 466
1670 Simulation and Fabrication of Plasmonic Lens for Bacteria Detection

Authors: Sangwoo Oh, Jaewoo Kim, Dongmin Seo, Jaewon Park, Yongha Hwang, Sungkyu Seo

Abstract:

Plasmonics has been regarded as one of the most powerful bio-sensing modalities for evaluating bio-molecular interactions in real time. However, most plasmonic sensing methods are based on labeling with metallic nanoparticles, e.g., gold or silver, as optical modulation markers, which are non-recyclable and expensive. Plasmonic modulation can usually be achieved through various nanostructures, e.g., nano-hole arrays. Among those structures, the plasmonic lens has been regarded as unique due to its light-focusing characteristics. In this study, we introduce a custom-designed plasmonic lens array for bio-sensing, simulated by the finite-difference time-domain (FDTD) approach and fabricated by a top-down approach. We performed FDTD simulations of various plasmonic lens designs for bacteria sensing, i.e., Salmonella and Hominis, and optimized the design parameters of the plasmonic lens, i.e., radius, shape, and material. The simulation results showed the change in the peak intensity value with the introduction of each bacterium and antigen, i.e., a peak intensity of 1.8711 a.u. with the introduction of an antibody layer of 15 nm thickness. For Salmonella, the peak intensity changed from 1.8711 a.u. to 2.3654 a.u., and for Hominis, from 1.8711 a.u. to 3.2355 a.u. This significant intensity shift due to the interaction between bacteria and antigen demonstrates a promising sensing capability of the plasmonic lens. With batch processing and bulk production of this nanoscale design, the cost of biological sensing can be significantly reduced, holding great promise in the fields of clinical diagnostics and bio-defense.

Keywords: plasmonic lens, FDTD, fabrication, bacteria sensor, salmonella, hominis

Procedia PDF Downloads 270
1669 An MIPSSTWM-based Emergency Vehicle Routing Approach for Quick Response to Highway Incidents

Authors: Siliang Luan, Zhongtai Jiang

Abstract:

The risk of highway incidents is commonly recognized as a major concern for transportation authorities due to their hazardous consequences and negative influence. For emergency management decision makers, it is crucial to respond to these unpredictable events as quickly as possible. In this paper, we focus on path planning for emergency vehicles, one of the most significant processes for avoiding congestion and reducing rescue time. A Mixed-Integer Linear Programming with Semi-Soft Time Windows Model (MIPSSTWM) is formulated to plan an optimal route, considering the time consumption on both the arcs and the nodes of the urban road network and the highway network, which is especially relevant in developing countries with enormous populations. Here, the arcs represent road segments, and the nodes include the intersections of the urban road network and the on-ramps and off-ramps of the highway network. This research attempts to develop a comprehensive and executable strategy for emergency vehicle routing in heavy traffic conditions. The proposed Cuckoo Search (CS) algorithm, designed by imitating the obligate brood-parasitic behavior of cuckoos and Lévy flights (LF), is used to solve this hard combinatorial problem. Using a Chinese city as our case study, the numerical results demonstrate that the approach applied in this paper outperforms a previous method that does not consider the nodes of the road network in a real-world situation. Meanwhile, the CS algorithm also shows better accuracy and validity than the traditional algorithm.
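
Below is a minimal continuous Cuckoo Search with Mantegna-style Lévy flights, shown on a toy objective to illustrate the metaheuristic named above. Applying CS to the routing model requires a problem-specific discrete encoding, which is not reproduced here; all parameter values are illustrative.

```python
# Minimal Cuckoo Search with Levy flights (Yang & Deb style) on a toy function.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, rng, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=2, n=15, iters=300, pa=0.25, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n):                    # Levy flight around the best nest
            cand = nests[i] + 0.01 * levy_step(dim, rng) * (nests[i] - best)
            fc = f(cand)
            j = rng.integers(n)               # replace a random nest if better
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        worst = fit.argsort()[-int(pa * n):]  # abandon a fraction pa of nests
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

sphere = lambda x: float((x ** 2).sum())
print(cuckoo_search(sphere))   # best solution should approach the origin
```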

Keywords: emergency vehicle, path planning, cs algorithm, urban traffic management and urban planning

Procedia PDF Downloads 82
1668 Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory

Authors: Tooraj Karimi, Mohammadreza Sadeghi Moghadam

Abstract:

Grey set theory has the advantage of using fewer data to analyze many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, normally distributed data, and few variant factors. In this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyze energy consumption in buildings of the Oil Ministry in Tehran. This article analyzes the results of energy audit reports and defines the most favorable characteristic of the system, which is the energy consumption of buildings, and the most favorable factors affecting this characteristic, in order to modify and improve them. According to the results of the model, ‘the real building load coefficient’ was selected as the most important system characteristic, and ‘uncontrolled area of the building’ was diagnosed as the most favorable factor with the greatest effect on a building’s energy consumption. Grey clustering was used in this study for two purposes: first, to cluster all energy-audit variables of a building into two main groups of indicators, reducing the number of variables; second, grey clustering with variable weights was used to classify all buildings into three categories, named ‘no standard deviation’, ‘low standard deviation’, and ‘non-standard’. The entropy of the coefficient vector of grey evaluations was calculated to investigate the greyness of the results. It shows that, among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the standard group, 24 cases are in the ‘low standard deviation’ group, and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5, and the average uncertainty of the clustering results is 66%.

Keywords: energy audit, grey set theory, grey incidence matrixes, grey clustering, Iran oil ministry

Procedia PDF Downloads 374
1667 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage

Authors: M. Iommi, G. Losco

Abstract:

The current building sector is focused on reducing energy requirements, on renewable energy generation, and on regenerating existing urban areas. These targets need to be addressed with a systemic approach that considers several aspects simultaneously, such as climate conditions, lighting conditions, solar radiation, and PV potential. Solar access analysis is an established method for analyzing solar potential, but in recent years simulation tools have provided more effective opportunities to perform this type of analysis, particularly in the early design stage. Nowadays, the study of solar access depends on how easily simulation tools can be used, rapidly and simply, during the design process. This study presents a comparison of three simulation tools from the user’s point of view, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools (Ecotect, Townscope, and Heliodon) are tested by building models, running simulations, and examining the capabilities and output results of their solar access analyses. The evaluation of the ease of use of these tools is based on selected parameters and features, such as the types of simulation, input-data requirements, and types of results. As a result, a framework is provided that shows the features and capabilities of each tool and the differences among them in functions, features, and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools into the design process.

Keywords: energy building design tools, solar access analysis, solar potential, urban planning

Procedia PDF Downloads 342
1666 Maximizing the Role of Companion Teachers for the Achievement of Professional and Pedagogic Competencies in Workshop Activities of Teacher Profession Program Participants in the Faculty of Teaching and Education of Mulawarman University

Authors: Makrina Tindangen

Abstract:

The problems faced by participants of the teacher profession program in the Faculty of Teaching and Education of Mulawarman University concern professional and pedagogic competence. Professional competence relates to the mastery of teaching materials, while pedagogic competence relates to the ability to plan and implement learning. Based on these problems, the purpose of this research is to maximize the role of companion teachers in the achievement of professional and pedagogic competencies in the workshops for participants of teacher professional education in the Faculty of Teaching and Education of Mulawarman University. A qualitative research method with interview guides and documents was used to obtain in-depth data on how to maximize the role of companion teachers in achieving these competencies. The research was located at the Faculty of Teaching and Education of Mulawarman University, Samarinda City, East Kalimantan Province, and the respondents were 12 workshop facilitator teachers. Data were analyzed descriptively through interpretation of the interview data. The research concludes that the role of companion teachers in workshop activities is maximized through facilitation activities related to the real problems students face in school, so that workshop participants acquire professional and pedagogic competence as initial competencies before carrying out practical field-experience activities in school.

Keywords: companion teacher, professional and pedagogical competence, activities, workshop participants

Procedia PDF Downloads 189
1665 Implementation of the Quality Management System and Development of Organizational Learning: Case of Three Small and Medium-Sized Enterprises in Morocco

Authors: Abdelghani Boudiaf

Abstract:

The profusion of studies relating to the concept of organizational learning shows the importance that has been given to this concept in the management sciences. Some years ago, companies turned toward ISO 9001 certification, which requires the implementation of a quality management system (QMS). For this objective to be achieved, companies must have a set of skills, which pushes them to develop learning through continuous training. The results of empirical research have shown that implementation of a QMS in a company promotes the development of learning, and several types of learning are developed in this way. Given that skills development is normative in the context of a quality approach, companies are obliged to qualify and improve the skills of their human resources. Continuous training is the keystone of developing the necessary learning. To carry out continuous training, companies need to be able to identify their real needs by developing training plans based on well-defined engineering. The training process obviously goes through several stages: initially, training has a general aspect, focusing on topics and actions of a general nature; subsequently, it is carried out in a more targeted and precise way to accompany the evolution of the QMS and to support the changes decided at each step (changes of working method, practices, objectives, mentality, etc.). To address our research problem, we opted for a qualitative method. The case study method crosses several data collection techniques to explain and understand a phenomenon; three companies were studied as part of this research work using different data collection techniques related to this method.

Keywords: changing mentalities, continuing training, organizational learning, quality management system, skills development

Procedia PDF Downloads 110
1664 Evaluation of National Research Motivation Evolution with Improved Social Influence Network Theory Model: A Case Study of Artificial Intelligence

Authors: Yating Yang, Xue Zhang, Chengli Zhao

Abstract:

In the increasingly interconnected global environment brought about by globalization, it is crucial for countries to grasp, in a timely manner, the development motivations of other countries in relevant research fields and to seize development opportunities. Motivation, as the intrinsic driving force behind actions, is abstract in nature, making it difficult to measure and evaluate directly. Drawing on the ideas of social influence network theory, the research motivation of a country can be understood as the driving force behind the development of its science and technology sector, influenced simultaneously by the country itself and by other countries/regions. To address this issue, this paper improves upon Friedkin’s social influence network theory and applies it to describing motivation, constructing a dynamic alliance network and a hostility network centered on the United States and China, as well as a sensitivity matrix, to remotely assess changes in national research motivations under the influence of international relations. Taking artificial intelligence as a case study, the research reveals that the motivations of most countries/regions are declining, gradually shifting from a neutral attitude to a negative one. The motivation of the United States is hardly influenced by other countries/regions and remains at a high level, while the motivation of China has been consistently increasing in recent years. Comparing the results with real data shows that this model can reflect, to some extent, the trends in national motivations.
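
For reference, the classic Friedkin-Johnsen influence update that this line of work builds on is y(t) = A W y(t−1) + (I − A) y(0), where W is a row-stochastic influence network and A a diagonal susceptibility matrix. The sketch below iterates it on made-up numbers; the actor labels and values are illustrative only, not the paper's networks.

```python
# Friedkin-Johnsen influence dynamics on a toy 3-actor network.
import numpy as np

W = np.array([[0.6, 0.3, 0.1],     # row i: whom actor i listens to (rows sum to 1)
              [0.2, 0.7, 0.1],
              [0.1, 0.4, 0.5]])
A = np.diag([0.1, 0.5, 0.8])       # susceptibility to outside influence
y0 = np.array([0.9, 0.2, -0.1])    # initial motivations (illustrative)

y = y0.copy()
for _ in range(100):               # iterate to (near) equilibrium
    y = A @ W @ y + (np.eye(3) - A) @ y0
print(y.round(3))   # a stubborn actor (small A entry) keeps its initial stance
```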

Keywords: influence network theory, remote assessment, relation matrix, dynamic sensitivity matrix

Procedia PDF Downloads 68
1663 An Exploration of Why Insider Fraud Is the Biggest Threat to Your Business

Authors: Claire Norman-Maillet

Abstract:

Insider fraud, otherwise known as occupational, employee, or internal fraud, is a financial crime threat perpetrated by defrauding (or attempting to defraud) one’s current, prospective, or past employer; ‘employee’ covers anyone employed by the company, including board members and contractors. The Coronavirus pandemic has forced insider fraud into the spotlight, and it isn’t dimming. As the focus of most academics and practitioners has historically been on ‘external fraud’, insider fraud is often overlooked or not considered a real threat. However, since COVID-19 changed the working world, pushing most of us into remote or hybrid working, employers cannot easily keep an eye on what their staff are doing, which has led to a reliance on trust and transparency and, therefore, an increased risk of insider fraud perpetration. The objective of this paper is to explore why insider fraud is now the biggest threat to a business. To achieve the research objective, individuals working in the financial crime sector (as practitioners or consultants) attended semi-structured interviews with the researcher; the principal recruitment strategy was the researcher’s LinkedIn network. The main findings suggest that insider fraud has been ignored and rejected as a threat to a business, owing to a reluctance to admit that a colleague may perpetrate it. A positive of the Coronavirus pandemic is that it has forced insider fraud into a more prominent position, giving it greater importance on a business’ agenda and risk register. Although insider fraud has always been a possibility (and therefore a risk) within any business, it is very rare that a business has given it the attention it requires until now, if at all. The research concludes that insider fraud needs to be prioritised by all businesses, even ahead of external fraud, and provides advice on how a business can add new controls or enhance existing ones to mitigate the risk.

Keywords: insider fraud, occupational fraud, COVID-19, COVID, coronavirus, pandemic, internal fraud, financial crime, economic crime

Procedia PDF Downloads 66
1662 Positive Effect of Manipulated Virtual Kinematic Intervention in Individuals with Traumatic Stiff Shoulder: Pilot Study

Authors: Isabella Schwartz, Ori Safran, Naama Karniel, Michal Abel, Adina Berko, Martin Seyres, Tamir Tsoar, Sigal Portnoy

Abstract:

Virtual reality makes it possible to manipulate the patient’s perception, thereby providing a motivational addition to real-time biofeedback exercises. We aimed to test the effect of a manipulated virtual kinematic intervention on measures of active and passive range of motion (ROM), pain, and disability level in individuals with traumatic stiff shoulder. In a double-blinded study, patients with stiff shoulder following proximal humerus fracture and non-operative treatment were randomly divided into a non-manipulated feedback group (NM-group; N=6) and a manipulated feedback group (M-group; N=7). Shoulder ROM, pain, and Disabilities of the Arm, Shoulder and Hand (DASH) scores were tested at baseline and after the 6 sessions, during which the subjects performed shoulder flexion and abduction in front of a graphic visualization of the shoulder angle. The biofeedback provided to the NM-group was the actual shoulder angle, while the feedback provided to the M-group was manipulated so that 10° were constantly subtracted from the actual angle detected by the motion capture system. The M-group showed greater improvement in active flexion ROM, with a median and interquartile range of 197.1 (140.5-425.0), compared to 142.5 (139.1-151.3) for the NM-group (p=.046). The M-group also showed greater improvement in DASH scores, with a median and interquartile range of 67.7 (52.8-86.2), compared to 89.7 (83.8-98.3) for the NM-group (p=.022). Manipulated intervention is beneficial for individuals with traumatic stiff shoulder and should be further tested in other populations with orthopedic injuries.

Keywords: virtual reality, biofeedback, shoulder pain, range of motion

Procedia PDF Downloads 125
1661 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable hazard rate shapes is introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution contains a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed-bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set, comparing it with its sub-models (the Weibull, log-logistic, and Burr XII distributions) and with other 3-parameter parametric survival distributions, such as the exponentiated Weibull, the 3-parameter lognormal, the 3-parameter gamma, the 3-parameter Weibull, and the 3-parameter log-logistic (also known as shifted log-logistic) distributions. The proposed distribution provides a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, a Bayesian analysis is carried out, and the performance of Gibbs sampling on the data set is assessed.
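
As a hedged sketch of the maximum-likelihood step, the snippet below fits the classical two-parameter log-logistic (scipy's `fisk`) by minimizing the negative log-likelihood; the authors' generalized three-parameter likelihood is analogous but not reproduced here.

```python
# MLE for the two-parameter log-logistic via direct likelihood optimization.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import fisk   # scipy's name for the log-logistic

data = fisk.rvs(c=2.5, scale=3.0, size=500, random_state=0)  # synthetic lifetimes

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf                         # keep the search in the valid region
    return -fisk.logpdf(data, c=shape, scale=scale).sum()

res = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("MLE (shape, scale):", np.round(res.x, 3))   # should approach (2.5, 3.0)
```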

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 202
1660 Numerical Performance Evaluation of a Savonius Wind Turbines Using Resistive Torque Modeling

Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines

Abstract:

The Savonius vertical-axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation into the wind; however, the power it develops is lower than that of other wind turbine types such as the Darrieus. To increase its performance, several studies have been carried out on optimizing blade shape, using passive controls, and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing a User Defined Function (UDF) into the CFD model to account for the resisting torque. The UDF simulates the action of the wind on the rotor: it receives the moment coefficient as input and computes the rotational velocity to be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied to the turbine and on the resisting torque, which is modeled as a linear function. Linking the implemented UDF with the CFD solver allows the real operation of a Savonius turbine exposed to wind to be simulated. The turbine is observed to take some time to reach the stationary regime, in which the rotational velocity becomes constant; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one, and the obtained results are in agreement with the available experimental results.
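
The torque balance the UDF enforces can be sketched outside the solver as the ODE I·dω/dt = M_aero − M_resist, with M_resist linear in ω. The following is a conceptual Python sketch of that balance, not a solver UDF; the moment-coefficient curve and all constants are illustrative assumptions.

```python
# Euler integration of the rotor torque balance until the stationary regime.
I = 0.8                        # rotor moment of inertia (kg*m^2), assumed
R, rho, U = 0.5, 1.225, 8.0    # rotor radius (m), air density, wind speed (m/s)
A = 2 * R * 1.0                # swept area for a 1 m high rotor
c0, c1 = 0.05, 0.02            # resisting torque M_r = c0 + c1*omega (assumed)

def cm(tsr):                   # toy moment-coefficient curve vs tip speed ratio
    return max(0.0, 0.35 - 0.25 * tsr)

omega, dt = 0.1, 1e-3
for _ in range(200_000):       # march in time until omega stops changing
    tsr = omega * R / U
    m_aero = 0.5 * rho * A * R * U**2 * cm(tsr)
    m_res = c0 + c1 * omega
    omega += dt * (m_aero - m_res) / I

tsr = omega * R / U
cp = cm(tsr) * tsr             # power coefficient Cp = Cm * TSR
print(f"steady omega = {omega:.2f} rad/s, TSR = {tsr:.2f}, Cp = {cp:.3f}")
```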

Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances

Procedia PDF Downloads 157
1659 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in daily life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As the complexity of real visual target detection tasks and the required recognition accuracy increase, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of their inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network is comprehensively transferred to a lightweight network model. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the knowledge transferred to the student network comprises the soft-target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network’s hidden layers. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to bridge the large gap between them. Finally, this paper adds an exploration module to the traditional teacher-student distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
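
The sketch below shows the two most standard ingredients named above: soft-target distillation (temperature-scaled KL divergence) and an attention-transfer term on hidden feature maps. The M-KD specifics (layer relations, guidance layer, exploration module) are not reproduced; tensor shapes and weights are illustrative.

```python
# Soft-target + attention-transfer distillation losses in PyTorch.
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, T=4.0):
    # KL between temperature-softened distributions, scaled by T^2.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

def attention_map(feat):
    # Spatial attention: mean of squared activations over channels, normalized.
    a = feat.pow(2).mean(dim=1).flatten(1)   # (B, H*W), channel-count agnostic
    return F.normalize(a, dim=1)

def attention_loss(student_feat, teacher_feat):
    return (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()

# Toy usage with random tensors standing in for real network outputs.
s_logits, t_logits = torch.randn(8, 20), torch.randn(8, 20)
s_feat, t_feat = torch.randn(8, 64, 14, 14), torch.randn(8, 128, 14, 14)
loss = soft_target_loss(s_logits, t_logits) + 0.5 * attention_loss(s_feat, t_feat)
print(loss.item())
```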

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 278
1658 The Communist Party of China’s Approach to Human Rights and the Death Penalty in China since 1979

Authors: Huang Gui

Abstract:

The issues of human rights and the death penalty always draw attention from international scholars, critics, observers, activists, and Chinese scholars, and most of those examining these problems do so from a single legal or political perspective, while the real relationship between the Chinese political regime and legislation is often ignored. In accordance with the Constitution of the P.R.C., the Communist Party of China (CPC) plays a key role not merely in the political field but in legislation and law enforcement as well. The legislation therefore has to implement the party’s theory and outlook and realize the party’s policies, and this includes the death penalty system, even though it is only a concrete punishment system. With this in mind, and after introducing the relationship between the CPC and legislation, this paper explores the shifts in the CPC’s outlook on human rights and the changes in the death penalty system across different eras. In the Maoist era, the issue of human rights was rejected and treated as an exclusion zone, and the death penalty was unjustifiably imposed. Human rights were politically recognized and accepted in the Deng era, but the CPC had its own viewpoint on them: it emphasized national security and stability, individual human rights were not correspondingly and reasonably taken account of, and the death penalty was abused and deemed an important measure for controlling crime. In the post-Deng era, human rights have gradually developed and been recognized; the phrase ‘the state respects and protects human rights’ is contained in the Constitution of the P.R.C., and individual human rights are increasingly valued. However, the CPC still focuses on state security, development, and stability, and the individual right to life has not been valued as much as the right to subsistence. Although steps toward reforming the death penalty are being taken, there are still 46 crimes punishable by death. The CPC should change its outlook, pay more attention to the right to life, and try to abolish the death penalty de facto and de jure.

Keywords: criminal law, communist party of China, death penalty, human rights, China

Procedia PDF Downloads 417
1657 Intelligent Control of Bioprocesses: A Software Application

Authors: Mihai Caramihai, Dan Vasilescu

Abstract:

The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess was performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract, and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers, and the operating parameters were 37 °C, 1.2 atm, 250 rpm, and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in the qualitative and subjective manner perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic was designed. The specific objective is to present a fuzzy control approach based on human experts’ rules versus a modeling approach of cell growth based on experimental bioprocess data. Kinetic modeling may represent only a small number of bioprocesses in terms of overall biosystem behavior, while a fuzzy control system (FCS) can manipulate incomplete and uncertain information about the process, assuring high control performance, and provides an alternative to non-linear control, since it is closer to the real world. The need for such a control mechanism arises from the high degree of non-linearity and time variance of bioprocesses. BIOSIM, an originally developed software package, implements such a control structure. The simulation study showed that the fuzzy technique is quite appropriate for this non-linear, time-varying system compared with the classical control method based on an a priori model.
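
To illustrate the rule-based mechanism such a controller relies on, here is a minimal Mamdani-style sketch: triangular memberships, two expert rules, and centroid defuzzification. The variable names, universes, and rule table are illustrative, not those of BIOSIM.

```python
# Minimal Mamdani fuzzy inference: DO error in, substrate feed rate out.
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_feed_rate(do_error):
    """Map a dissolved-oxygen error (%) to a substrate feed rate (L/h)."""
    u = np.linspace(0.0, 10.0, 501)          # output universe: feed rate
    # Rule 1: if DO error is NEGATIVE (too little O2), feed LOW.
    w1 = tri(do_error, -30.0, -15.0, 0.0)
    # Rule 2: if DO error is POSITIVE (excess O2), feed HIGH.
    w2 = tri(do_error, 0.0, 15.0, 30.0)
    agg = np.maximum(np.minimum(w1, tri(u, 0.0, 2.0, 4.0)),
                     np.minimum(w2, tri(u, 6.0, 8.0, 10.0)))
    if agg.sum() == 0.0:
        return 5.0                           # neutral default when no rule fires
    return float((u * agg).sum() / agg.sum())  # centroid defuzzification

for e in (-20.0, 0.0, 20.0):
    print(e, "->", round(fuzzy_feed_rate(e), 2), "L/h")
```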

Keywords: intelligent control, fuzzy model, bioprocess optimization

Procedia PDF Downloads 327
1656 1-D Convolutional Neural Network Approach for Wheel Flat Detection for Freight Wagons

Authors: Dachuan Shi, M. Hecht, Y. Ye

Abstract:

With the trend of digitalization in railway freight transport, a large number of freight wagons in Germany have been equipped with telematics devices, commonly placed on the wagon body. A telematics device contains a GPS module for tracking and a 3-axis accelerometer for shock detection. Beyond these basic functions, it is desirable to use the integrated accelerometer for condition monitoring without any additional sensors. Wheel flats, a common type of failure on the wheel tread, cause large impacts on wagons and infrastructure as well as impulsive noise, and a large wheel flat may even cause safety issues such as derailments. In this sense, this paper proposes a machine learning approach for wheel flat detection using car body accelerations. Due to the suspension systems, the impulsive signals caused by wheel flats are damped significantly and can thus be buried in signal noise and disturbances, which makes detecting wheel flats from car body accelerations very challenging. The proposed algorithm therefore considers the envelope spectrum of the car body accelerations to eliminate the effect of noise and disturbances. Subsequently, a 1-D convolutional neural network (CNN), a well-known deep learning method, is constructed to automatically extract features in the envelope-frequency domain and perform classification. The constructed CNN is trained and tested on field test data measured on the underframe of a tank wagon with a wheel flat of 20 mm length under operational conditions. The test results demonstrate the good performance of the proposed algorithm for real-time fault detection.
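
The two stages described above can be sketched as follows: (1) the envelope spectrum of an acceleration signal via the Hilbert transform, and (2) a small 1-D CNN classifier over that spectrum. The architecture details (layer sizes, kernel widths) are assumptions, not the paper's network.

```python
# Envelope spectrum + tiny 1-D CNN classifier sketch.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import hilbert

def envelope_spectrum(accel, fs):
    env = np.abs(hilbert(accel))             # signal envelope
    env -= env.mean()                        # drop the DC component
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

class WheelFlatCNN(nn.Module):
    def __init__(self, n_bins):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(8, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(16 * (n_bins // 16), 2),   # healthy vs. wheel flat
        )

    def forward(self, x):                    # x: (batch, 1, n_bins)
        return self.net(x)

fs = 1000.0
accel = np.random.randn(4096)                # stand-in for a measured signal
freqs, spec = envelope_spectrum(accel, fs)
x = torch.tensor(spec[None, None, :], dtype=torch.float32)
print(WheelFlatCNN(n_bins=len(spec))(x).shape)   # -> torch.Size([1, 2])
```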

Keywords: fault detection, wheel flat, convolutional neural network, machine learning

Procedia PDF Downloads 131
1655 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach

Authors: Ju-Hong Lee, Yi-Lin Shieh

Abstract:

Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. Without implementing subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristics. The design problem of 2-D QMF banks with the PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This facilitates the design problem of the two-channel QMF banks by finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated based on the minimax phase approximation for the 1-D DAFs. A novel objective function is then derived to obtain an optimization for 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can be simply solved by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple and the designed 2-D QMF bank achieves perfect magnitude response and possesses satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much less design complexity as compared with the existing techniques.

Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm

Procedia PDF Downloads 226
1654 Research on Level Adjusting Mechanism System of Large Space Environment Simulator

Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng

Abstract:

A space environment simulator is a device for spacecraft testing. The KM8 large space environment simulator built in Tianjin Space City is the largest as well as the most advanced space environment simulator in China. A large deviation in spacecraft level will lead to abnormal operation of the spacecraft’s thermal control devices during thermal vacuum tests. In order to avoid thermal vacuum test failures, a level adjusting mechanism system was developed for the KM8 large space environment simulator as one of its most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model was established. Using data collected from level instruments and displacement sensors, stepping motors controlled by a PLC drive the four supporting legs in simultaneous movement. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which work for long periods under the vacuum, cold, and black environment of the KM8 simulator during thermal vacuum tests. Based on the above methods, the data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming of the level adjusting mechanism system were implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of new-generation spacecraft. The performance and technical indicators of this system, which provides important support for the development of spacecraft in China, are ahead of similar equipment elsewhere in the world.
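
For reference, a generic discrete PID loop of the kind used for the leg and instrument temperature control is sketched below; the gains, setpoint, and toy thermal plant model are all assumptions for illustration.

```python
# Generic discrete PID temperature loop with a toy first-order thermal plant.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: heater power warms the leg, the cold chamber (-60 degC) cools it.
pid = PID(kp=8.0, ki=0.5, kd=1.0, dt=1.0)
temp = -40.0                                   # start cold (degC)
for step in range(600):
    power = max(0.0, pid.update(20.0, temp))   # heater-only actuator
    temp += (0.02 * power - 0.05 * (temp - (-60.0))) * pid.dt
print(round(temp, 1))                          # settles near the 20 degC setpoint
```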

Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism

Procedia PDF Downloads 248
1653 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today’s age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices, which use different protocols, including TCP, UDP, and HTTP/S, to communicate with web servers and eventually with users. The data obtained from these devices may provide valuable information to users but is mostly in an unreadable format that needs to be processed to yield information and business intelligence. This data is not always current; it is mostly historical, and it is not subject to the consistency and redundancy measures that most other data usually is. Most importantly for users, the data must be pre-processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers use various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One commonly used technique is to pull data from the database server, process it, and push it back to the database server in one single step. Since processing the data usually takes some time, this keeps the database busy and locked for the duration of the processing, which decreases the overall performance of the database server and therefore of the system. This paper follows on from a paper discussing the performance increase that may be achieved by utilizing array lists together with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, to establish the impact on CPU, storage, and processing-time performance.
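
The three-step idea can be sketched as follows: snapshot the rows into a plain in-memory list, run the slow processing against that list only, then write everything back in one short transaction, so the table is never locked while processing runs. Table and column names below are illustrative.

```python
# Pull-process-push in three steps, using the standard-library sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, raw TEXT, decoded TEXT)")
conn.executemany("INSERT INTO readings (raw) VALUES (?)",
                 [(f"4021.5,N,{i}",) for i in range(1000)])

# Step 1 - PULL: snapshot the rows into a plain list and release the database.
rows = conn.execute("SELECT id, raw FROM readings").fetchall()

# Step 2 - PROCESS: the slow part runs against the in-memory list only.
def decode(raw):
    lat, hemi, seq = raw.split(",")       # stands in for real device decoding
    return f"{hemi}{float(lat) / 100:.4f}#{seq}"

processed = [(decode(raw), rid) for rid, raw in rows]

# Step 3 - PUSH: one short bulk write instead of a long-held lock.
with conn:
    conn.executemany("UPDATE readings SET decoded = ? WHERE id = ?", processed)
print(conn.execute("SELECT decoded FROM readings WHERE id = 1").fetchone())
```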

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 245
1652 Numerical Studies on 2D and 3D Boundary Layer Blockage and External Flow Choking at Wing in Ground Effect

Authors: K. Dhanalakshmi, N. Deepak, E. Manikandan, S. Kanagaraj, M. Sulthan Ariff Rahman, P. Chilambarasan C. Abhimanyu, C. A. Akaash Emmanuel Raj, V. R. Sanal Kumar

Abstract:

In this paper, using a validated double-precision, density-based implicit standard k-ε model, detailed 2D and 3D numerical studies have been carried out to examine external flow choking in wing-in-ground (WIG) effect craft. The CFD code is calibrated using the exact solution based on the Sanal flow choking condition for adiabatic flows. We observed that under identical WIG effect conditions, the numerically predicted 2D boundary layer blockage is significantly higher than in the 3D case; as a result, the airfoil exhibits earlier external flow choking than the corresponding wing, which is corroborated by the exact solution. We conclude that, in lieu of conventional 2D numerical simulation, it is invariably beneficial to carry out a realistic 3D simulation of the wing in ground effect, which is analogous to and captures the aspects of a real-time parametric flow. We infer that under identical flying conditions, the chances of external flow choking at WIG effect are higher for a conventional aircraft than for an aircraft featuring a divergent channel effect at the bottom surface of the fuselage, as proposed herein. We conclude that integrated geometry optimization of the fuselage and wings can improve the overall aerodynamic performance of WIG craft. This study is a pointer for designers and/or pilots to perceive the zone of danger a priori, due to the anticipated external flow choking in WIG effect craft, for safe flying in close proximity to terrain and the dynamic sea surface.

Keywords: boundary layer blockage, chord dominated ground effect, external flow choking, WIG effect

Procedia PDF Downloads 271
1651 Dual Set Point Governor Control Structure with Common Optimum Temporary Droop Settings for both Islanded and Grid Connected Modes

Authors: Deepen Sharma, Eugene F. Hill

Abstract:

For nearly 100 years, hydro-turbine governors have operated with only a frequency set point. This natural governor action means that the governor responds to disturbances in system frequency by changing megawatt output. Increasingly, power system managers are demanding that governors operate with constant megawatt output. One way of doing this is to introduce a second set point into the control structure, called a power set point. The control structure investigated and analyzed in this paper is unique in that it utilizes a power reference set point in addition to the conventional frequency reference set point. An optimum set of temporary droop parameters, derived from the turbine-generator inertia constant and the penstock water start time for stable islanded operation, is shown to be equally applicable for a satisfactory rate of generator loading in grid-connected mode, and a theoretical development shows why this is the case. The performance of the control structure has been investigated and established through simulation studies in MATLAB/Simulink as well as by testing the real-time controller performance on a 15 MW Kaplan turbine and generator, with recordings made using the LabVIEW data acquisition platform. The hydro-turbine governor control structure investigated in this paper thus eliminates the need for two separate sets of temporary droop parameters, one valid for islanded mode and the other for interconnected operation.
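
To illustrate how temporary droop settings depend on the water start time and the machine inertia, the sketch below evaluates the classical tuning rules quoted in standard power-system texts (e.g., Kundur). These textbook formulas are shown only to illustrate the dependence; the paper derives its own common settings, which are not reproduced here.

```python
# Classical temporary-droop tuning rules from Tw and Tm (illustrative only).
def temporary_droop_settings(tw, tm):
    """tw: penstock water start time (s); tm: mechanical starting time 2H (s)."""
    rt = (2.3 - (tw - 1.0) * 0.15) * tw / tm   # temporary droop (pu)
    tr = (5.0 - (tw - 1.0) * 0.5) * tw         # reset (dashpot) time constant (s)
    return rt, tr

rt, tr = temporary_droop_settings(tw=1.5, tm=9.0)   # assumed machine values
print(f"temporary droop Rt = {rt:.3f} pu, reset time Tr = {tr:.2f} s")
```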

Keywords: frequency set point, hydro governor, interconnected operation, isolated operation, power set point

Procedia PDF Downloads 367