Search results for: Extended Park's vector approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16482

15552 An Improved Approach Based on MAS Architecture and Heuristic Algorithm for Systematic Maintenance

Authors: Abdelhadi Adel, Kadri Ouahab

Abstract:

This paper proposes an improved approach based on a MAS architecture and a heuristic algorithm for systematic maintenance, with the objective of minimizing makespan. We implemented a metaheuristic problem-solving approach for optimizing processing time. The approach is inspired by the behavior of the human body: it hybridizes a multi-agent system with mechanisms drawn from human biology, especially genetics. To solve this complex problem, we used advanced operators such as uniform crossover and single-point mutation. The proposed approach is applied to three preventive maintenance policies, intended either to maximize availability or to maintain a minimum level of reliability along the production chain, under the assumption that machines may be periodically unavailable during production scheduling. The results show that our algorithm outperforms existing algorithms.
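The abstract names uniform crossover and single-point mutation as the genetic operators. A minimal sketch of how those two operators minimize makespan, using a plain elitist GA on a job-to-machine assignment encoding (not the paper's MAS hybrid; population size and generation count are illustrative):

```python
import random

def makespan(assign, times, n_machines):
    """Makespan = load of the busiest machine under a job->machine assignment."""
    loads = [0.0] * n_machines
    for job, m in enumerate(assign):
        loads[m] += times[job]
    return max(loads)

def ga_schedule(times, n_machines, pop_size=30, gens=200, seed=0):
    rng = random.Random(seed)
    n = len(times)
    pop = [[rng.randrange(n_machines) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(pop, 2)
            # uniform crossover: each gene is taken from either parent
            child = [p1[i] if rng.random() < 0.5 else p2[i] for i in range(n)]
            # single-point mutation: reassign one randomly chosen job
            child[rng.randrange(n)] = rng.randrange(n_machines)
            offspring.append(child)
        # elitist survival: keep the pop_size best of parents + offspring
        pop = sorted(pop + offspring,
                     key=lambda a: makespan(a, times, n_machines))[:pop_size]
    best = pop[0]
    return best, makespan(best, times, n_machines)
```

With four equal jobs on two machines the optimum (a balanced split) is found quickly.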

Keywords: multi-agent systems, emergence, genetic algorithm, makespan, systematic maintenance, scheduling, hybrid flow shop scheduling

Procedia PDF Downloads 302
15551 Food Traceability for Small and Medium Enterprises Using Blockchain Technology

Authors: Amit Kohli, Pooja Lekhi, Gihan Adel Amin Hafez

Abstract:

Blockchain is a distributed ledger technology that has extended to many fields and proved a remarkable success. It is a rapidly proliferating technology that can improve the traceability process in the food supply chain. While traceability is at the core of the food supply chain, a complex system is still needed to mitigate the exceptional risks of food contamination, foodborne illness, food waste, and food fraud. In addition, the upsurge in the variance and variety of food supply chain data in the traceability system calls for a completely transparent, secure, steadfast, sustainable, and efficient approach to the food supply chain's challenges. On the other hand, the technical aspects of blockchain, merged with a detailed implementation plan, and its advantages and challenges for food traceability have not been much elucidated for small and medium enterprises (SMEs). This paper demonstrates the advantages and challenges of applying blockchain in SMEs, combined with success stories of firms implementing blockchain, to cover this gap. Moreover, blockchain architecture in SMEs is presented, and it is shown how the technology-organization-environment framework can support successful blockchain implementation.
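The core traceability property the abstract relies on can be sketched in a few lines: a hash-linked chain of supply-chain records in which tampering with any stored record is detectable. This is only the ledger-integrity idea, not a full blockchain (no consensus or distribution):

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Append-only block: the hash covers both the record and the previous hash."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash and every link; any tampering breaks verification."""
    for i, blk in enumerate(chain):
        payload = json.dumps({"record": blk["record"], "prev": blk["prev"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != blk["hash"]:
            return False
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Each stage of the supply chain (farm, processor, retailer) appends one block, so a lot's history can be audited end to end.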

Keywords: blockchain technology, small and medium enterprises, food traceability, blockchain architecture

Procedia PDF Downloads 192
15550 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network

Authors: Ahmad Alwosheel, Ahmed Alqaraawi

Abstract:

This work proposes a new speaker segmentation approach for two speakers. It is an online approach that requires no prior information about speaker models. It has two phases: in the first, a conventional unsupervised BIC-based approach is used to detect speaker changes and to train a neural network; in the second, the trained network parameters are used to predict speaker changes in the incoming audio stream. Using this approach, accuracy comparable to that of similar BIC-based approaches is achieved, with a significant improvement in computation time.
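The BIC criterion used in the first phase can be sketched for one-dimensional features: model a window with one Gaussian versus one Gaussian per sub-segment, and hypothesise a change where the penalised likelihood gain is positive. Real systems use multivariate cepstral features; the penalty weight λ is the usual tuning knob:

```python
import math

def delta_bic(x, t, lam=1.0):
    """ΔBIC for a candidate change point t in a 1-D feature sequence x.
    Positive values favour a speaker change at t."""
    def n_log_var(seg):
        m = sum(seg) / len(seg)
        v = sum((s - m) ** 2 for s in seg) / len(seg)
        return len(seg) * math.log(max(v, 1e-12))
    n = len(x)
    penalty = lam * math.log(n)  # extra mean + variance for d = 1
    return 0.5 * (n_log_var(x) - n_log_var(x[:t]) - n_log_var(x[t:])) - penalty
```

Scanning t over a window and thresholding ΔBIC at zero gives the unsupervised change detector that supplies training labels for the network.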

Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation

Procedia PDF Downloads 505
15549 Utility, Satisfaction and Necessity of Urban Parks: An Empirical Study of Two Suburban Parks of Kolkata Metropolitan Area, India

Authors: Jaydip De

Abstract:

Urban parks are open places, green fields, and riverside gardens, usually maintained by public or private authorities, or by both jointly, and used for multidimensional purposes by citizens. These parks are the lungs of urban centres. In the urban socio-environmental setup, parks are the nucleus of social integration, community building, and physical development. In contemporary cities, these green places act as a panacea for congested, complex, and stressful urban life. The alarmingly increasing urban population and the resulting congestion of high-rises are making life wearisome in neo-liberal cities, leaving citizens in a constant quest for open space and fresh air. In such circumstances, the mere existence of parks cannot satisfy growing aspirations. Therefore, in this endeavour, a structured attempt is made to empirically identify utility, visitors' satisfaction, and future needs through the cases of two urban parks in the Kolkata Metropolitan Area, India. The study is principally based upon primary information collected through a visitors' perception survey conducted at the Chinsurah ground and the Chandernagore strand. The correlations between different utility categories are identified and analysed systematically. At the same time, indices such as the Weighted Satisfaction Score (WSS), Facility-wise Satisfaction Index (FSI), Urban Park Satisfaction Index (UPSI), and Urban Park Necessity Index (UPNI) are advocated to quantify visitors' satisfaction and future necessities. It is found that the most important utilities are passive in nature. Simultaneously, visitors' satisfaction levels are average, and their requirements centre on the daily needs of the next generation, i.e., the children. Further, considering visitors' opinions, planning measures are proposed for the holistic development of urban parks to revitalise the sustainability of citified life.
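The abstract names the indices (WSS, FSI, UPSI, UPNI) without defining them. Purely as an illustration of the kind of quantity involved, a Weighted Satisfaction Score could take the form of a weighted mean of Likert ratings normalised to [0, 1]; the paper's exact formulas may differ:

```python
def weighted_satisfaction_score(ratings, weights, scale_max=5):
    """Hypothetical WSS: weighted mean of visitors' Likert ratings
    (1..scale_max), normalised so that 1.0 means full satisfaction.
    Illustrative only; not the paper's published formula."""
    total_weight = sum(weights)
    return sum(w * r for w, r in zip(weights, ratings)) / (total_weight * scale_max)
```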

Keywords: citified life, future needs, visitors’ satisfaction, urban parks, utility

Procedia PDF Downloads 179
15548 Coupled Analysis for Hazard Modelling of Debris Flow Due to Extreme Rainfall

Authors: N. V. Nikhil, S. R. Lee, Do Won Park

Abstract:

The Korean peninsula receives about two thirds of its annual rainfall during the summer season. Extreme rainfall from typhoons and heavy storms results in severe mountain disasters, of which 55% are debris flows, a major natural hazard especially when they occur around major settlement areas. The basic mechanism underlying this kind of failure is unsaturated shallow slope failure: infiltrating water reduces matric suction, and the failed mass liquefies as positive pore water pressure is generated, leading to an abrupt loss of strength and the onset of flow. An empirical model alone cannot simulate this complex mechanism. Hence, we employed a combined empirical-physical approach for hazard analysis of debris flow using TRIGRS, a debris flow initiation criterion, and DAN3D at Mount Woonmyun, South Korea. The initiation criterion is required to discern which potential landslides can transform into debris flows. DAN3D, being a new model, lacks calibrated rheology parameter values for Korean conditions. Thus, in our analysis we used the 2011 debris flow event at Mount Woonmyun to calibrate both TRIGRS and DAN3D, thereafter identifying and predicting the debris flow initiation points, paths, runout velocities, and spreading areas for future extreme-rainfall scenarios.
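The failure mechanism described (infiltration raising pore pressure until strength is lost) can be illustrated with the standard infinite-slope factor of safety, the kind of relation a TRIGRS-style analysis evaluates cell by cell. The soil parameters below are illustrative only:

```python
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, u):
    """Infinite-slope factor of safety for a slip plane at depth z.
    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight (kN/m3),
    beta_deg: slope angle, u: pore-water pressure (kPa) on the slip plane.
    Rising u (loss of suction during infiltration) drives FS below 1."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    sigma_n = gamma * z * math.cos(beta) ** 2        # normal stress
    tau = gamma * z * math.sin(beta) * math.cos(beta)  # driving shear stress
    return (c + (sigma_n - u) * math.tan(phi)) / tau
```

For a steep thin soil mantle, a modest rise in pore pressure is enough to push the slope from stable (FS > 1) to failure (FS < 1).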

Keywords: debris flow, DAN-3D, extreme rainfall, hazard analysis

Procedia PDF Downloads 247
15547 Defining the Limits of No Load Test Parameters at Over Excitation to Ensure No Over-Fluxing of Core Based on a Case Study: A Perspective From Utilities

Authors: Pranjal Johri, Misbah Ul-Islam

Abstract:

Power transformers are among the most critical and failure-prone entities in an electrical power system. It is established practice that each design of a power transformer undergoes numerous type tests for design validation, and routine tests are performed on every power transformer before dispatch from the manufacturer's works. Different countries follow different standards for testing transformers; the most common and widely followed standard is the IEC 60076 series. Though these standards impose strict testing requirements, a few aspects of transformer characteristics and guaranteed parameters can only be ensured by additional tests. Based on observations during the routine test of a transformer and analysis of data from a large fleet of transformers, three propositions are put forward for inclusion in test schedules and standards. The observations in the routine test raised questions about the design flux density of the transformer. To ensure that the flux density in any part of the core and yoke does not exceed 1.9 tesla even at 1.1 pu, the following propositions should be followed during testing:
- From the data studied, the no-load current (NLC) at 1.1 pu is generally approx. 3 times the no-load current at 1 pu voltage.
- During testing, the power factor at 1.1 pu excitation must be comparable to values calculated from the cold-rolled grain-oriented steel material curves, including the building factor.
- A limit of 3% on the difference between Vavg and Vrms during no-load testing should be extended to higher-than-rated voltages.
- An extended over-excitation test should be performed if the above propositions are found to be violated during testing.
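The propositions translate into simple screening checks on routine-test data. The 3% Vavg/Vrms limit is the one stated above; treating "approx. 3 times" as a hard ratio limit is an assumption made for illustration:

```python
def no_load_flags(nlc_1pu, nlc_11pu, v_avg, v_rms,
                  nlc_ratio_limit=3.0, v_diff_limit=0.03):
    """Return the list of violated propositions for one no-load test.
    An empty list means the extended over-excitation test is not triggered.
    nlc_ratio_limit encodes the 'approx. 3x' observation as a hard band
    (an assumption); v_diff_limit is the stated 3% Vavg/Vrms limit."""
    flags = []
    if nlc_11pu / nlc_1pu > nlc_ratio_limit:
        flags.append("NLC at 1.1 pu exceeds ~3x the NLC at 1.0 pu")
    if abs(v_avg - v_rms) / v_rms > v_diff_limit:
        flags.append("Vavg-Vrms difference exceeds 3%")
    return flags
```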

Keywords: power transformers, no load current, DGA, power factor

Procedia PDF Downloads 104
15546 Task Based Language Learning: A Paradigm Shift in ESL/EFL Teaching and Learning: A Case Study Based Approach

Authors: Zehra Sultan

Abstract:

The study is based on the task-based language teaching (TBLT) approach, which has been found to be very effective in the EFL/ESL classroom. This approach engages learners in acquiring authentic language skills by interacting with the real world through a sequence of pedagogical tasks, and the use of technology enhances its effectiveness. The study sheds light on the historical background of TBLT and its efficacy in the EFL/ESL classroom. In addition, it describes the implementation of this approach in the General Foundation Programme (GFP) of Muscat College, Oman, and furnishes the list of pedagogical tasks embedded in the GFP language curriculum, which are skillfully aligned with the College Graduate Attributes. Moreover, the study discusses the challenges pertaining to this approach from the point of view of teachers, students, and classroom application. The operational success of the methodology is gauged through the formative assessments of the GFP, as apparent in the students' progress.

Keywords: task-based language teaching, authentic language, communicative approach, real world activities, ESL/EFL activities

Procedia PDF Downloads 126
15545 Behind Fuzzy Regression Approach: An Exploration Study

Authors: Lavinia B. Dulla

Abstract:

This exploration study of the fuzzy regression approach attempts to show that fuzzy regression can be used as a possible alternative to classical regression. It likewise seeks to assess the differences between simple linear regression and fuzzy regression using the width of the prediction interval, the mean absolute deviation, and the variance of residuals. Based on the simple linear regression model, the fuzzy regression approach is worth considering as an alternative when the sample size is between 10 and 20. As the sample size increases, the fuzzy regression approach is no longer appropriate, since the large-sample assumption is already operating within the framework of simple linear regression. Nonetheless, fuzzy regression can be suggested as a practical alternative when decisions often have to be made on the basis of small data.
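Two of the comparison criteria named above, mean absolute deviation and variance of residuals, are easy to compute for an ordinary least-squares fit; a minimal sketch (the fuzzy counterpart would come from Tanaka-style interval regression, which is omitted here):

```python
def ols_fit(x, y):
    """Slope and intercept of the simple least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def fit_metrics(x, y):
    """Mean absolute deviation and residual variance of the OLS fit,
    two of the criteria the study uses on small samples."""
    a, b = ols_fit(x, y)
    res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    mad = sum(abs(r) for r in res) / len(res)
    var = sum(r * r for r in res) / (len(res) - 2)  # unbiased, 2 params
    return mad, var
```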

Keywords: fuzzy regression approach, minimum fuzziness criterion, interval regression, prediction interval

Procedia PDF Downloads 302
15544 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction

Authors: Omer Cahana, Ofer Levi, Maya Herman

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan, mainly because the traditional sampling theorem sets a lower bound on the number of samples required. It is nevertheless possible to accelerate the scan with approaches such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While rapid advances in Deep Learning (DL) have brought tremendous success in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that easily extends to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed to build high-level refinement at all scales. Our method holds the state of the art on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.
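The incoherent-sampling condition can be seen in a toy pipeline: randomly undersample k-space, then form the zero-filled inverse-FFT baseline that learned reconstruction methods like the one described start from. This sketch uses a random pointwise mask for simplicity; clinical sampling masks differ:

```python
import numpy as np

def zero_filled_recon(image, keep_fraction=0.4, seed=0):
    """Randomly (incoherently) undersample the 2-D FFT of an image, then
    reconstruct by zero-filling the missing samples and inverting the FFT."""
    rng = np.random.default_rng(seed)
    k_space = np.fft.fft2(image)
    mask = rng.random(image.shape) < keep_fraction  # True = sample acquired
    return np.abs(np.fft.ifft2(k_space * mask)), mask
```

The residual aliasing in the zero-filled output is what the nonlinear (or learned) reconstruction has to remove.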

Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning

Procedia PDF Downloads 91
15543 A Novel Approach of Power Transformer Diagnostic Using 3D FEM Parametrical Model

Authors: M. Brandt, A. Peniak, J. Makarovič, P. Rafajdus

Abstract:

This paper deals with a novel approach to power transformer diagnostics. The approach identifies the exact location and extent of a fault in the transformer and helps to reduce the operating costs related to handling the faulty transformer, its disassembly, and repair. Its advantage is the possibility of simulating both a healthy transformer and all the faults that can occur during operation, without disassembling the transformer, which is very expensive in practice. The approach is based on building the frequency-dependent impedance of the transformer from sweep frequency response analysis measurements and on 3D FE parametric modeling of the fault. The parameters of the 3D FE model are the position and the extent of the axial short circuit. By comparing the frequency-dependent impedances of the parametric models with the measured ones, the location and extent of the fault are identified. The approach was tested on a real transformer and showed close agreement between the real fault and the simulated one.
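The comparison step can be sketched as a nearest-curve search over the bank of parametric simulations, where the parameters are the fault position and extent. The curves below are made-up stand-ins for frequency-dependent impedance magnitudes:

```python
def locate_fault(measured, simulated_bank):
    """Return the parameters of the simulated frequency-dependent impedance
    curve closest to the measured one, by least-squares distance."""
    best_params, best_err = None, float("inf")
    for params, curve in simulated_bank.items():
        err = sum((c - m) ** 2 for c, m in zip(curve, measured))
        if err < best_err:
            best_params, best_err = params, err
    return best_params
```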

Keywords: transformer, parametrical model of transformer, fault, sweep frequency response analysis, finite element method

Procedia PDF Downloads 483
15542 Liminality in Early Career Academic Identities: A Life History Approach

Authors: C. Morris, W. Ashall, K. Telling, L. Kadiwal, J. Kirby, S. Mwale

Abstract:

This paper addresses experiences of liminality in the early career phase of academia. Liminality is understood as the process of moving from one state (here, being non-academic) to another (being academic), caught between, or moving in and out of, these modes of being. Drawing on life-history methods, a group of academics jointly reflected on their early-career experiences. Focusing primarily on the theme of imposter syndrome at this career stage, the authors identified feelings of non-belonging and lack of fit with the academy, tracing the biographical, political, and affective dimensions of such responses. Uncertainty about status within seemingly impermeable hierarchies, and barriers to progression, in combination with the authors' intersectional positionings shaped by sexism, racism, ableism, and classism, led to experiences of liminality: not yet having fully achieved the desired, and potentially illusory, status of established academic. Findings are contextualised within the authors' contrasting disciplinary, departmental, and institutional settings against a backdrop of neoliberalised academia. The paper thereby contributes nuanced understandings of early-career academic identities at a time when this career stage is ever more ill-defined, extended, precarious, and uncertain, exposing the ongoing impacts of inequities in the contemporary academic milieu.

Keywords: early career, identities, intersectionality, liminality

Procedia PDF Downloads 118
15541 The Role of Eclectic Approach to Teach Communicative Function at Secondary Level

Authors: Fariha Asif

Abstract:

The main purpose of this study was to investigate the effectiveness of the eclectic approach in the teaching of communicative functions. The objectives were to gather information about the use of communicative functions through the eclectic approach and to identify the most effective way of teaching functional communication and social interaction with the help of communicative activities. A sample was then selected from the target population. As the research was descriptive, a questionnaire was developed on the basis of the hypotheses and distributed to selected schools in Lahore, Pakistan. The data were then tabulated, analysed, and interpreted by computing the percentages of the responses given by teachers. It was concluded that the eclectic approach is effective in teaching communicative functions, that communicative functions are better taught through the eclectic approach, and that communicative activities are the more appropriate way of teaching them. Teachers qualified in ELT gave more favourable opinions than those who did not hold such a degree. Techniques such as presentations, dialogues, and role-play proved effective for teaching functional communication through communicative activities and also motivated students not only to learn rules but to use them to communicate with others.

Keywords: methodology, functions, teaching, ESP

Procedia PDF Downloads 569
15540 Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory

Authors: Tooraj Karimi, Mohammadreza Sadeghi Moghadam

Abstract:

Grey set theory has the advantage of using fewer data to analyse many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, a normal distribution of the data, and few variant factors. In this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyse energy consumption in buildings of the Oil Ministry in Tehran. The article analyses the results of energy audit reports and defines the most important system characteristic, the energy consumption of buildings, together with the most influential factors affecting it, in order to modify and improve them. According to the results of the model, 'the real Building Load Coefficient' was selected as the most important system characteristic, and 'uncontrolled area of the building' was diagnosed as the factor with the greatest effect on the building's energy consumption. Grey clustering was used for two purposes: first, all the energy-audit variables of a building were clustered into two main groups of indicators, reducing the number of variables; second, grey clustering with variable weights was used to classify all buildings into three categories, named 'no standard deviation', 'low standard deviation', and 'non-standard'. The entropy of the coefficient vector of grey evaluations was calculated to investigate the greyness of the results. It shows that among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the 'no standard deviation' group, 24 cases are in the 'low standard deviation' group, and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5, and the average uncertainty of the clustering results is 66%.
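The greyness figure quoted (clustering greyness below 0.5 for 13 buildings) can be read as a normalised entropy of each building's grey-evaluation coefficient vector: 0 means a crisp assignment to one cluster, 1 means maximal greyness. A sketch of that quantity, assuming this interpretation:

```python
import math

def clustering_greyness(coeffs):
    """Normalised Shannon entropy of a grey-evaluation coefficient vector."""
    total = sum(coeffs)
    probs = [c / total for c in coeffs]
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs))
```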

Keywords: energy audit, grey set theory, grey incidence matrixes, grey clustering, Iran oil ministry

Procedia PDF Downloads 374
15539 An Approach from Fichte as a Response to the Kantian Dualism of Subject and Object: The Unity of the Subject and Object in Both Theoretical and Ethical Possibility

Authors: Mengjie Liu

Abstract:

This essay aims to respond to Kant's arguments on how to fit the self-caused subject into the deterministic object, which follows natural laws. It mainly adopts an approach abstracted from Fichte's Wissenschaftslehre (Doctrine of Science) to picture a possible solution to the reconciliation of Kantian dualism. The Fichtean approach is based on the unity of theoretical and practical reason, which can be understood as a philosophical abstraction from ordinary experience combining both subject and object. The first part of the essay discusses the general problem of Kantian dualism and Fichte's unity approach. The second section elaborates on the achievement of this unity of subject and object through Fichte's process of 'the I posits itself'. The third section concerns the ethical unity of subject and object on the Fichtean approach. The essay also discusses the limitations of Fichte's approach from two perspectives: (1) the theoretical possibility of the existence of the pure I, and (2) Schelling's statement that the Absolute I is a result rather than the originating act. The essay thus demonstrates a possible approach to unifying subject and object, supported by Fichte's 'Absolute I' and his ethical theories, while also pointing out the limitations of Fichte's theories.

Keywords: Fichte, identity, Kantian dualism, Wissenschaftslehre

Procedia PDF Downloads 94
15538 Investment Projects Selection Problem under Hesitant Fuzzy Environment

Authors: Irina Khutsishvili

Abstract:

In the present research, a decision support methodology for the multi-attribute group decision-making (MAGDM) problem is developed, namely for the selection of investment projects. The objective of the investment project selection problem is to choose the best project among a set of projects seeking investment, or to rank all projects in descending order. The selection is made considering a set of weighted attributes, which are evaluated using expert assessments. In the proposed methodology, linguistic expressions (linguistic terms) given by the experts are used as the initial attribute evaluations, since they are the most natural and convenient representation of experts' judgments. The linguistic evaluations are then converted into trapezoidal fuzzy numbers, and the aggregate trapezoidal hesitant fuzzy decision matrix is built. The case is considered in which information on the attribute weights is completely unknown. The attribute weights are identified based on the De Luca and Termini information entropy concept, determined in the context of hesitant fuzzy sets. The decisions are made using the extended Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) method under a hesitant fuzzy environment. Hence, the methodology is based on a trapezoidal-valued hesitant fuzzy TOPSIS decision-making model with entropy weights. The ranking of alternatives is performed by the proximity of their distances to both the fuzzy positive-ideal solution (FPIS) and the fuzzy negative-ideal solution (FNIS), using the weighted hesitant Hamming distance. An example of investment decision-making is given that clearly explains the procedure of the proposed methodology.
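A crisp simplification of the pipeline above (entropy-based attribute weights followed by TOPSIS closeness ranking) can be sketched directly; the hesitant-fuzzy machinery and the weighted hesitant Hamming distance are omitted, and all attributes are treated as benefit-type:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based weights: attributes whose values vary more across the
    alternatives carry more weight (crisp stand-in for the De Luca-Termini
    hesitant-fuzzy version). X must be a positive (alternatives x attributes)
    matrix."""
    P = X / X.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])  # entropy per column
    d = 1.0 - E                                            # degree of diversity
    return d / d.sum()

def topsis_rank(X, weights):
    """Classic TOPSIS closeness coefficient: distance to the negative-ideal
    solution divided by the sum of distances to both ideals."""
    V = weights * X / np.linalg.norm(X, axis=0)  # weighted normalised matrix
    pis, nis = V.max(axis=0), V.min(axis=0)      # positive / negative ideals
    d_pos = np.linalg.norm(V - pis, axis=1)
    d_neg = np.linalg.norm(V - nis, axis=1)
    return d_neg / (d_pos + d_neg)
```

A higher closeness coefficient means the project lies nearer the ideal and farther from the anti-ideal, so projects can be ranked by sorting the scores.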

Keywords: multi-attribute group decision-making, hesitant fuzzy sets, entropy weights, TOPSIS, investment project selection

Procedia PDF Downloads 118
15537 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising common carotid artery (CCA) B-mode ultrasound images by a decomposition approach to curvelet thresholding, together with automatic segmentation of the intima-media thickness and the adventitia boundary. Through decomposition, the local geometry of the image and its gradient directions are well preserved. The components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to inherently remove speckle noise in the image. The denoised image is segmented by active contours without specifying seed points; combined with level set theory, these provide subregions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images: a curve or surface is evolved under constraints so that it is pulled toward the necessary features of the image. Region-based and boundary-based information are integrated to obtain the contour. The method treats multiplicative speckle noise in both objective and subjective quality measurements and thus leads to better segmentation results. The proposed denoising method gives better performance metrics than other state-of-the-art denoising algorithms.
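Transform-domain thresholding, the idea behind the curvelet step, can be sketched with a 1-D FFT stand-in: keep only the largest coefficients and invert. Curvelets additionally capture directional geometry, which a plain Fourier basis does not:

```python
import numpy as np

def threshold_denoise(signal, keep=0.05):
    """Hard-threshold denoising in a transform domain: keep only the largest
    keep-fraction of coefficients by magnitude, zero the rest, and invert."""
    coeffs = np.fft.fft(signal)
    cutoff = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return np.real(np.fft.ifft(coeffs))
```

Because a smooth structure concentrates into few coefficients while noise spreads over all of them, thresholding discards most of the noise energy.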

Keywords: curvelet, decomposition, levelset, ultrasound

Procedia PDF Downloads 343
15536 Pressure Angle and Profile Shift Factor Effects on the Natural Frequency of Spur Tooth Design

Authors: Ali Raad Hassan

Abstract:

In this paper, an irregular case relating to the base circle, root circle, and pressure angle is discussed, and a computer programme has been developed to simulate and plot the spur gear tooth profile, including the involute and trochoid curves, based on the formulation of the rack cutter for different values of pressure angle and profile shift factor; the programme also yields the values of all important geometric parameters. The results showed the flexibility of this approach and the versatility of the programme in drawing many different cases of spur gear teeth for any module, pressure angle, profile shift factor, number of teeth, and rack cutter tip radius. The procedure can be extended to produce finite element models of heretofore intractable geometrical forms and to explore the fabrication of nonstandard tooth forms. Finite element models of these irregular cases have been built using the above programme, modal analysis has been carried out using ANSYS software, and the natural frequencies of the selected cases have been obtained and discussed.
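The involute portion of the profile follows directly from the base-circle parametrisation; a minimal point generator (the trochoid root fillet produced by the rack-cutter tip is omitted, and the roll-angle range is illustrative):

```python
import math

def involute_points(base_radius, n_points=50, max_roll_deg=40):
    """Points on the involute of a base circle of radius rb:
    x = rb(cos t + t sin t), y = rb(sin t - t cos t),
    with roll angle t in radians from 0 to max_roll_deg."""
    t_max = math.radians(max_roll_deg)
    pts = []
    for i in range(n_points):
        t = t_max * i / (n_points - 1)
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts
```

A useful check: the radius to any involute point is rb·sqrt(1 + t²), so the curve starts on the base circle and sweeps outward toward the tooth tip.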

Keywords: involute, trochoid, pressure angle, profile shift factor, natural frequency

Procedia PDF Downloads 272
15535 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of a park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic, and determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated generally and descriptively, and research outsourced to specialists is expensive and time-consuming, so most of the attention goes to examining a few selected places. Practice has shown that choosing the locations of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results: the facilities are then not used as expected. Location methods are also widely taken up in the scientific literature, but the mathematical models built often do not treat the problem comprehensively, e.g. assuming that the city is linear and developed along one important transport corridor. This paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even a less experienced person, e.g. an urban planner or official, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of P&R facility locations in cities planning to introduce P&R. The analysis of existing facilities is also presented and confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model that was built and described in more detail in an earlier paper by the authors.
The results of the analyses are compared with the P&R facility location studies outsourced by the city and with the opinions of existing facilities' users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method thus proves sound without requiring the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to propose alternative locations for P&R facilities, and it can be applied in urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are no worse than the results of analyses by employed experts. The advantage of the method is its ease of use, which simplifies professional expert analysis, and the ability to analyse a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis of a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
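A toy version of the inference step, with made-up triangular memberships and a single rule ("far enough from the centre AND frequent transit implies a good site"); the authors' actual rule base and membership functions are in their earlier paper:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x = b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pr_site_score(dist_to_centre_km, buses_per_hour):
    """Illustrative fuzzy rule: degree to which a candidate P&R site is both
    far enough from the city centre and well served by transit.
    min() implements the fuzzy AND; breakpoints are made up."""
    far_enough = tri(dist_to_centre_km, 3.0, 8.0, 20.0)
    frequent = tri(buses_per_hour, 2.0, 8.0, 15.0)
    return min(far_enough, frequent)
```

Evaluating such a score over many candidate sites in seconds is what makes the large-scale screening described above practical.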

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 325
15534 Competition, Stability, and Economic Growth: A Causality Approach

Authors: Mahvish Anwaar

Abstract:

Research Question: In this paper, we explore the causal relationship between banking competition, banking stability, and economic growth. Research Findings: Unbalanced panel data covering 2000 to 2018 are used to analyze causality among banking competition, banking stability, and economic growth. The main focus of the study is the direction of causality among the selected variables. The results support the demand-following, supply-leading, feedback, and neutrality hypotheses, conditional on the measures chosen for banking competition, banking stability, and economic growth. Theoretical Implication: Jayakumar, Pradhan, Dash, Maradana, and Gaurav (2018) proposed a theoretical model of the causal relationship between banking competition, banking stability, and economic growth using different indicators; we test those indicators empirically. The study contributes to the literature by characterizing this relationship separately for developing and developed countries. Policy Implications: The findings can help investors manage their finances and can guide government agencies toward suitable policies by showing how an economy can grow in relation to its financial sector.
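The Granger-causality logic behind the study's vector auto-regression can be illustrated in miniature: x "Granger-causes" y if adding lagged x to an autoregression of y significantly reduces the residual sum of squares. The one-lag, no-intercept sketch below uses synthetic series, not the paper's panel data.

```python
# Pure-Python sketch of a one-lag Granger-causality check on
# synthetic data (the paper's panel data and lag orders are richer).
import math

def ols2(z, u, v):
    """Least squares for z ~ b*u + c*v (no intercept), via Cramer's rule."""
    suu = sum(a * a for a in u)
    svv = sum(a * a for a in v)
    suv = sum(a * c for a, c in zip(u, v))
    szu = sum(a * c for a, c in zip(z, u))
    szv = sum(a * c for a, c in zip(z, v))
    det = suu * svv - suv * suv
    return (szu * svv - szv * suv) / det, (suu * szv - suv * szu) / det

# Synthetic data: y is driven by lagged x, so x should Granger-cause y.
n = 60
x = [math.sin(0.3 * t) + 0.1 * t for t in range(n)]
y = [0.0] * n
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.01 * (-1) ** t

yt, ylag, xlag = y[1:], y[:-1], x[:-1]

# Restricted model: y_t ~ b * y_{t-1}
b_r = sum(a * c for a, c in zip(yt, ylag)) / sum(a * a for a in ylag)
rss_r = sum((a - b_r * c) ** 2 for a, c in zip(yt, ylag))

# Unrestricted model: y_t ~ b * y_{t-1} + c * x_{t-1}
b_u, c_u = ols2(yt, ylag, xlag)
rss_u = sum((a - b_u * p - c_u * q) ** 2 for a, p, q in zip(yt, ylag, xlag))

# F statistic for the single added regressor.
f_stat = (rss_r - rss_u) / (rss_u / (len(yt) - 2))
```

A large F statistic rejects the null that lagged x adds nothing; running the same test in both directions is what distinguishes the demand-following, supply-leading, feedback, and neutrality cases the abstract lists.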

Keywords: competition, stability, economic growth, vector auto-regression, granger causality

Procedia PDF Downloads 64
15533 Methodological Aspect of Emergy Accounting in Co-Production Branching Systems

Authors: Keshab Shrestha, Hung-Suck Park

Abstract:

Emergy accounting of systems networks is guided by a definite set of rules called 'emergy algebra'. Systems networks contain two types of branching, co-product branching and split branching, and the emergy accounting procedure differs between them. According to emergy algebra, each branch in a co-product branching has a different transformity value, whereas the branches of a split share the same transformity value. Once the transformity of each branch is determined, its emergy is calculated by multiplying transformity by energy. The aim of this research is to resolve the problems of determining transformity values in co-product branching by introducing a new methodology, the modified physical quantity method. The existing methodologies for emergy accounting in co-product branching are discussed first, and the modified physical quantity method is then introduced with a case study of Eucalyptus pulp production. The existing methodologies rest on misinterpretations that lead to incorrect emergy calculations. The modified physical quantity method resolves these problems: the transformity value calculated for each branch is distinct and directly applicable in emergy calculations, and the methodology strictly follows the rules of emergy algebra. The new modified physical quantity methodology is therefore a valid approach to emergy accounting, particularly in multi-production systems networks.
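The contrast between the two branching rules can be made concrete with a small numerical sketch. The figures are illustrative, not taken from the Eucalyptus pulp case study.

```python
# Numerical sketch of the two emergy-algebra branching rules the
# abstract contrasts. Input figures are illustrative only.

def coproduct_transformities(total_emergy_sej, branch_energies_j):
    """Co-product branching: every branch carries the TOTAL input
    emergy, so transformities (sej/J) differ between branches."""
    return [total_emergy_sej / e for e in branch_energies_j]

def split_transformities(total_emergy_sej, branch_energies_j):
    """Split branching: emergy is assigned in proportion to energy,
    so all branches share one transformity."""
    total_energy = sum(branch_energies_j)
    return [total_emergy_sej / total_energy for _ in branch_energies_j]

total_emergy = 1000.0     # sej entering the process
energies = [200.0, 50.0]  # J leaving on the two branches

co = coproduct_transformities(total_emergy, energies)  # differs per branch
sp = split_transformities(total_emergy, energies)      # one shared value
```

With these inputs the co-product branches get transformities of 5 and 20 sej/J while the split branches both get 4 sej/J, which is exactly the distinction the accounting procedure must respect.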

Keywords: co-product branching, emergy accounting, emergy algebra, modified physical quantity method, transformity value

Procedia PDF Downloads 293
15532 Land Rights, Policy and Cultural Identity in Uganda: Case of the Basongora Community

Authors: Edith Kamakune

Abstract:

As much as indigenous rights are presumed to be part of the broad human rights regime, members of indigenous communities have continually suffered violations, exclusion, and threats. The international community has taken a number of steps to bridge this gap, including dedicated provisions and the adoption of conventions and declarations on the rights of indigenous peoples. Examples of indigenous peoples include the Siberian Yupik of St Lawrence Island; the Ute of Utah; the Cree of Alberta; and the Xosa and KhoiKhoi of Southern Africa. Uganda's wide cultural heritage has contributed to the failure to pay special attention to the rights of indigenous peoples. The 1995 Constitution and the Land Act of 1998 provide for abstract land rights without necessarily addressing the special needs of indigenous communities. The Basongora are a pastoralist community in Western Uganda whose ancestral land comprises the present Queen Elizabeth National Park in Western Uganda, Virunga National Park in the eastern Democratic Republic of Congo, and a small share of the lowlands beneath the Rwenzori Mountains. Their values and livelihood are embedded in a strong attachment to this land, which has been at stake for roughly the last 90 years. This research investigated the relationship between land rights and the right to cultural identity among indigenous communities, examining the available policy on land and culture, whether that policy is sensitive to the specific issues of vulnerable ethnic groups, and, more broadly, the effect of land on the right to cultural identity. The research was guided by three objectives: to examine and contextualize the concept of land rights in the Basongora community; to assess the policy framework available for the protection of the Basongora community; and to investigate the forms of vulnerability the Basongora community faces.
Quantitative and qualitative methods were used, with Kasese and Kampala Districts purposefully selected as cases. A total of 138 people were recruited through random and non-random techniques: 70 questionnaire respondents, 20 face-to-face interview respondents, 5 key informants, and 43 participants in focus group discussions. The study established that land is communally held and used and remains a central source of livelihood for the Basongora, and that land rights are important for the multiplication of herds and for the preservation, development, and promotion of culture and language. The research found gaps in the policy framework: the policies are concerned mainly with tenure issues, and the general provisions are ambiguous. The Basongora are often not called upon to participate in decision-making processes, even on issues that affect them. The findings call on the authorities to allow the Basongora access to Queen Elizabeth National Park land for pasture during particular seasons of the year, especially the dry seasons; for a land use policy; and for a clear alignment of the description of indigenous communities under the Constitution (Uganda, 1995) with the international definition.

Keywords: cultural identity, land rights, protection, uganda

Procedia PDF Downloads 157
15531 A Review of Research on Pre-training Technology for Natural Language Processing

Authors: Moquan Gong

Abstract:

In recent years, with the rapid development of deep learning, pre-training technology for natural language processing has made great progress. The field long relied on word vector methods such as Word2Vec to encode text; these can be regarded as static pre-training techniques. However, such context-free text representations bring only limited improvement to downstream natural language processing tasks and cannot resolve word polysemy. ELMo introduced a context-sensitive text representation method that handles polysemy effectively. Since then, pre-trained language models such as GPT and BERT have been proposed in quick succession. BERT in particular significantly improved performance on many typical downstream tasks, greatly advancing the field and ushering in the era of dynamic pre-training. A large number of pre-trained language models based on BERT and XLNet have continued to emerge, and pre-training has become an indispensable mainstream technology in natural language processing. This article first gives an overview of pre-training technology and its development history; it then introduces in detail the classic pre-training techniques in the field, covering both early static pre-training and classic dynamic pre-training; next, it briefly surveys a series of follow-up techniques, including improved models based on BERT and XLNet; on this basis, it analyzes the problems facing current pre-training research; finally, it looks forward to the future development trends of pre-training technology.

Keywords: natural language processing, pre-training, language model, word vectors

Procedia PDF Downloads 60
15530 Artificial Intelligence Based Predictive Models for Short Term Global Horizontal Irradiation Prediction

Authors: Kudzanayi Chiteka, Wellington Makondo

Abstract:

The whole world is on a drive to go green owing to the negative effects of burning fossil fuels, so there is an immediate need to identify and utilise alternative renewable energy sources. Among these, solar energy is one of the most dominant in Zimbabwe. Solar power plants used to generate electricity depend entirely on solar radiation, so for planning purposes, solar radiation values should be known in advance to allow arrangements that minimise the effects of reduced solar radiation due to cloud cover and other naturally occurring phenomena. This research focused on predicting the Global Horizontal Irradiation (GHI) value for a sixth day given the values of the preceding five days. Artificial intelligence techniques were used. Three models were developed, based on Support Vector Machines, Radial Basis Function networks, and a Feed-Forward Back-Propagation Artificial Neural Network. Results revealed that Support Vector Machines give the best results, with a mean absolute percentage error (MAPE) of 2%, a Mean Absolute Error (MAE) of 0.05 kWh/m²/day, a root mean square error (RMSE) of 0.15 kWh/m²/day, and a coefficient of determination of 0.990. The other predictive models had MAPEs of 4.5% and 6%, respectively, for the Radial Basis Function and Feed-Forward Back-Propagation networks, with coefficients of determination of 0.975 and 0.970. It was found that predicting GHI values for future days is possible using artificial intelligence-based predictive models.
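The error metrics the abstract reports are standard and easy to state exactly. The sketch below gives pure-Python definitions of MAPE, MAE, RMSE, and the coefficient of determination; the toy values are illustrative, not the paper's GHI data.

```python
# Pure-Python versions of the error metrics reported in the abstract.
# The toy series below are illustrative, not the study's GHI data.
import math

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def mae(actual, pred):
    """Mean absolute error, in the units of the data."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean square error, in the units of the data."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def r2(actual, pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

actual = [2.0, 5.0, 6.0]   # e.g. GHI in kWh/m^2/day
pred = [2.0, 4.0, 6.0]
```

Note that MAPE and MAE/RMSE can rank models differently when the data span a wide range, which is why the abstract quotes all of them alongside R².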

Keywords: solar energy, global horizontal irradiation, artificial intelligence, predictive models

Procedia PDF Downloads 274
15529 Comparison of Different Machine Learning Algorithms for Solubility Prediction

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Molecular solubility prediction plays a crucial role in fields such as drug discovery, environmental science, and materials science. In this study, we compare the performance of five machine learning algorithms (linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks) for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9,981 data points with corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) were extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm was trained on a subset of the dataset and evaluated using accuracy scores; the computational time for training and testing was also recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibited strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrated competitive performance but with slightly higher errors than the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection for accurate and efficient predictions in practical applications.

Keywords: random forest, machine learning, comparison, feature extraction

Procedia PDF Downloads 42
15528 Plasma Properties Effect on Fluorescent Tube Plasma Antenna Performance

Authors: A. N. Dagang, E. I. Ismail, Z. Zakaria

Abstract:

This paper presents an analysis of the performance of a monopole antenna built from fluorescent tubes, using both simulation and experiment. Fluorescent tubes of different lengths and sizes were modelled in Computer Simulation Technology (CST) software, which was used to simulate antenna parameters such as return loss, resonant frequency, gain, and directivity. A Vector Network Analyzer (VNA) was used to measure the return loss of the plasma antenna in order to validate the simulation results. In both simulation and experiment, the supply frequency was swept from 1 GHz to 10 GHz. The results show that the return loss of the plasma antenna changes when the size of the fluorescent tube is varied, corresponding to different plasma properties: different values of plasma frequency and collision frequency give different return loss, gain, and directivity. The gain ranges from 2.14 dB to 2.36 dB, and the return loss of the plasma antenna spans the higher range of -22.187 dB to -32.903 dB. The higher the plasma frequency and collision frequency, the higher the return loss obtained. The values obtained are comparable to those of a conventional metal antenna.
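The plasma frequency the abstract refers to follows from the standard electron plasma-frequency formula, f_p = (1/2π)·sqrt(n_e·e²/(ε₀·m_e)). The electron density in the sketch below is an assumed illustration, not a measured property of the tubes in the paper.

```python
# Electron plasma frequency from the standard cold-plasma formula.
# The electron density used here is an assumed illustration only.
import math

E = 1.602176634e-19       # elementary charge, C
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
M_E = 9.1093837015e-31    # electron mass, kg

def plasma_frequency_hz(n_e_per_m3):
    """Electron plasma frequency in Hz for electron density n_e (m^-3)."""
    omega_p = math.sqrt(n_e_per_m3 * E * E / (EPS0 * M_E))
    return omega_p / (2.0 * math.pi)

# n_e ~ 1e18 m^-3 gives f_p ~ 9 GHz, i.e. inside the 1-10 GHz
# frequency sweep used in the paper.
f_p = plasma_frequency_hz(1e18)
```

Below f_p the ionized gas reflects and guides the signal much like a conductor, which is why tube size and discharge conditions (and hence plasma and collision frequency) shift the measured return loss.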

Keywords: plasma antenna, fluorescent tube, CST, plasma parameters

Procedia PDF Downloads 388
15527 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes

Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales

Abstract:

In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the diameter and flexibility of the resulting guidewire while taking into account the friction on the forming die. The result is an ensemble of models whose output lies within the strict required tolerance and covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.

Keywords: calibration, data modeling, industrial processes, machine learning

Procedia PDF Downloads 300
15526 Submarine Topography and Beach Survey of Gang-Neung Port in South Korea, Using Multi-Beam Echo Sounder and Shipborne Mobile Light Detection and Ranging System

Authors: Won Hyuck Kim, Chang Hwan Kim, Hyun Wook Kim, Myoung Hoon Lee, Chan Hong Park, Hyeon Yeong Park

Abstract:

We conducted a submarine topography and beach survey between December 2015 and January 2016 using the EM3001 multi-beam echo sounder (Kongsberg) and a shipborne mobile LiDAR system. The survey area was Anmok beach in Gangneung, South Korea. We built the shipborne mobile LiDAR system for these surveys; it comprises a LiDAR (RIEGL LMS-420i), an IMU (Inertial Measurement Unit, MAGUS Inertial+), and an RTK-GNSS (Real Time Kinematic Global Navigation Satellite System, LEIAC GS 15 GS25) for measuring the beach, compensating for the LiDAR's motion, and precise positioning. The system scans the beach with a laser from a moving vessel and was mounted on top of the vessel. Before the beach survey, we performed an eight-circle IMU calibration run to stabilize the IMU heading. This survey should be carried out as close to the beach as possible, but our vessel could not approach closer because of objects in the water. At the same time, we conducted the submarine topography survey using the EM3001 multi-beam echo sounder, a device that observes and records submarine topography using sound waves. It was mounted on the left side of the vessel, which was also equipped with a motion sensor, DGNSS (Differential Global Navigation Satellite System), and an SV (sound velocity) sensor for the vessel's motion compensation, position, and the speed of sound in seawater. The shipborne mobile LiDAR system reduced the time consumed by the beach survey compared with previous conventional survey methods.

Keywords: Anmok, beach survey, Shipborne Mobile LiDAR System, submarine topography

Procedia PDF Downloads 430
15525 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques

Authors: Gizem Eser Erdek

Abstract:

This study investigates predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of a cutting tool decreases, it damages the raw material it is processing, so this study aims to predict the remaining life of a cutting tool from the damage it causes to the raw material. To this end, hole photos were collected from a hole-drilling machine for 8 months and labeled in 5 classes according to hole quality, transforming the task into a classification problem. Using the prepared dataset, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the dataset, and a hybrid model combining convolutional neural networks and support vector machines was used for comparison. When all models are compared, the convolutional neural network model gives the most successful results, with a 74% accuracy rate. In preliminary studies in which the dataset was reduced to only the best and worst classes, the binary classification model reached roughly 93% accuracy. The results show that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material, and the experiments demonstrate that deep learning methods can serve as an alternative for cutting tool life estimation.

Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VggNet

Procedia PDF Downloads 79
15524 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We have implemented a recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17% and outstanding metrics such as area under the receiver operating characteristics curve (AUC) with a median of 68%, alongside median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend future investigations to incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
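The evaluation metrics the abstract reports (precision, recall, AUC) have compact definitions that can be sketched directly. The AUC below is computed by pairwise ranking, which is equivalent to the area under the ROC curve; the labels and scores are toy values, not the study's diabetes data.

```python
# Pure-Python sketch of the classification metrics the abstract
# reports. Labels and scores are toy values, not the study's data.

def precision_recall(labels, preds):
    """Precision and recall for binary labels/predictions in {0, 1}."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def auc(labels, scores):
    """Probability that a random positive outscores a random negative
    (ties count half): the rank definition of ROC AUC."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3]       # e.g. predicted risk of diabetes
preds = [1 if s >= 0.5 else 0 for s in scores]

p, r = precision_recall(labels, preds)
a = auc(labels, scores)
```

Reporting recall alongside AUC, as the abstract does, matters in screening settings: a high-recall model misses few true cases even when its ranking quality (AUC) is only moderate.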

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 56
15523 The Capabilities Approach as a Future Alternative to Neoliberal Higher Education in the MENA Region

Authors: Ranya Elkhayat

Abstract:

This paper aims at offering a futures study for higher education in the Middle East. Paying special attention to the negative impacts of neoliberalism, the paper will demonstrate how higher education is now commodified, corporatized and how arts and humanities are eschewed in favor of science and technology. This conceptual paper argues against the neoliberal agenda and aims at providing an alternative exemplified in the Capabilities Approach with special reference to Martha Nussbaum’s theory. The paper is divided into four main parts: the current state of higher education under neoliberal values, a prediction of the conditions of higher education in the near future, the future of higher education using the theoretical framework of the Capabilities Approach, and finally, some areas of concern regarding the approach. The implications of the study demonstrate that Nussbaum’s Capabilities Approach will ensure that the values of education are preserved while avoiding the pitfalls of neoliberalism.

Keywords: capabilities approach, education future, higher education, MENA

Procedia PDF Downloads 197