Search results for: multipoint network
128 Initiative Strategies on How to Increase the Value Added of the Recycling Business
Authors: Yananda Siraphatthada
Abstract:
The current study is the successor of a previous study on value-added management of the recycling business. Its aims are to 1) explore conditions for increasing the value added of the Thai recycling business, and 2) examine the implementation of the three-staged plan (short, medium, and long term) suggested by the former study to increase the value added of the recycling business as an immediate mechanism to accelerate government operation. Quantitative and qualitative methods were utilized in this research. The qualitative research consisted of in-depth interviews and focus group discussions. Responses were obtained from owners of waste separation plants and recycle shops, as well as officers in relevant governmental agencies, selected via quota sampling, and the data were analyzed via content analysis. The sample used for the quantitative method consisted of 1,274 licensed recycling operators in eight provinces, selected via stratified random sampling; these data were analyzed via descriptive statistics: frequency, percentage, mean, and standard deviation. The study recommended a three-staged plan: short, medium, and long term. The plan included the development of logistics, the provision of quality markets/plants, the amendment of recycling rules/regulations, the restructuring of the recycling business, the establishment of a green-purchasing recycling center, support for the campaigns run by the International Green Purchasing Network (IGPN), and conferences/workshops as a public forum to share insights among experts and concerned people.
Keywords: Strategies, Value Added, Recycling Business.
127 A Reference Framework Integrating Lean and Green Principles within Supply Chain Management
Authors: M. Bortolini, E. Ferrari, F. G. Galizia, C. Mora
Abstract:
In the last decades, an increasing set of companies adopted the lean philosophy to improve their productivity and efficiency, promoting the so-called continuous improvement concept, reducing waste of time and cutting off non-value-added activities. In parallel, increasing attention has risen toward green practice and management through the spread of the green supply chain pattern, to minimise landfilled waste, drained wastewater and pollutant emissions. Starting from a review of contributions examining lean and green principles applied to supply chain management, the most relevant drivers to measure the performance of industrial processes are pointed out. Specific attention is paid to the role of cost, because it is of key importance and it crosses both lean and green principles. This analysis leads to an original reference framework for integrating lean and green principles in designing and managing supply chains. The proposed framework supports the application of the lean-green integrated perspective to the whole value chain or to parts of it, e.g. the distribution network, assembly system, job-shop, storage system, etc. Evidence shows that the combination of lean and green practices leads to great results, higher than the sum of the performances from their separate application: lean thinking has beneficial effects on green practices and, at the same time, methods allowing environmental savings generate positive effects on time reduction and process quality.
Keywords: Environmental sustainability, green supply chain, integrated framework, lean thinking, supply chain management.
126 Cyber Fraud Schemes: Modus Operandi, Tools and Techniques, and the Role of European Legislation as a Defense Strategy
Authors: Papathanasiou Anastasios, Liontos George, Liagkou Vasiliki, Glavas Euripides
Abstract:
The purpose of this paper is to describe the growing problem of various cyber fraud schemes that exist on the internet and are currently among the most prevalent. The main focus of this paper is to provide a detailed description of the modus operandi, tools, and techniques utilized in four basic typologies of cyber frauds: Business Email Compromise (BEC) attacks, investment fraud, romance scams, and online sales fraud. The paper aims to shed light on the methods employed by cybercriminals in perpetrating these types of fraud, as well as the strategies they use to deceive and victimize individuals and businesses on the internet. Furthermore, this study outlines defense strategies intended to tackle the issue head-on, with a particular emphasis on the crucial role played by European legislation. European legislation has proactively adapted to the evolving landscape of cyber fraud, striving to enhance cybersecurity awareness, bolster user education, and implement advanced technical controls to mitigate associated risks. The paper evaluates the advantages and innovations brought about by the European legislation while also acknowledging potential flaws that cybercriminals might exploit. As a result, recommendations for refining the legislation are offered in this study in order to better address this pressing issue.
Keywords: Business email compromise, cybercrime, European legislation, investment fraud, Network and Information Security, online sales fraud, romance scams.
125 Quality Management in Spice Paprika Production as a Synergy of Internal and External Quality Measures
Authors: É. Kónya, E. Szabó, I. Bata-Vidács, T. Deák, M. Ottucsák, N. Adányi, A. Székács
Abstract:
Spice paprika is a major spice commodity in the European Union (EU), produced locally and imported from non-EU countries, and reported not only for chemical and microbiological contamination, but also for fraud. The effective interaction between producers' quality management practices and government and EU activities is described using the example of spice paprika production and control in Hungary, a leading spice paprika producer and per capita consumer in Europe. To demonstrate the importance of various contamination factors in the Hungarian production and EU trade of spice paprika, several aspects concerning the food safety of this commodity are presented. Alerts in the Rapid Alert System for Food and Feed (RASFF) of the EU between 2005 and 2013, as well as Hungarian state inspection results on spice paprika in 2004, are discussed, and quality non-compliance claims regarding spice paprika among EU member states are summarized by means of network analysis. Quality assurance measures established along the spice paprika production technology chain at the leading Hungarian spice paprika manufacturer, Kalocsai Fűszerpaprika Zrt., are surveyed, with the main critical control points identified. The structure and operation of the Hungarian state food safety inspection system is described. The concerted performance of the latter two quality management systems illustrates the effective interaction between internal (manufacturer) and external (state) quality control measures.
Keywords: Spice paprika, quality control, reporting mechanisms, RASFF, vulnerable points, HACCP, BRC Global Standard.
124 A Growing Neural Gas Approach for Evaluating Quality of Software Modules
Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur
Abstract:
The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A "whether a module is faulty or not" approach can be used to predict the quality of a software module. A number of software quality prediction models are described in the literature, based upon genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising aspects of quality prediction is based on clustering techniques. Most quality prediction models based on clustering make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas and fuzzy K-means algorithms. All these techniques require a predefined structure, i.e., the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined limit on the number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm that produces an initial set of labeled clusters from the training data set, and thereafter this set of clusters is used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers in evaluating the quality of modules during software development.
Keywords: Growing Neural Gas, data clustering, fault prediction.
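To make the abstract's pipeline concrete, here is a minimal sketch of Growing Neural Gas followed by the nearest-node labeling step. It is a simplified reading of Fritzke's published algorithm, not the authors' implementation: the isolated-node pruning step is omitted, and all parameter values (lam, eps_b, eps_n, alpha, a_max) are illustrative defaults.

```python
import numpy as np

def train_gng(X, max_nodes=20, n_iter=5000, lam=100, eps_b=0.2,
              eps_n=0.006, alpha=0.5, d=0.995, a_max=50, seed=0):
    """Minimal Growing Neural Gas (after Fritzke, 1995), without the
    isolated-node pruning step. Returns the learned node positions."""
    rng = np.random.default_rng(seed)
    i0, i1 = rng.choice(len(X), size=2, replace=False)
    W = [X[i0].astype(float), X[i1].astype(float)]   # node weight vectors
    E = [0.0, 0.0]                                   # accumulated errors
    edges = {(0, 1): 0}                              # edge -> age
    for t in range(1, n_iter + 1):
        x = X[rng.integers(len(X))]
        dists = [float(np.sum((x - w) ** 2)) for w in W]
        s1, s2 = (int(k) for k in np.argsort(dists)[:2])
        E[s1] += dists[s1]
        W[s1] = W[s1] + eps_b * (x - W[s1])          # move winner toward x
        for (i, j) in list(edges):
            if s1 in (i, j):
                edges[(i, j)] += 1                   # age the winner's edges
                n = j if i == s1 else i
                W[n] = W[n] + eps_n * (x - W[n])     # drag topological neighbors
        edges[tuple(sorted((s1, s2)))] = 0           # (re)connect winner and runner-up
        edges = {e: a for e, a in edges.items() if a <= a_max}
        if t % lam == 0 and len(W) < max_nodes:      # periodic node insertion
            q = int(np.argmax(E))
            nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
            if nbrs:
                f = max(nbrs, key=lambda n: E[n])
                W.append((W[q] + W[f]) / 2.0)        # new node between q and f
                E[q] *= alpha; E[f] *= alpha; E.append(E[q])
                r = len(W) - 1
                edges.pop(tuple(sorted((q, f))), None)
                edges[tuple(sorted((q, r)))] = 0
                edges[tuple(sorted((f, r)))] = 0
        E = [e * d for e in E]
    return np.array(W)

def predict_faulty(W, X_train, y_train, X_test):
    """Label each node by majority vote of the training modules it wins,
    then classify test modules by their nearest node (default: not faulty)."""
    y = np.asarray(y_train)
    nearest = lambda Z: np.argmin(((Z[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
    node = nearest(np.asarray(X_train, float))
    label = {int(k): int(round(y[node == k].mean())) for k in np.unique(node)}
    return np.array([label.get(int(k), 0) for k in nearest(np.asarray(X_test, float))])
```

Because nodes are added only at insertion steps, the cluster count never has to be fixed in advance, which is exactly the property the abstract emphasizes over K-means and its relatives.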
123 Petri-Nets Manipulation to Reduce Roaming Duration: Criterion to Improve Handoff Management
Authors: Hossam el-ddin Mostafa, Pavel Čičak
Abstract:
IETF RFC 2002 originally introduced the wireless Mobile-IP protocol to support portable IP addresses for mobile devices that often change their network access points to the Internet. The inefficiency of this protocol, mainly within handoff management, produces large end-to-end packet delays during the registration process and further degrades system efficiency due to packet losses between subnets. The criterion to initiate a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is the central issue considered in this paper. State-transition Petri-Nets modeling the scenario-based CIA (communication inter-agents) procedure, as an extension to the basic Mobile-IP registration process, were designed and manipulated. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco Router-1760 platform running IOS 12.3(15)T. Finally, stand-alone performance simulation results from MATLAB Simulink, within each subnet and also between subnets, are illustrated, reporting better end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower values of end-to-end packet delay for Mobile-IP with the CIA procedure and reduced packet losses between subnets.
Keywords: Cisco configuration, handoff, packet delay, Petri-Nets, registration process, Simulink.
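For orientation, the registration delay that the CIA procedure targets can be decomposed with a back-of-the-envelope model: the mobile waits for an agent advertisement, then the Registration Request travels MN -> FA -> HA and the Reply returns. This is an illustrative delay-accounting sketch, not the paper's Petri-Net model, and all link and processing times below are assumed values.

```python
def registration_delay(t_adv, t_mn_fa, t_fa_ha, t_proc_ha):
    """Rough Mobile-IP registration delay (illustrative assumption, not
    the paper's model): advertisement wait, plus Registration Request
    MN -> FA -> HA, home-agent processing, and the Reply HA -> FA -> MN."""
    rrq = t_mn_fa + t_fa_ha          # Registration Request path
    rrp = t_fa_ha + t_mn_fa          # Registration Reply path
    return t_adv + rrq + t_proc_ha + rrp

# e.g. 50 ms advertisement wait, 5 ms MN-FA link, 20 ms FA-HA path,
# 2 ms home-agent processing -> 0.102 s total
print(registration_delay(0.050, 0.005, 0.020, 0.002))
```

Any early-registration or inter-agent shortcut that removes terms from this sum shortens the window during which packets are lost, which is the effect the simulations report.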
122 Statistical Feature Extraction Method for Wood Species Recognition System
Authors: Mohd Iz'aan Paiz Bin Zamri, Anis Salwa Mohd Khairuddin, Norrima Mokhtar, Rubiyah Yusof
Abstract:
Effective statistical feature extraction and classification are important in image-based automatic inspection and analysis. An automatic wood species recognition system is designed to perform wood inspection at customs checkpoints to avoid mislabeling of timber, which would result in loss of income to the timber industry. The system focuses on analyzing the statistical pore properties of wood images. This paper proposes a fuzzy-based feature extractor which mimics the experts' knowledge of wood texture to extract the properties of pore distribution from the wood surface texture. The proposed feature extractor consists of two steps, namely pore extraction and fuzzy pore management. The total number of statistical features extracted from each wood image is 38. A backpropagation neural network is then used to classify the wood species based on the statistical features. A comprehensive set of experiments on a database composed of 5200 macroscopic images from 52 tropical wood species was used to evaluate the performance of the proposed feature extractor. The advantage of the proposed feature extraction technique is that it mimics the experts' interpretation of wood texture, which allows human involvement when analyzing the texture. Experimental results show the efficiency of the proposed method.
Keywords: Classification, fuzzy, inspection system, image analysis.
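The "fuzzy pore management" step can be pictured as grading each detected pore against expert-style size classes. The sketch below uses triangular membership functions with assumed breakpoints; the authors' actual rule base and the full set of 38 features are not reproduced here.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises from a to peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def pore_size_memberships(diameters_um):
    """Fuzzy grading of pore diameters into small/medium/large classes.
    The breakpoints (micrometers) are illustrative assumptions, not the
    thresholds used by the authors' extractor."""
    d = np.asarray(diameters_um, dtype=float)
    return {
        "small":  tri(d,   0,  50, 120),
        "medium": tri(d,  80, 150, 220),
        "large":  tri(d, 180, 300, 420),
    }

# Aggregating memberships over all pores in an image yields a few of the
# 38 statistical features fed to the backpropagation network.
m = pore_size_memberships([40, 95, 160, 260])
features = [m[k].mean() for k in ("small", "medium", "large")]
print(features)
```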
121 Fuzzy Logic Based Improved Range Free Localization for Wireless Sensor Networks
Authors: Ashok Kumar, Vinod Kumar
Abstract:
Wireless Sensor Networks (WSNs) are used to monitor vast inaccessible regions through the deployment of a large number of sensor nodes in the sensing area. For the majority of WSN applications, the collected data need to be combined with the geographic information of their origin to make them useful for the user; information received from remote Sensor Nodes (SNs) that are several hops away from the base station/sink is meaningless without knowledge of its source. In addition, the location information of SNs can be used to develop new network protocols for WSNs that improve their energy efficiency and lifetime. In this paper, range-free localization protocols for WSNs are proposed. The proposed protocols are based on the weighted centroid localization technique, where the edge weights of SNs are decided by fuzzy logic inference over the received signal strength and link quality between the nodes. The fuzzification is carried out using (i) Mamdani, (ii) Sugeno, and (iii) combined Mamdani-Sugeno fuzzy logic inference. Simulation results demonstrate that the proposed protocols provide better node localization accuracy than conventional centroid-based localization protocols, despite the presence of unintentional interference from radio frequency (RF) sources operating in the same frequency band.
Keywords: Localization, range free, received signal strength, link quality indicator, Mamdani fuzzy logic inference, Sugeno fuzzy logic inference.
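The core of the proposed protocols, a centroid weighted by fused signal strength and link quality, can be sketched as follows. The simple product weighting below is a stand-in for the Mamdani/Sugeno inference systems, whose rule bases the abstract does not give.

```python
import numpy as np

def weighted_centroid(anchors, rssi, lqi):
    """Range-free position estimate: weighted centroid of anchor positions.
    The edge weight here is a product of normalized RSSI and link quality,
    a placeholder for the paper's fuzzy inference.
    anchors: (n, 2) coordinates; rssi, lqi: length-n measurements."""
    anchors = np.asarray(anchors, float)
    rssi, lqi = np.asarray(rssi, float), np.asarray(lqi, float)
    r = (rssi - rssi.min()) / (rssi.ptp() or 1.0)   # normalize to [0, 1]
    q = (lqi - lqi.min()) / (lqi.ptp() or 1.0)
    w = r * q + 1e-9                                # fused edge weights
    return (w[:, None] * anchors).sum(0) / w.sum()

est = weighted_centroid([[0, 0], [10, 0], [0, 10]],
                        [-60.0, -70.0, -80.0],      # RSSI in dBm
                        [100.0, 90.0, 60.0])        # link quality indicator
print(est)   # pulled toward the strongest, best-quality anchor
```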
120 Early Registration: Criterion to Improve Communication Inter-Agents in Mobile-IP Protocol
Authors: Hossam el-ddin Mostafa, Pavel Čičak
Abstract:
In IETF RFC 2002, Mobile-IP was developed to enable laptops to maintain Internet connectivity while moving between subnets. However, packet loss arises when switching subnets because network connectivity is lost while the mobile host registers with the foreign agent, and this incurs large end-to-end packet delays. The criterion to initiate a simple and fast full-duplex connection between the home agent and the foreign agent, in order to reduce the roaming duration, is the central issue considered in this paper. State-transition Petri-Nets modeling the scenario-based CIA (communication inter-agents) procedure, as an extension to the basic Mobile-IP registration process, were designed and manipulated to describe the system in discrete events. A heuristic configuration file for the registration parameters was created during a practical setup session on a Cisco Router-1760 platform running IOS 12.3(15)T with TFTP server software. Finally, stand-alone performance simulations from MATLAB Simulink, within each subnet and also between subnets, are illustrated, reporting better end-to-end packet delays. The results verified the effectiveness of our Mathcad analytical manipulation and experimental implementation, showing lower values of end-to-end packet delay for Mobile-IP using CIA procedure-based early registration and reduced packet losses between subnets.
Keywords: Cisco configuration, handoff, Mobile-IP, packet delay, Petri-Nets, registration process, Simulink.
119 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images
Authors: SP. Chokkalingam, K. Komathy
Abstract:
Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields, one of which is the biomedical field, for prognosis as well as diagnosis of diseases. This plethora of methods, though it provides a wide range of options to select from, also causes confusion in selecting the apt process and in finding which one is more suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Among the techniques existing in the field, our proposed system tends to be more effective, as it depends on new methodologies that have been proven to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and consistent rate of diagnosis that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm, and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. Using preprocessing, noise is removed from the images; using segmentation, the region of interest is found; and histogram smoothing is applied to a specific portion of the images. Gray-level co-occurrence matrix (GLCM) features such as mean, median, energy and correlation, together with bone mineral density (BMD), are then computed and stored in a database. This dataset is trained with inflamed and non-inflamed values, and with the help of a neural network all new images are checked for their status; rough sets are implemented for further feature reduction.
Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.
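Of the feature-extraction steps listed, the GLCM features are the most mechanical, so here is a small NumPy sketch of computing a co-occurrence matrix and two of the named features (energy and correlation). The offset and gray-level count are assumptions; BMD and the rough-set reduction are outside this sketch.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).
    img: 2-D array of ints in [0, levels)."""
    P = np.zeros((levels, levels))
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]   # reference pixels
    b = img[dy:, dx:]                                 # offset neighbors
    np.add.at(P, (a.ravel(), b.ravel()), 1)
    return P / P.sum()                                # joint probabilities

def glcm_features(P):
    """Energy and correlation, two of the GLCM features the paper lists."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    energy = (P ** 2).sum()
    correlation = ((i - mu_i) * (j - mu_j) * P).sum() / (s_i * s_j)
    return energy, correlation

img = np.random.default_rng(0).integers(0, 8, (64, 64))
print(glcm_features(glcm(img)))
```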
118 Managing Iterations in Product Design and Development
Authors: K. Aravindhan, Trishit Bandyopadhyay, Mahesh Mehendale, Supriya Kumar De
Abstract:
The inherent iterative nature of product design and development poses a significant challenge to reducing product design and development (PD) time. In order to shorten the time to market, organizations have adopted concurrent development, where multiple specialized tasks and design activities are carried out in parallel. The iterative nature of the work, coupled with the overlap of activities, can result in unpredictable time to completion and significant rework. Many products have missed the time-to-market window due to unanticipated, or rather unplanned, iteration and rework. The iterative and often overlapped processes introduce greater amounts of ambiguity in design and development, where the traditional methods and tools of project management provide less value. In this context, identifying critical metrics to understand iteration probability is an open research area where a significant contribution can be made, given that iteration has been the key driver of cost and schedule risk in PD projects. Two important questions that the proposed study attempts to address are: Can we predict and identify the number of iterations in a product development flow? Can we provide managerial insights for better control over iteration? The proposal introduces the concept of decision points and, using this concept, intends to develop metrics that can provide managerial insights into iteration predictability. By characterizing the product development flow as a network of decision points, the proposed research intends to delve further into iteration probability and attempts to provide more clarity.
Keywords: Decision Points, Iteration, Product Design, Rework.
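One way to see why decision points matter for iteration predictability: if a decision point loops work back with a fixed, independent probability on each pass, the expected number of passes follows directly from the geometric distribution. This is a toy reading of "decision points" for illustration, not the metric the proposal develops.

```python
def expected_passes(p_rework):
    """If a decision point rejects and loops work back with independent
    probability p on each pass, the pass count is geometric with mean
    1 / (1 - p)."""
    return 1.0 / (1.0 - p_rework)

# Even modest rework probabilities inflate schedule noticeably:
for p in (0.2, 0.5):
    print(p, expected_passes(p))   # 0.2 -> 1.25 passes, 0.5 -> 2.0 passes
```

A chain of such decision points multiplies these factors, which is one intuition for why iteration dominates cost and schedule risk in PD projects.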
117 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase the performance. Among various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with the dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset of generated fake images, the best accuracy and mean average error rate were recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.
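The comparison protocol (baseline CNNs vs. CNNs trained on data augmented with StyleGAN fakes) hinges on two numbers per model. A small sketch of those metrics follows; "mean average error rate" is interpreted here as the mean of per-class error rates, which is an assumption, since the abstract does not define it.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Accuracy and mean of per-class error rates for a live/spoof
    classifier (labels: 0 = live, 1 = spoof). The second metric is an
    assumed reading of the paper's 'mean average error rate'."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = (y_true == y_pred).mean()
    err_live = (y_pred[y_true == 0] != 0).mean()    # live misread as spoof
    err_spoof = (y_pred[y_true == 1] != 1).mean()   # spoof misread as live
    return acc, (err_live + err_spoof) / 2.0

# Baseline and augmented models would each be scored with this function
# on the same held-out (ideally unseen-material) test set.
print(evaluate([0, 0, 1, 1], [0, 1, 1, 1]))
```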
116 Applying Case-Based Reasoning in Supporting Strategy Decisions
Authors: S. M. Seyedhosseini, A. Makui, M. Ghadami
Abstract:
Globalization, and therefore increasingly tight competition among companies, has increased the importance of making well-timed decisions. Devising and employing effective strategies that are flexible and adaptive to a changing market stands a greater chance of being effective in the long term. On the other side, a clear focus on managing the entire product lifecycle has emerged as a critical area for investment. Therefore, applying well-organized tools to employ past experience in new cases helps to make proper managerial decisions. Case-based reasoning (CBR) is a means of solving a new problem by using or adapting solutions to old problems. In this paper, an adapted CBR model with k-nearest neighbor (K-NN) is employed to provide suggestions for better decision making, applied to a given product in the middle-of-life phase. The set of solutions is weighted by CBR on the principle of group decision making. A wrapper approach with a genetic algorithm is employed to generate optimal feature subsets. A department-store dataset, including various products collected over two years, has been used. A k-fold approach is used to evaluate the classification accuracy rate. Empirical results are compared with the classical case-based reasoning algorithm, which has no special process for feature selection, the CBR-PCA algorithm based on filter-approach feature selection, and an artificial neural network. The results indicate that the predictive performance of the model, compared with the two CBR algorithms, is more effective in this specific case.
Keywords: Case based reasoning, genetic algorithm, group decision making, product management.
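The retrieval step of the adapted CBR model, k-NN over a GA-selected feature subset, can be sketched as below. The mask is what the genetic-algorithm wrapper would evolve (with k-fold accuracy as its fitness); the case data here are placeholders.

```python
import numpy as np

def knn_retrieve(cases, target, mask, k=3):
    """CBR retrieval: find the k most similar past cases, using only the
    feature subset selected by the GA wrapper ('mask', a 0/1 vector).
    cases: (n, d) matrix of past cases; target: (d,) new case."""
    m = np.asarray(mask, bool)
    d = np.linalg.norm(np.asarray(cases, float)[:, m]
                       - np.asarray(target, float)[m], axis=1)
    return np.argsort(d)[:k]            # indices of retrieved cases

cases = [[1., 5., 0.], [2., 4., 1.], [9., 1., 0.], [8., 2., 1.]]
idx = knn_retrieve(cases, [2., 5., 1.], mask=[1, 1, 0], k=2)
print(idx)
# The retrieved solutions would then be weighted and combined in the
# group-decision-making style the abstract describes (the reuse step).
```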
115 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots
Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov
Abstract:
This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time consuming and inefficient. As such, solutions tend toward a more reactive approach, repairing faults, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP) and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
Keywords: Autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem.
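For the single-robot case, the Chinese Postman idea behind the k-CPP can be demonstrated with NetworkX: make the pipe graph Eulerian by duplicating edges, then walk an Eulerian circuit so every pipe is traversed. Note that nx.eulerize does not necessarily minimize the total added pipe length, and splitting the resulting tour among k robots (the k-CPP proper) is not shown.

```python
import networkx as nx

# Small pipe network with segment lengths as edge weights.
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 40), ("B", "C", 30), ("C", "D", 25),
    ("D", "A", 35), ("B", "D", 20),
])

H = nx.eulerize(G)                      # duplicate edges so all degrees are even
tour = list(nx.eulerian_circuit(H, source="A"))
print(tour)                             # every pipe traversed at least once
```

A weight-aware minimum matching on the odd-degree nodes would be needed for a true CPP optimum, and an ant-colony or stigmergic variant could replace the global computation where robots cannot communicate quickly.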
114 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration
Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith
Abstract:
Multimodal image registration is a profoundly complex task, which is why deep learning has been widely used to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB and G_BA and two discrimination networks D_A and D_B, connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which learns to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â and B̂, which were registered to B̃ and Ã, which in turn were registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning- and non-learning-based registration algorithms. Our approach leads to Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.
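The cycle-consistency term can be wired up independently of the network internals, since it only needs the two registration mappings as callables. A minimal sketch follows, with plain functions standing in for G_AB/G_BA plus their spatial transformation layers.

```python
import numpy as np

def cycle_loss(A, B, reg_to_a, reg_to_b):
    """L1 cycle-consistency: deform B toward A, deform the result back
    toward B, and compare with the original B (and symmetrically for A).
    reg_to_a / reg_to_b stand in for the registration networks plus
    their spatial transformers; here they are plain callables."""
    B_reg = reg_to_a(B, A)       # B registered onto A
    B_cyc = reg_to_b(B_reg, B)   # registered back, should recover B
    A_reg = reg_to_b(A, B)
    A_cyc = reg_to_a(A_reg, A)
    return np.abs(B_cyc - B).mean() + np.abs(A_cyc - A).mean()

# Identity 'networks' give zero loss, a quick sanity check of the wiring:
ident = lambda moving, fixed: moving
A, B = np.random.rand(64, 64), np.random.rand(64, 64)
print(cycle_loss(A, B, ident, ident))   # 0.0
```

Because both images in each comparison share a modality, an ordinary intensity loss suffices here, which is exactly how the cycle sidesteps the cross-modality loss problem the abstract describes.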
113 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a kind of lung disease that creates congestion in the chest, and such pneumonic conditions can lead to loss of life due to the severity of the congestion. Pneumonic lung disease may be caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia; the early prediction and classification of such lung diseases helps reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are pre-processed first to enhance their quality for further analysis. We then applied a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image; this model yields an effective 1D feature vector for each input CT image. The outcome of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes using a publicly available dataset demonstrate the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.
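The normalization step between the 2D CNN features and the second-stage classifiers is standard Min-Max scaling. A short sketch follows; scaling per feature (rather than globally) is an assumption, as the abstract does not say which is used.

```python
import numpy as np

def min_max_normalize(features, eps=1e-12):
    """Min-Max scaling of CNN feature vectors to [0, 1], per feature.
    features: (n_images, n_features) array of extracted features."""
    f = np.asarray(features, float)
    lo, hi = f.min(axis=0), f.max(axis=0)
    return (f - lo) / (hi - lo + eps)

# The normalized vectors feed the second-stage classifiers.
print(min_max_normalize([[1., 10.], [3., 30.], [2., 20.]]))
```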
112 Experimental and Numerical Study of the Effect of Lateral Wind on the Feeder Airship
Authors: A. Suñol, D. Vucinic, S. Vanlanduit, T. Markova, A. Aksenov, I. Moskalyov
Abstract:
The Feeder is one of the airships of the Multibody Advanced Airship for Transport (MAAT) system, under development within an EU FP7 project. MAAT is based on a modular concept composed of two different parts that can join, the so-called Cruiser and Feeder, designed on the lighter-than-air principle. The Feeder, also named ATEN (Airship Transport Elevator Network), is the smaller one, which joins the bigger one, the Cruiser, also named PTAH (Photovoltaic modular Transport Airship for High altitude); the joining is envisaged to happen at 15 km altitude. During the MAAT design phase, the aerodynamics of both airships and their interactions are analyzed. The objective of these studies is to understand the aerodynamic behavior of all the preselected configurations, as an important element in the overall MAAT system design. Most of these configurations are only simulated by CFD, while the most feasible one is experimentally analyzed in order to validate and support the CFD predictions. This paper presents the numerical and experimental investigation of the Feeder "conical-like" shape configuration. The experiments focus on the aerodynamic force coefficients and the pressure distribution over the Feeder outer surface, while the numerical simulations also cover the analysis of the velocity and pressure distributions. Finally, the wind tunnel experiment is compared with its CFD model in order to validate such specific simulations against the respective experiments and to better understand the difference between the wind tunnel and in-flight conditions.
Keywords: MAAT project Feeder, CFD simulations, wind tunnel experiments, lateral wind influence.
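The aerodynamic force coefficients measured in the wind tunnel are the usual non-dimensional reduction of balance readings by dynamic pressure and reference area. A one-function sketch with illustrative numbers (not the paper's measurements):

```python
def force_coefficient(F, rho, V, A_ref):
    """Non-dimensional aerodynamic force coefficient C = F / (q * A_ref),
    with dynamic pressure q = 0.5 * rho * V**2. This is the standard
    reduction of wind tunnel balance data; all values below are
    illustrative, not the paper's."""
    q = 0.5 * rho * V ** 2
    return F / (q * A_ref)

# e.g. a 12 N lateral force at 20 m/s in air (rho ~ 1.2 kg/m^3)
# on a 0.5 m^2 reference area -> C = 0.1
print(force_coefficient(F=12.0, rho=1.2, V=20.0, A_ref=0.5))
```

Matching these coefficients between the tunnel and the CFD model is what allows the simulations of the remaining, untested configurations to be trusted.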
111 A Survey of WhatsApp as a Tool for Instructor-Learner Dialogue, Learner-Content Dialogue, and Learner-Learner Dialogue
Authors: Ebrahim Panah, Muhammad Yasir Babar
Abstract:
Thanks to the development of online technology and social networks, people are able to communicate as well as learn. WhatsApp is a social networking app which is steadily gaining popularity, and it can be used for communication as well as education: for instructor-learner, learner-learner, and learner-content interactions. However, very little knowledge is available on these potentials of WhatsApp. The current study was undertaken to investigate university students' perceptions of WhatsApp used as a tool for instructor-learner dialogue, learner-content dialogue, and learner-learner dialogue. The study adopted a survey approach and distributed a questionnaire developed in Google Forms to 54 university students (11 males and 43 females). The obtained data were analyzed using SPSS version 20. The results indicate that students have positive attitudes towards WhatsApp as a tool for instructor-learner dialogue: it is easy to reach the lecturer (4.07), the instructor gives me valuable feedback on my assignment (4.02), and the instructor is supportive during course discussion and offers continuous support to the class (4.00); for learner-content dialogue: WhatsApp allows me to academically engage with lecturers anytime, anywhere (4.00), it helps to send graphics such as pictures or charts directly to the students (3.98), and it provides out-of-class, extra learning materials and homework (3.96); and for learner-learner dialogue: WhatsApp is a good tool for sharing knowledge with others (4.09), WhatsApp allows me to academically engage with peers anytime, anywhere (4.07), and we can interact with others through the use of group discussion (4.02). It was also found that there are significant positive correlations between students' perceptions of Instructor-Learner Dialogue (ILD), Learner-Content Dialogue (LCD), Learner-Learner Dialogue (LLD) and WhatsApp application in the classroom. The findings of the study have implications for lecturers, policy makers and curriculum developers.
Keywords: Instructor-learner dialogue, learners-contents dialogue, learner-learner dialogue, WhatsApp.
110 A Study on Integrated Performance of Tap-Changing Transformer and SVC in Association with Power System Voltage Stability
Authors: Mahmood Reza Shakarami, Reza Sedaghati
Abstract:
Electricity market activities and a growing demand for electricity have led to heavily stressed power systems. This requires operation of the networks closer to their stability limits, and power system operation is affected by stability-related problems, leading to unpredictable system behavior. Voltage stability refers to the ability of a power system to sustain appropriate voltage levels through large and small disturbances; steady-state voltage stability is concerned with limits on the existence of steady-state operating points for the network. FACTS devices can be utilized to increase transmission capacity, improve the stability margin and dynamic behavior, and ensure improved power quality. Their main capabilities are reactive power compensation, voltage control and power flow control. Among the FACTS controllers, the Static Var Compensator (SVC) provides fast-acting dynamic reactive compensation for voltage support during contingency events. In this paper, voltage stability assessment with appropriate representations of tap-changing transformers and the SVC is investigated, and integrating both of these devices is the main topic. The effects of the presence of tap-changing transformers on the static VAR compensator controller parameters and the ratings necessary to stabilize load voltages at certain values are highlighted, and the interrelation between transformer off-nominal tap ratios and the SVC controller gains, droop slopes and rating is found. P-V curves are constructed to calculate loadability margins.
Keywords: SVC, voltage stability, P-V curve, reactive power, tap changing transformer.
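The P-V (nose) curves used to calculate loadability margins can be illustrated on a two-bus equivalent: a source E behind reactance X feeding a load at constant power factor. The per-unit values below are illustrative; the paper's network, tap-changer and SVC models are not reproduced.

```python
import numpy as np

# Two-bus feeder: source E behind reactance X feeding load P + jQ.
# The receiving-end voltage satisfies the standard quartic
#   V**4 + (2*Q*X - E**2)*V**2 + X**2*(P**2 + Q**2) = 0,
# giving a stable upper branch and an unstable lower branch. An SVC
# injecting reactive support (reducing net Q) pushes the nose outward.
E, X, tan_phi = 1.0, 0.25, 0.2    # per unit; load at constant power factor

for P in np.linspace(0.0, 2.0, 9):
    Q = P * tan_phi
    b = 2 * Q * X - E ** 2
    disc = b ** 2 - 4 * X ** 2 * (P ** 2 + Q ** 2)
    if disc < 0:                   # past the nose: no operating point
        print(f"P = {P:.2f} pu: beyond loadability limit")
        break
    V_hi = np.sqrt((-b + np.sqrt(disc)) / 2)
    V_lo = np.sqrt((-b - np.sqrt(disc)) / 2)
    print(f"P = {P:.2f} pu  V = {V_hi:.3f} / {V_lo:.3f} pu")
```

The loadability margin is the distance from the operating point to the value of P at which the discriminant vanishes, which is the quantity the constructed P-V curves report.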
109 Construction Unit Rate Factor Modelling Using Neural Networks
Authors: Balimu Mwiya, Mundia Muya, Chabota Kaliba, Peter Mukalula
Abstract:
Factors affecting construction unit cost vary depending on a country's political, economic, social and technological inclinations, and they have been studied from various perspectives. Analysis of cost factors requires an appreciation of a country's practices, and the identified cost factors provide an indication of a country's construction economic strata. The purpose of this paper is to identify the essential factors that affect unit cost estimation and their breakdown using artificial neural networks. Twenty-five (25) identified cost factors in road construction were subjected to a questionnaire survey, and employing SPSS factor analysis the factors were reduced to eight. The eight factors were analysed using a neural network (NN) to determine the proportionate breakdown of the cost factors in a given construction unit rate. The NN predicted that the political environment accounted for 44% of the unit rate, followed by contractor capacity at 22%, and financial delays, project feasibility, and overhead and profit each at 11%. Project location, material availability and the corruption perception index had minimal impact on the unit cost in the training data provided. Quantified cost factors can be incorporated in unit cost estimation models (UCEM) to produce more accurate estimates. This can improve the cost estimation of infrastructure projects and establish a benchmark standard to assist the alignment of work practices and the training of new staff, permitting the ongoing development of best practices in cost estimation.
Keywords: Construction cost factors, neural networks, roadworks, Zambian Construction Industry.
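One hedged reading of "proportionate breakdown via NN" is: fit a network from the eight factor scores to the unit rate, then apportion influence with permutation importance. The sketch below uses synthetic placeholder data shaped to echo the reported shares; it is not the authors' model or the Zambian survey data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

# Random placeholder data: 8 reduced cost factors driving a unit rate.
rng = np.random.default_rng(0)
X = rng.random((120, 8))
y = 0.44 * X[:, 0] + 0.22 * X[:, 1] + 0.11 * (X[:, 2] + X[:, 3] + X[:, 4])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, y)
imp = permutation_importance(net, X, y, n_repeats=20, random_state=0)
share = imp.importances_mean / imp.importances_mean.sum()
print(np.round(share, 2))   # approximate proportionate factor shares
```

Permutation importance is only one way to apportion a fitted network's output among inputs; the paper does not state which attribution it used.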
108 A Research on the Coordinated Development of Chengdu-Chongqing Economic Circle Under the Background of New Urbanization
Authors: Deng Tingting
Abstract:
The coordinated and integrated development of regions is an inevitable requirement for China to move towards high-quality sustainable development. As one of the regions with the best economic foundation and the strongest economic strength in western China, the Chengdu-Chongqing economic circle is an area of national importance with strong network-connection characteristics, linking the inland hinterland and connecting the western and national urban networks. Its integrated development is of great strategic significance for the rapid and high-quality development of the western region. In the context of new urbanization, this paper takes 16 urban units within the economic circle as the research object and, based on five-year panel data on population, regional economy, and spatial construction and development from 2016 to 2020, uses the entropy method and the Theil index to analyze the three target layers and the causes of the observed differences. The research shows that there are temporal and spatial disparities within the Chengdu-Chongqing economic circle, with significant differences between the core cities and the surrounding cities. Therefore, reforming and innovating the regional coordinated development mechanism, breaking administrative barriers, and strengthening the "polar nucleus" radiation function to release the driving force for economic development, especially in the gully areas of the economic development belts, will not only promote the coordinated development of the internal regions, but also promote the coordinated and sustainable development of the western region toward a high-quality development path.
Keywords: Chengdu-Chongqing economic circle, new urbanization, coordinated regional development, Theil Index.
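The Theil index used for the disparity analysis has a compact closed form; a sketch with illustrative numbers (not the paper's panel data):

```python
import numpy as np

def theil_index(x):
    """Theil T inequality index: T = mean( (x/mu) * ln(x/mu) ).
    Zero for perfect equality, larger for greater regional disparity.
    x: positive values, e.g. GDP per capita of the 16 urban units."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

# A dominant core city raises T sharply (illustrative values only):
print(theil_index([1, 1, 1, 1]))      # 0.0
print(theil_index([10, 1, 1, 1]))     # ~0.59
```

The Theil index also decomposes additively into within-group and between-group terms, which is what makes it suitable for separating core-versus-periphery disparity from disparity inside each group.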
107 Modeling and Simulation of Overcurrent and Earth Fault Relay with Inverse Definite Minimum Time
Authors: Win Win Tun, Han Su Yin, Ohn Zin Lin
Abstract:
Transmission networks are an important part of an electric power system. Transmission lines not only have high power transmission capacity, they are also prone to faults of large magnitude. Different types of faults occur in transmission lines, such as single line-to-ground (L-G) faults, double line-to-ground (L-L-G) faults, line-to-line (L-L) faults, and three-phase (L-L-L) faults. These faults need to be cleared quickly in order to reduce the damage caused to the system, as they have a high impact on the electrical power equipment connected to the transmission line. The main fault in transmission lines is the L-G fault. Therefore, protection relays are needed to protect the transmission line. The overcurrent and earth fault relay is an important relay used to protect transmission lines, distribution feeders, transformers, bus couplers, etc., sometimes as main protection and sometimes as backup protection. The modeling of protection relays is important to indicate the effects of network parameters and configurations on the operation of relays. Therefore, the modeling of an overcurrent and earth fault relay is described in this paper. Overcurrent and earth fault relays with the standard inverse definite minimum time characteristic are modeled and simulated using MATLAB/Simulink software. The developed model was tested with L-G, L-L-G, L-L and L-L-L faults at various fault locations and with a fault resistance of 0.001 Ω. The simulation results obtained with MATLAB show the feasibility of analyzing transmission line protection with the overcurrent and earth fault relay.
Keywords: Transmission line, overcurrent and earth fault relay, standard inverse definite minimum time, various faults, MATLAB Software.
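The standard inverse definite minimum time characteristic referred to here is the published IEC 60255 curve, t = TMS * 0.14 / ((I/Is)^0.02 - 1). A direct transcription follows; the relay settings in the example are illustrative, not the paper's.

```python
def idmt_standard_inverse(I_fault, I_set, tms):
    """IEC 60255 standard-inverse trip time:
        t = TMS * 0.14 / ((I / Is)**0.02 - 1)
    I_fault: fault current, I_set: relay pickup current (Is),
    tms: time multiplier setting."""
    ratio = I_fault / I_set
    if ratio <= 1.0:
        return float("inf")           # below pickup: relay does not trip
    return tms * 0.14 / (ratio ** 0.02 - 1.0)

# A close-in fault (10x pickup) trips much faster than a remote one (2x):
print(idmt_standard_inverse(10.0, 1.0, tms=0.1))  # ~0.30 s
print(idmt_standard_inverse(2.0, 1.0, tms=0.1))   # ~1.00 s
```

This inverse relationship between fault current and trip time is what lets the relay grade with downstream devices while still clearing close-in faults quickly.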
106 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning
Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar
Abstract:
As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, distributed computing may cause resource waste and high costs. However, resource scheduling is usually an NP-hard problem, so no general solution is available. Optimization algorithms such as genetic algorithms and ant colony optimization exist, but the large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually employed in this situation to ease the computing load. As a result, we review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using machine learning methods, we try to find the important factors that influence the performance of distributed system computing and help the distributed system perform efficient computing resource scheduling. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling, proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling, and concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.
Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.
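To ground the survey's framing (state, action, reward for scheduling), here is a deliberately tiny tabular Q-learning toy that assigns tasks to machines to curb makespan. It is a conceptual stand-in for the deep RNN-based method the paper proposes, not that method; the state abstraction (index of the least-loaded machine) is chosen only to keep the table small.

```python
import numpy as np

rng = np.random.default_rng(1)
M, episodes, alpha, eps = 3, 2000, 0.1, 0.1
# Q[state, action]: state = index of currently least-loaded machine,
# action = machine chosen to receive the next task.
Q = np.zeros((M, M))

for _ in range(episodes):
    loads = np.zeros(M)
    for _ in range(10):                  # 10 tasks per episode
        s = int(np.argmin(loads))
        a = int(rng.integers(M)) if rng.random() < eps else int(np.argmax(Q[s]))
        loads[a] += rng.uniform(1, 5)    # task duration
        r = -loads.max()                 # penalize a growing makespan
        s2 = int(np.argmin(loads))
        Q[s, a] += alpha * (r + Q[s2].max() - Q[s, a])

# The greedy policy typically learns to route work to the least-loaded
# machine, i.e. the action matching the state index.
print(Q.argmax(axis=1))
```

A deep method replaces the table with a network (a recurrent one, in the paper's proposal, so that load histories rather than a single summary statistic can inform the decision).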
105 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: Conditional Generative Adversarial Net, market and credit risk management, neural network, time series.
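The VaR/ES calculation the abstract mentions is the same whether the scenario set is historical (HS) or CGAN-generated; only the source of the scenarios differs. A sketch of the empirical-quantile version follows, with a Student-t sample as a placeholder for scenario P&L.

```python
import numpy as np

def var_es(pnl, level=0.99):
    """Value-at-Risk and Expected Shortfall from profit-and-loss
    scenarios. Losses are reported as positive numbers; the scenarios
    may come from history (HS) or from a trained generator."""
    losses = np.sort(-np.asarray(pnl, float))[::-1]  # losses, descending
    k = int(np.ceil((1 - level) * len(losses)))      # tail sample count
    var = losses[k - 1]                              # (1 - level) quantile
    es = losses[:k].mean()                           # mean beyond VaR
    return var, es

# Heavy-tailed placeholder P&L: fatter tails than normal -> larger VaR/ES
pnl = np.random.default_rng(0).standard_t(df=4, size=10_000) * 0.01
print(var_es(pnl, 0.99))
```

The backtesting comparison in the paper amounts to checking how often realized losses exceed the VaR produced from each scenario source.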
104 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System
Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya
Abstract:
The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains, where earthquakes have caused enormous damage in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate such damage. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert the residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events in the incoming three-axis accelerometer signal at the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector, which detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized for minimizing detection time and maximizing detection accuracy. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS, and in all test cases the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: with the test data, the detection time was around 2 seconds for data sampled at 10 Hz, which reduced to 0.3 seconds for data sampled at 100 Hz.
Keywords: Earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector.
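The keywords name STA/LTA as the first-stage event tracker; a textbook trailing-window version follows. The window lengths and threshold are typical values, not the optimized parameters the paper reports.

```python
import numpy as np

def sta_lta_trigger(x, fs, sta_win=0.5, lta_win=10.0, threshold=4.0):
    """Classic trailing-window STA/LTA detector: trigger when the
    short-term average of signal energy exceeds `threshold` times the
    long-term average. Returns the trigger time in seconds, or None."""
    e = np.asarray(x, float) ** 2
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(e, np.ones(ns) / ns, mode="full")[:e.size]
    lta = np.convolve(e, np.ones(nl) / nl, mode="full")[:e.size] + 1e-20
    ratio = sta / lta
    ratio[:nl] = 0.0                    # ignore the LTA warm-up period
    hits = np.flatnonzero(ratio > threshold)
    return hits[0] / fs if hits.size else None

fs = 100
t = np.arange(0, 30, 1 / fs)
x = 0.01 * np.random.default_rng(0).standard_normal(t.size)
x[t >= 20] += np.sin(2 * np.pi * 5 * t[t >= 20])   # synthetic arrival at 20 s
print(sta_lta_trigger(x, fs))                      # ~20 s
```

Because the STA window spans a fixed number of samples, a higher sampling rate fills it faster after onset, which is consistent with the reported drop in detection time from 2 s at 10 Hz to 0.3 s at 100 Hz.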
103 User-Perceived Quality Factors for Certification Model of Web-Based System
Authors: Jamaiah H. Yahaya, Aziz Deraman, Abdul Razak Hamdan, Yusmadi Yah Jusoh
Abstract:
One of the most essential issues for software products is maintaining their relevance to the dynamics of users' requirements and expectations. Many studies have been carried out on the quality aspects of software products to overcome these problems, and previous software quality assessment models and metrics have been introduced, each with strengths and limitations. In order to enhance assurance and confidence in software products, certification models have been introduced and developed. From our previous experience in certification exercises and case studies, collaborating with several agencies in Malaysia, the requirement for a user-based software certification approach was identified and is in demand. The emergence of social network applications, new development approaches such as agile methods, and other varieties of software in the market have led to the domination of users over the software. As software becomes more accessible to the public through internet applications, users are becoming more critical of the quality of the services provided by the software. There are several categories of users in web-based systems, with different interests and perspectives. The classifications and metrics were identified through a brainstorming approach involving researchers, users and experts in this area. The new paradigm in software quality assessment is the main focus of our research. This paper discusses the classification of users in web-based software system assessment and the associated factors and metrics for quality measurement. The quality model is derived based on the IEEE structure and the FCM model. The developments are beneficial and valuable for overcoming the constraints and improving the application of software certification models in the future.
Keywords: Software certification model, user centric approach, software quality factors, metrics and measurements, web-based system.
102 Expanding Affordable Housing through Inclusionary Zoning in the City of Toronto
Authors: Sam Moshaver
Abstract:
Reasonably priced and well-constructed housing must be an integral element of a healthy society. The absence of housing that everyone in society can afford negatively affects people's health, education, ability to get jobs, and ability to develop their community. Without access to decent housing, economic development, the integration of immigrants and inclusiveness are all negatively impacted. Canada has a sterling record in creating housing compared to many other nations around the globe. Canadian housing is supported by a mature and responsive mortgage network and a top-quality construction industry, as well as safe, excellent-quality building materials that are readily available. Yet 1.7 million Canadian households occupy substandard abodes. During the past hundred years, Canada's government has made a wide variety of attempts to provide decent residential facilities that every Canadian can afford. Despite these laudable efforts, today Canada is left with housing that is inadequate for many Canadians. People who own their housing are given all kinds of privileges and perks, while people with relatively low incomes who rent their apartments or houses are discriminated against. To help solve these problems, zoning based on an "inclusionary" philosophy is a tool developed to help provide people with the affordable residences they need. Now, thirty years after its introduction, this type of zoning has been shown to be effective in helping to build and provide Canadians with houses or apartments they can afford to pay for. Using this form of zoning can have different results depending on where and how it is used. After examining Canadian affordable housing and four American cases where this type of zoning was enforced, this paper makes various recommendations for expanding Canadians' access to housing they can afford.
Keywords: Affordable housing, inclusionary zoning, low-income housing, Toronto housing.
101 Time Series Forecasting Using Various Deep Learning Models
Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan
Abstract:
Time Series Forecasting (TSF) is used to predict the target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the University of California, Irvine (UCI) repository, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
Keywords: Air quality prediction, deep learning algorithms, time series forecasting, look-back window.
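The look-back-window framing in the abstract corresponds to a simple, model-agnostic dataset transformation; a sketch (shapes only, with a synthetic series standing in for the air-quality data):

```python
import numpy as np

def make_windows(series, look_back, horizon):
    """Build (input window, target) pairs for TSF: each sample uses
    `look_back` past steps to predict the value `horizon` steps ahead.
    series: (T,) or (T, n_features) array."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back + horizon - 1])
    return np.array(X), np.array(y)

# e.g. one day of hourly history (look_back=24) to predict 3 hours ahead,
# the setting the abstract says favours a 2- or 4-day window
series = np.sin(np.arange(1000) * 2 * np.pi / 24)
X, y = make_windows(series, look_back=24, horizon=3)
print(X.shape, y.shape)   # (974, 24) (974,)
```

All four compared models (RNN, LSTM, GRU, Transformer) consume exactly these (X, y) pairs; only the mapping from window to prediction differs.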
100 Interest of Pseudo-Noise Code Sequences of Different Lengths for Reducing Interference between Users of a CDMA Network
Authors: Nerguè Kassahan Kone, Souleymane Oumtanaga
Abstract:
The third generation (3G) of cellular systems adopted spread spectrum as the solution for data transmission in the physical layer. Contrary to IS-95 or CDMAOne (the spread-spectrum systems of the preceding generation), the new standard, called the Universal Mobile Telecommunications System (UMTS), uses long codes in the downlink. The system is conceived for voice communication and data transmission. The downlink is particularly important because of the asymmetrical demand for data, i.e., more data flows towards the mobiles than towards the base station. Moreover, UMTS uses orthogonal spreading with a variable spreading factor (OVSF, Orthogonal Variable Spreading Factor) in the downlink. This characteristic makes it possible to increase the data rate of one or more users by reducing their spreading factor without changing the spreading factor of other users. In the current UMTS standard, two techniques were proposed to increase downlink performance, transmit antenna diversity and space-time codes, but these two techniques combat only fading. The receiver proposed for the mobile station is the RAKE, but one can imagine a more sophisticated receiver, able to reduce the interference between users and the impact of colored noise and narrowband interference. In this context, where the users have synchronized long codes with variable spreading factors and the mobile is unaware of the other active codes/users, the use of pseudo-noise code sequences of different lengths is presented as one of the most appropriate solutions.
Keywords: DS-CDMA, multiple access interference, signal to interference-plus-noise ratio.
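The OVSF codes at the center of this discussion are generated by a simple binary tree construction: each code c spawns the children (c, c) and (c, -c), doubling the spreading factor at each level. The sketch below transcribes that rule; checking C @ C.T confirms that codes of equal spreading factor are mutually orthogonal, which is what lets one user's factor change without disturbing the others.

```python
import numpy as np

def ovsf_codes(sf):
    """All OVSF codes of spreading factor `sf` (a power of 2), built by
    the standard tree rule: each code c yields children (c, c), (c, -c)."""
    codes = np.array([[1]])
    while codes.shape[1] < sf:
        codes = np.vstack([np.hstack([codes, codes]),
                           np.hstack([codes, -codes])])
    return codes

C = ovsf_codes(8)
print(C @ C.T)   # 8 * identity: equal-SF codes are mutually orthogonal
```

Codes on the same root-to-leaf branch are not orthogonal to each other, which is why assigning a short (low-SF) code to one user blocks the whole subtree beneath it.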
99 Modeling the Influence of Socioeconomic and Land-Use Factors on Mode Choice: A Comparison of Riyadh, Saudi Arabia, and Melbourne, Australia
Authors: M. Alqhatani, S. Bajwa, S. Setunge
Abstract:
Metropolitan areas have suffered from traffic problems, which have steadily increased in many monocentric cities. Urban expansion, population growth, and road network development have resulted in a structural shift toward urban sprawl, increasing commuters' dependence on private modes of transport. This paper aims to model the influence of socioeconomic and land-use factors on mode choice using multinomial and nested logit models. Land-use patterns, such as residential, commercial, retail, educational and employment-related uses, affect the choice of mode and destination in the short and medium term; socioeconomic factors, such as age, gender, income, household size, and house type, also affect mode choice, while residential location is affected in the long term. Riyadh in Saudi Arabia and Melbourne in Australia were chosen as case studies: Riyadh is a car-dependent city with limited public transport, whereas Melbourne has good public transport but increasing car dependence. Aggregate-level land-use data and disaggregate-level individual, household, and journey-to-work data are used to determine the effects of land use and socioeconomic factors on mode choice. The model results determined that urban sprawl is the main factor affecting mode choice, together with income and house type.
Keywords: Socioeconomic, land use, mode choice, multinomial logit and nested logit.
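The multinomial logit core of these models reduces to a softmax over systematic utilities, where each utility is a linear function of the socioeconomic and land-use variables. A minimal sketch follows; the utility values are illustrative, and the nested-logit extension (correlated alternatives within nests) is not shown.

```python
import numpy as np

def mnl_probabilities(V):
    """Multinomial logit choice probabilities:
        P_i = exp(V_i) / sum_j exp(V_j),
    where V_i is the systematic utility of mode i."""
    V = np.asarray(V, float)
    e = np.exp(V - V.max())          # subtract max for numerical stability
    return e / e.sum()

# e.g. utilities for car, bus, train for one commuter
print(mnl_probabilities([1.2, 0.3, 0.1]))   # car dominates, as in Riyadh
```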