Search results for: complex fuzzy semigroups.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2446

286 Evaluation of Model and Performance of Fuel Cell Hybrid Electric Vehicle in Different Drive Cycles

Authors: Fathollah Ommi, Golnaz Pourabedin, Koros Nekofa

Abstract:

In recent years, fuel cell vehicles have been appearing rapidly all over the globe. In less than 10 years, fuel cell vehicles have gone from mere research novelties to operating prototypes and demonstration models. At the same time, government and industry in developed countries have teamed up to invest billions of dollars in partnerships intended to commercialize fuel cell vehicles within the early years of the 21st century. The purpose of this study is to evaluate the model and performance of a fuel cell hybrid electric vehicle over different drive cycles. The fuel cell system model developed in this work is a semi-experimental model that allows users to apply both theoretical and experimental relationships in a fuel cell system. The model can be used as part of a complex fuel cell vehicle model in the advanced vehicle simulator (ADVISOR). This work reveals that fuel consumption and energy efficiency vary across drive cycles. Higher acceleration and speed in a drive cycle lead to increased fuel consumption. In addition, the energy losses in a drive cycle are related to the power requested from the fuel cell system: parasitic power in different parts of the fuel cell system increases as the power request increases. Finally, most of the energy losses in a drive cycle occur in the fuel cell system, because the fuel cell stack produces most of the energy.

Keywords: Drive cycle, energy efficiency, energy consumption, fuel cell system.

285 Multi Task Scheme to Monitor Multivariate Environments Using Artificial Neural Network

Authors: K. Atashgar

Abstract:

When an assignable cause (or causes) manifests itself in a multivariate process and the process shifts to an out-of-control condition, a root-cause analysis should be initiated by quality engineers to identify and eliminate the assignable cause(s) affecting the process. A root-cause analysis in a multivariate process is more complex than in a univariate process. For a process involving several correlated variables, an effective root-cause analysis is possible only when all the required knowledge can be identified simultaneously: the out-of-control condition, the change point, and the variable(s) responsible for the out-of-control condition. Although the literature addresses different schemes for monitoring multivariate processes, few scientific reports focus on all of this required knowledge. To the best of the author’s knowledge, this is the first time a multi-task model based on an artificial neural network (ANN) has been reported that monitors all the required knowledge simultaneously for a multivariate process with more than two correlated quality characteristics. The performance of the proposed scheme is evaluated numerically for different step shifts affecting the mean vector. The average run length is used to investigate the performance of the proposed multi-task model. The simulation results indicate that the multi-task scheme provides all the required knowledge effectively.
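
The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea under assumed settings: an ANN is trained on simulated windows of a correlated three-variable process, with multi-label targets indicating which variable(s) received a step shift, so that a new window can be diagnosed directly. The covariance matrix, window length, and shift magnitudes are illustrative.

```python
# Minimal sketch (not the author's exact architecture): train an ANN on simulated
# mean-shift data so that, given a window of observations from a 3-variable
# correlated process, it flags which variable(s) are responsible for a shift.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.5, 0.3],
                [0.5, 1.0, 0.4],
                [0.3, 0.4, 1.0]])   # assumed correlation structure

def sample_window(shift, n=20):
    """Mean of a window of n observations with the given mean-shift vector."""
    x = rng.multivariate_normal(shift, cov, size=n)
    return x.mean(axis=0)

# Build a training set: label = binary indicator of which variables shifted.
X, y = [], []
for _ in range(5000):
    label = rng.integers(0, 2, size=3)             # which variables shift (multi-label)
    shift = label * rng.uniform(0.5, 3.0, size=3)  # step-shift magnitudes in sigma units
    X.append(sample_window(shift))
    y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(20, 20), max_iter=1000, random_state=0)
clf.fit(np.array(X), np.array(y))                  # multi-label training

# Diagnose a new window in which only the second variable has shifted by 2 sigma.
print(clf.predict([sample_window(np.array([0.0, 2.0, 0.0]))]))  # e.g. [[0 1 0]]
```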

Keywords: Artificial neural network, Multivariate process, Statistical process control, Change point.

284 Ice Load Measurements on Known Structures Using Image Processing Methods

Authors: Azam Fazelpour, Saeed R. Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

This study employs a method based on image analysis and structure information to detect ice accumulated on known structures. The icing of marine vessels and offshore structures causes significant reductions in their efficiency and creates unsafe working conditions. Image processing methods are used to measure ice loads automatically. Most image processing methods are developed based on analysis of captured images. In this method, ice loads on structures are calculated by defining the structure coordinates and processing the captured images. A pyramidal structure with nine cylindrical bars is designed as the known structure of the experimental setup. Unsymmetrical ice accumulated on the structure in a cold room represents the actual experimental case. Camera intrinsic and extrinsic parameters are used to define the structure coordinates in the image coordinate system according to the camera location and angle. A thresholding method is applied to the captured images to detect the iced structure in a binary image. The ice thickness of each element is calculated by combining the information from the binary image and the structure coordinates. Ice thicknesses of the structure elements are obtained by averaging ice diameters from different camera views. Comparison between ice load measurements using this method and the actual ice loads shows positive correlation with an acceptable range of error. The method can be applied to complex structures by defining the structure and camera coordinates.
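
As a rough illustration of the thresholding step only, the sketch below segments an iced structure in a grayscale image and measures the segmented width per row in pixels; the file name, the use of Otsu's method, and the conversion idea noted in the comments are assumptions, not the authors' exact procedure.

```python
# Minimal sketch of the thresholding step only (values and file names are
# illustrative, not the authors' calibration): segment the iced structure in a
# grayscale image and estimate a per-row object width in pixels.
import cv2
import numpy as np

img = cv2.imread("iced_structure.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
# Otsu thresholding separates the bright iced structure from the darker background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Pixel width of the segmented object in each image row; converting these widths
# to metric ice diameters would require the camera intrinsic/extrinsic parameters
# and the known structure coordinates described in the abstract.
widths_px = (binary > 0).sum(axis=1)
print("mean object width [px]:", widths_px[widths_px > 0].mean())
```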

Keywords: Camera calibration, ice detection, ice load measurements, image processing.

283 Visualization and Indexing of Spectral Databases

Authors: Tibor Kulcsar, Gabor Sarossy, Gabor Bereznai, Robert Auer, Janos Abonyi

Abstract:

On-line (near-infrared) spectroscopy is widely used to support the operation of complex process systems. Information extracted from a spectral database can be used to estimate unmeasured product properties and to monitor the operation of the process. These techniques are based on looking for similar spectra by nearest-neighbour algorithms and distance-based searching methods. Searching for nearest neighbours in the spectral space is an NP-hard problem; the computational complexity increases with the number of points in the discrete spectrum and the number of samples in the database. To reduce the calculation time, some form of indexing can be used. The main idea presented in this paper is to combine indexing and visualization techniques to reduce the computational requirements of estimation algorithms by providing a two-dimensional index that can also be used to visualize the structure of the spectral database. This 2D visualization of the spectral database not only supports the application of distance- and similarity-based techniques but also enables the use of advanced clustering and prediction algorithms based on the Delaunay tessellation of the mapped spectral space. This means that prediction does not have to work in the high-dimensional space but can be based on the mapped space as well. The results illustrate that the proposed method is able to segment (cluster) spectral databases and to detect outliers that are not suitable for instance-based learning algorithms.
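
The paper's specific two-dimensional mapping is not described in the abstract, so the sketch below uses PCA purely as a stand-in projection: spectra are mapped to 2D, and nearest-neighbour queries for a new on-line spectrum are answered in the mapped space. The data shapes and the choice of PCA are assumptions for illustration only.

```python
# Minimal sketch of 2D indexing of spectra (PCA is a stand-in projection; the
# paper's own mapping may differ): map spectra to 2D, then answer
# nearest-neighbour queries in the low-dimensional space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
spectra = rng.random((1000, 500))        # 1000 synthetic spectra, 500 wavelengths each

mapper = PCA(n_components=2).fit(spectra)
mapped = mapper.transform(spectra)       # 2D index / visualization coordinates

index = NearestNeighbors(n_neighbors=5).fit(mapped)

query = rng.random((1, 500))             # a new on-line spectrum
dist, idx = index.kneighbors(mapper.transform(query))
print("nearest stored spectra:", idx[0]) # candidates for property estimation
```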

Keywords: Indexing high-dimensional databases, dimensionality reduction, clustering, similarity, k-NN algorithm.

282 Quantifying the Second-Level Digital Divide on Sub-National Level

Authors: Vladimir Korovkin, Albert Park, Evgeny Kaganer

Abstract:

The digital divide, the gap in access to digital technologies and the socio-economic opportunities they create, is an important phenomenon of the 21st century. This gap may exist between countries, regions within a country, or socio-demographic groups, creating classes of “digital haves and have-nots”. While the first-level divide (the difference in opportunities to access digital networks) has been shown to diminish with time, the issues of the second-level divide (the difference in skills and usage of digital systems) and the third-level divide (the difference in effects obtained from digital technology) may grow. The paper offers a systematic review of the literature on the measurement of the digital divide, noting a certain conceptual stagnation due to the lack of effective instruments that would capture the complex nature of the phenomenon. As a result, many important concepts do not receive the empirical exploration they deserve. As a solution, the paper suggests a composite Digital Life Index that studies digital supply and demand separately across seven independent dimensions, providing for 14 subindices. The Index is based on Internet-borne data, a distinction from traditional research approaches that rely on official statistics or surveys. The application of the model to the study of the digital divide between Russian regions and between cities in China has brought promising results. The paper advances the existing methodological literature on the second-level digital divide and can also inform practical decision-making regarding strategies of national and regional digital development.

Keywords: Digital transformation, second-level digital divide, composite index, digital policy.

281 Automatic Road Network Recognition and Extraction for Urban Planning

Authors: D. B. L. Bong, K.C. Lai, A. Joseph

Abstract:

Road maps are used in numerous daily activities, but constructing and updating a road map whenever changes occur is a hassle. At Universiti Malaysia Sarawak, research on Automatic Road Extraction (ARE) was carried out to overcome the difficulties of updating road maps. The research started with satellite images (SI), in short the ARE-SI project. A Hybrid Simple Colour Space Segmentation and Edge Detection (Hybrid SCSS-EDGE) algorithm was developed to extract roads automatically from satellite images. In order to extract the road network accurately, the satellite image must be analyzed prior to the extraction process: the characteristics of the image elements are analyzed, and the relationships among them are determined. In this study, the road regions are extracted based on colour space elements and the edge details of roads. In addition, an edge detection method is applied to further filter out non-road regions. The extracted road regions are validated using a segmentation method. These results are valuable for building road maps and detecting changes in an existing road database. The proposed Hybrid SCSS-EDGE algorithm can perform these tasks fully automatically, where the user only needs to input a high-resolution satellite image and wait for the result. Moreover, the system can work on complex road networks and generate the extraction result in seconds.
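
The exact SCSS-EDGE rules are not given in the abstract; the sketch below only illustrates the general combination of colour-space segmentation and edge detection with OpenCV. The HSV range, Canny thresholds, and file names are placeholders, not the authors' parameters.

```python
# Minimal illustration of combining colour-space segmentation with edge detection
# (all thresholds are placeholders, not the Hybrid SCSS-EDGE parameters):
# keep low-saturation grey regions, then refine them with edge evidence.
import cv2
import numpy as np

img = cv2.imread("satellite_tile.png")                 # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Roads often appear as low-saturation, mid-brightness grey areas.
road_mask = cv2.inRange(hsv, (0, 0, 80), (180, 60, 220))

edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))   # widen edges into neighbourhoods

# Keep colour-segmented pixels that are supported by nearby edge evidence.
refined = cv2.bitwise_and(road_mask, edges)
cv2.imwrite("road_candidates.png", refined)
```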

Keywords: Road Network Recognition, Colour Space, Edge Detection, Urban Planning.

280 Comparative Safety Performance Evaluation of Profiled Deck Composite Slab from the Use of Slope-Intercept and Partial Shear Methods

Authors: Izian Abd. Karim, Kachalla Mohammed, Nora Farah A. A. Aziz, Law Teik Hua

Abstract:

The economy of use and ease of construction of profiled deck composite slabs are marred by the complex and uneconomical strength verification required for serviceability and general safety considerations. Factors such as shear span length, deck geometry and mechanical friction greatly influence the longitudinal shear strength, which determines the ultimate strength of a profiled deck composite slab, and a number of methods are available for its determination; the partial shear and slope-intercept (m-k) methods are the two provided in Eurocode 4. However, owing to the complexity of the shear behaviour of profiled deck composite slabs, the use of these methods to determine the load-carrying capacity of such slabs yields different and conflicting values. This, coupled with the time and cost constraints associated with strength verification, is a source of growing concern, and the issue is critical. Treating some of the known shear-strength-influencing factors as random variables, the load-carrying capacity violation of a profiled deck composite slab obtained from the two methods defined in Eurocode 4 is determined using a reliability approach and studied comparatively. The study reveals that the safety values from the m-k method compare favourably with those from the partial shear method.

Keywords: Composite slab, first order reliability method, longitudinal shear, partial shear connection, slope-intercept.

279 Sensitivity Analysis of Principal Stresses in Concrete Slab of Rigid Pavement Made From Recycled Materials

Authors: Aleš Florian, Lenka Ševelová

Abstract:

A complex sensitivity analysis of the stresses in the concrete slab of a real rigid pavement made from recycled materials is performed. The computational model of the pavement is a spatial (3D) model based on a nonlinear variant of the finite element method; it respects the structural nonlinearity, enables different arrangements of joints to be modelled, and allows the entire model to be subjected to thermal load. The interaction of adjacent slabs at the joints and the contact between the slab and the underlying layer are modelled with the help of special contact elements. Four concrete slabs separated by transverse and longitudinal joints, the additional structural layers, and the soil to a depth of about 3 m are modelled. The thicknesses of the individual layers, the physical and mechanical properties of the materials, the characteristics of the joints, and the temperatures of the upper and lower surfaces of the slabs are taken as random variables. The modern simulation technique Updated Latin Hypercube Sampling with 20 simulations is used. For the sensitivity analysis, a sensitivity coefficient based on the Spearman rank correlation coefficient is utilized. As a result, estimates of the influence of the random variability of the individual input variables on the random variability of the principal stresses σ1 and σ3 at 53 points on the upper and lower surfaces of the concrete slabs are obtained.
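
As a minimal sketch of this sampling-based sensitivity workflow, the code below replaces the 3D nonlinear FE pavement model with a placeholder function and uses 20 Latin Hypercube samples, ranking each input by its Spearman rank correlation with the resulting stress. The input names, bounds, and the placeholder response are illustrative assumptions, not the paper's data.

```python
# Minimal sketch: Latin Hypercube sample of the inputs, model evaluation, and
# Spearman-rank-based sensitivity coefficients (20 simulations as in the abstract).
import numpy as np
from scipy.stats import qmc, spearmanr

rng = np.random.default_rng(2)
names = ["slab_thickness", "E_concrete", "joint_stiffness", "dT_gradient"]
lower = np.array([0.20, 25e9, 1e7, -10.0])   # illustrative bounds, not the paper's data
upper = np.array([0.30, 40e9, 1e9,  10.0])

sample = qmc.LatinHypercube(d=len(names), seed=2).random(n=20)
X = qmc.scale(sample, lower, upper)

def principal_stress(x):
    """Placeholder for the 3D nonlinear FE pavement model."""
    return 2e6 / x[0] ** 2 + 1e-3 * x[1] + 5e4 * x[3] + rng.normal(0, 1e4)

s1 = np.array([principal_stress(x) for x in X])

for i, name in enumerate(names):
    rho = spearmanr(X[:, i], s1).correlation   # rank correlation as sensitivity measure
    print(f"{name:15s} Spearman rho = {rho:+.2f}")
```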

Keywords: Concrete, FEM, pavement, sensitivity, simulation.

278 Assessment Power and Frequency Oscillation Damping Using POD Controller and Proposed FOD Controller

Authors: Yahya Naderi, Tohid Rahimi, Babak Yousefi, Seyed Hossein Hosseini

Abstract:

Today's modern interconnected power systems are highly complex in nature, and one of the most important requirements during the operation of an electric power system is reliability and security. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) to major faults such as a three-phase short circuit, FACTS devices, which can control the network condition very quickly, are becoming popular. However, the capability of FACTS devices during a major fault can only be seen when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/SIMULINK using the SimPowerSystems (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which can change its reactance characteristic from inductive to capacitive at high speed, is an effective power flow controller. The tuning of the controller parameters can be performed using different methods; here, the capability of the genetic algorithm (GA) motivates its use in the controller parameter tuning process. In this paper, a POD controller is first used for power oscillation damping. In this configuration, however, the frequency oscillations are not properly damped. Therefore, a FOD controller tuned using the GA is applied, which damps out the frequency oscillations properly while keeping the power oscillation damping in a suitable condition.

Keywords: Power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA).

277 Relationship between Personality Traits and Postural Stability among Czech Military Combat Troops

Authors: K. Rusnakova, D. Gerych, M. Stehlik

Abstract:

Postural stability is a complex process involving the actions of biomechanical, motor, sensory, and central nervous system components. The numerous joint systems and muscles involved and the complexity of sporting movements and situations require perfect coordination of the body's movement patterns. To adapt to constantly changing situations in such a dynamic environment as physical performance, optimal input of information from the visual, vestibular, and somatosensory sensors is needed. Combat soldiers are required to perform physically and mentally demanding tasks in adverse conditions, and poor postural stability has been identified as a risk factor for lower extremity musculoskeletal injury. The aim of this study is to investigate whether some personality traits are related to static postural stability performance among soldiers of combat troops. The NEO personality inventory (NEO-PI-R) was used to identify personality traits, and the Nintendo Wii Balance Board was used to assess the static postural stability of the soldiers. Postural stability performance was assessed by changes in the center of pressure (CoP) and center of gravity (CoG). A posturographic test was performed for 60 s with eyes open during quiet upright standing. The results showed that facets of the neuroticism and conscientiousness personality traits were significantly correlated with the measured CoP and CoG parameters. This study can contribute to a better understanding of the relationship between personality traits and static postural stability, and the results can be used to optimize the training process at the individual level.

Keywords: Neuroticism, conscientiousness, postural stability, combat troops.

276 Latent Factors of Severity in Truck-Involved and Non-Truck-Involved Crashes on Freeways

Authors: Shin-Hyung Cho, Dong-Kyu Kim, Seung-Young Kho

Abstract:

Truck-involved crashes have higher severity than non-truck-involved crashes. There have been many studies on crash frequency and on the development of severity models, but those studies analyzed only the relationships between observed variables. To identify why more people are injured or killed when trucks are involved in a crash, the complex causal relationship between crash severity and risk factors must be quantified by adopting latent factors of crashes. The aim of this study was to develop a structural equation model for truck-involved and non-truck-involved crashes, including five latent variables, i.e. a crash factor, an environmental factor, a road factor, a driver factor, and a severity factor. To clarify the unique characteristics of truck-involved crashes compared to non-truck-involved crashes, a confirmatory analysis method was used. To develop the model, crash data were extracted from 10,083 crashes on Korean freeways from 2008 through 2014. The results showed that the most significant variable affecting crash severity is the crash factor, which can be expressed by the location, cause, and type of the crash. For non-truck-involved crashes, the crash and environmental factors increase crash severity; conversely, the road and driver factors tend to reduce it. For truck-involved crashes, the driver factor has a significant effect on crash severity, although its effect is slightly smaller than that of the crash factor. Multiple group analysis was employed to analyze the differences between the heterogeneous groups of drivers.

Keywords: Crash severity, structural equation modeling, truck-involved crashes, multiple group analysis, crash on freeway.

275 Building Information Modeling and Its Application in the State of Kuwait

Authors: Michael Gerges, Ograbe Ahiakwo, Martin Jaeger, Ahmad Asaad

Abstract:

Recent advances in Building Information Modeling (BIM), especially in the Middle East, have been remarkable. Dubai has taken a lead on this by making BIM adoption mandatory for all projects that involve complex architectural designs. This is because BIM is a dynamic process that assists all stakeholders in monitoring the project status throughout the different project phases with great transparency. It focuses on utilizing information technology to improve collaboration among project participants during the entire life cycle of the project, from the initial design to the supply chain, resource allocation, construction, and all productivity requirements. In view of this trend, the paper examines the extent to which BIM is applied in the State of Kuwait by exploring practitioners' perspectives on BIM, especially their views on the main barriers and main advantages. To this end, structured interviews based on questionnaires were carried out with a range of different construction professionals. The results revealed that practitioners perceive improved communication and mitigated project risks through the collaboration encouraged between project participants. However, it was also observed that the full implementation of BIM in the State of Kuwait requires concerted efforts to make clients demand BIM, to counteract resistance to change among construction professionals, and to offer more training for design team members. This paper forms part of an ongoing research effort on BIM and its application in the State of Kuwait, and it is on this basis that further research on the topic is proposed.

Keywords: Building Information Modeling, BIM, construction industry, Kuwait.

274 Foreign Languages and Employability in the EU

Authors: Paulina Pietrzyk-Kowalec

Abstract:

This paper presents the phenomenon of multilingualism becoming the norm rather than the exception in the European Union. It also seeks to describe the correlation between the command of foreign languages and employability. It is evident that the challenges today's societies face when it comes to employability are more and more complex. Thus, it is one of the crucial tasks of higher education to prepare its students to face this kind of complexity, understand its nuances, and have the capacity to adapt effectively to situations that are common in corporations based in the countries belonging to the EU. From this point of view, assessing the impact that European university students' command of foreign languages could have on the numerous business sectors becomes vital. It also involves raising the awareness of future professionals so that they understand the importance of mastering communicative skills in foreign languages that will meet the requirements of their prospective employers. The direct connection between higher education institutions and the world of business also allows companies to realize that they should rethink their recruitment and human resources procedures in order to take into account the importance of foreign languages. This article focuses on the objective of the multilingualism policy developed by the European Commission, which is to enable young people to master at least two foreign languages, a skill that is crucial to their future careers. The article emphasizes the existence of a crucial connection between the research conducted in higher education institutions and the business sector in order to reduce current qualification gaps.

Keywords: Cross-cultural communication, employability, human resources, language attitudes, multilingualism.

273 Implication of Taliban’s Recent Relationship with Neighboring Countries and Its Impact on the Current Peace Process

Authors: Lutfurahman Aftab

Abstract:

The Taliban's relationships with the neighboring countries are a complex political issue that local people interpret one way and politicians perceive differently; it is therefore a current issue that needs to be analyzed broadly and impartially. In this article, we investigate the Taliban's current relationships with the neighboring countries and look at the effects these relationships have on the current peace negotiations in Doha, which began on September 12, 2020. The issue of the Taliban and the current peace process has become the center of attention for most of the neighboring countries, and every country has opened a new page in its foreign policy: after the Taliban-US peace agreement, the neighboring countries are meticulously and closely observing the situation, and they believe that the Taliban is on the verge of tightening its grip on the future political power of Afghanistan. Every neighboring country of Afghanistan has political, economic, and social interests in this landlocked country. The Taliban's current role within the peace talks and its anticipated future position within the Afghan government will have great political, economic, and social implications for countries in the region as they assess their foreign policies. As these countries move to form closer ties with the Taliban, the government of Afghanistan is worried that this may hinder the peace process. Afghanistan has long blamed Pakistan for sheltering the Taliban and providing safe havens for terrorist groups, including Al Qaeda, and the recent visits of Taliban delegations to Islamabad, Pakistan, have raised concern among government officials in Afghanistan, who believe that the Taliban is not independent in its decisions and consults Pakistan's political leadership on every step it takes.

Keywords: Peace process, USA, Afghanistan, Taliban.

272 A Metric-Set and Model Suggestion for Better Software Project Cost Estimation

Authors: Murat Ayyıldız, Oya Kalıpsız, Sırma Yavuz

Abstract:

Software project effort estimation is frequently seen as complex and expensive for individual software engineers. Software production is in a crisis: it suffers from excessive costs and is often out of control. It has been suggested that software production is out of control because we do not measure, and you cannot control what you cannot measure. During the last decade, a number of studies on cost estimation have been conducted. Metric-set selection has a vital role in software cost estimation studies, but its importance has been ignored, especially in neural-network-based studies. In this study, we have explored the reasons for those disappointing results and implemented different neural network models using an augmented set of new metrics. The results obtained are compared with previous studies that used traditional metrics. To be able to make comparisons, two types of data have been used. The first part of the data is taken from the Constructive Cost Model (COCOMO'81), which is commonly used in previous studies, and the second part is collected according to the new metrics in a leading international company in Turkey. The accuracy of the selected metrics and the data samples is verified using statistical techniques. The model presented here is based on the Multi-Layer Perceptron (MLP). Another difficulty associated with cost estimation studies is the fact that data collection requires time and care. To make more thorough use of the collected samples, the k-fold cross-validation method is also implemented. It is concluded that, as long as an accurate and quantifiable set of metrics is defined and measured correctly, neural networks can be applied in software cost estimation studies with success.
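
A minimal sketch of this modelling set-up, with synthetic data standing in for the COCOMO'81 and company metric sets: an MLP effort estimator is evaluated with k-fold cross-validation. The data shapes, network size, and scoring metric are illustrative assumptions.

```python
# Minimal sketch: MLP-based effort estimation evaluated with k-fold cross-validation
# (synthetic data stands in for the COCOMO'81 and company metric sets).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.random((63, 8))                       # 63 projects, 8 metrics (illustrative)
effort = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 2, 63)  # synthetic effort

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, effort, cv=cv,
                         scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```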

Keywords: Software Metrics, Software Cost Estimation, Neural Network.

271 Liquid-Liquid Equilibrium for the Binary Mixtures of α-Pinene + Water and α-Terpineol + Water

Authors: Herti Utami, Sutijan, Roto, Wahyudi Budi Sediawan

Abstract:

α-Pinene is the main component of most turpentine oils. The hydration of α-pinene with acid catalysts leads to a complex mixture of monoterpenes. In order to obtain more valuable products, the α-pinene in the turpentine can be hydrated in dilute mineral acid solutions to produce α-terpineol. The design of separation processes requires information on phase equilibrium and related thermodynamic properties. This paper reports the results of a study on the liquid-liquid equilibrium (LLE) of the systems α-pinene + water and α-terpineol + water. Binary LLE data for the α-pinene + water and α-terpineol + water systems were determined experimentally at 301 K and atmospheric pressure. The two-component mixture was stirred for about 30 min and then left for about 2 h for complete phase separation. The composition of both phases was analyzed using a gas chromatograph. The experimental data were correlated by considering both the NRTL and UNIQUAC activity coefficient models. The LLE data for the α-pinene + water and α-terpineol + water systems were correlated successfully by the NRTL model, whereas the experimental data were not satisfactorily fitted by the UNIQUAC model. Between the experimental and calculated mole fractions, the NRTL model correlates the LLE data for the α-pinene + water system at 301 K with an RMSD of 0.0404% (α = 0.3) and 0.0058% (α = 0.61), and the LLE data for the α-terpineol + water system at 301 K with an RMSD of 0.1487% (α = 0.3) and 0.0032% (α = 0.6).
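
For reference, the binary NRTL activity-coefficient expressions used in such a correlation can be written compactly; the sketch below evaluates them for a single composition. The τ values are placeholders, not the fitted parameters from this work.

```python
# Minimal sketch of the binary NRTL activity-coefficient model used for the
# correlation (tau values below are placeholders, not the fitted parameters).
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha):
    """Return (gamma1, gamma2) for a binary mixture from the NRTL model."""
    x2 = 1.0 - x1
    G12 = np.exp(-alpha * tau12)
    G21 = np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

# Example evaluation; in an LLE fit, tau12 and tau21 would be regressed so that
# x1_I * gamma1_I = x1_II * gamma1_II in the two coexisting liquid phases.
print(nrtl_binary(x1=0.01, tau12=3.5, tau21=2.0, alpha=0.3))
```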

Keywords: α-Pinene, α-Terpineol, liquid-liquid equilibrium, NRTL model, UNIQUAC model.

270 Design of QFT-Based Self-Tuning Deadbeat Controller

Authors: H. Mansor, S. B. Mohd Noor

Abstract:

This paper presents a design method for a self-tuning Quantitative Feedback Theory (QFT) controller using an improved deadbeat control algorithm. QFT is a technique to achieve robust control with pre-defined specifications, whereas deadbeat is an algorithm that brings the output to steady state in a minimum number of steps. Nevertheless, there are usually large peaks in the deadbeat response. By integrating the QFT specifications into the deadbeat algorithm, the large peaks can be tolerated. On the other hand, merging QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. By combining the QFT-based deadbeat algorithm with an adaptive element, a superior controller, called the self-tuning QFT-based deadbeat controller, can be achieved. An output response that is fast, robust, and adaptive is expected. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed. The grain drying process is very complex, with highly nonlinear behaviour and long delays, and it is affected by environmental changes and disturbances. Performance comparisons have been carried out between the proposed self-tuning QFT-based deadbeat controller and the standard QFT and standard deadbeat controllers. The efficiency of the self-tuning QFT-based deadbeat controller has been demonstrated by the test results: the controller parameters are updated online, and the percentage overshoot and settling time are reduced, especially when there are variations in the plant.

Keywords: Deadbeat control, quantitative feedback theory (QFT), robust control, self-tuning control.

269 An Overview of Electronic Waste as Aggregate in Concrete

Authors: S. R. Shamili, C. Natarajan, J. Karthikeyan

Abstract:

The rapid growth of the world population and widespread urbanization have remarkably increased the development of the construction industry, which has caused a huge demand for sand and gravel. Environmental problems occur when the rate of extraction of sand, gravel, and other materials exceeds the rate at which natural resources are generated; therefore, an alternative source is essential to replace the materials used in concrete. Nowadays, electronic products have become an integral part of daily life, providing more comfort, security, and ease of information exchange. These electronic waste (E-Waste) materials raise serious human health concerns and require extreme care in their disposal to avoid any adverse impacts. The disposal or dumping of E-Waste also causes major issues because it is highly complex to handle and often contains highly toxic chemicals such as lead, cadmium, mercury, beryllium, brominated flame retardants (BFRs), polyvinyl chloride (PVC), and phosphorus compounds. Hence, E-Waste can be incorporated into concrete to support a more sustainable environment. This paper deals with the composition, preparation, properties, and classification of E-Waste. Using E-Waste in this way avoids dumping it in landfills while conserving natural aggregate resources and providing a better environmental option. The paper also provides a detailed literature review on the behaviour of concrete incorporating E-Waste. Much research shows a strong possibility of using E-Waste as a substitute for aggregates, which would eventually reduce the use of natural aggregates in concrete.

Keywords: Disposal, electronic waste, landfill, toxic chemicals.

268 A Cost Effective Approach to Develop Mid-size Enterprise Software Adopted the Waterfall Model

Authors: M. N. Hasnine, M. K. H. Chayon, M. M. Rahman

Abstract:

Organizational tendencies towards computer-based information processing have been clearly observed in third-world countries. Many enterprises are taking major initiatives towards a computerized working environment because of the massive benefits of computer-based information processing. However, designing and developing information resource management software for small and mid-size enterprises within budget and under strict deadlines is always challenging for software engineers. Therefore, we introduce an approach to designing mid-size enterprise software in a cost-effective way using the Waterfall model, one of the software development life cycle (SDLC) models. To fulfil the research objectives, in this study we developed a mid-size enterprise software application named “BSK Management System” that assists enterprise software clients with information resource management and performs complex organizational tasks. The Waterfall model phases have been applied to ensure that all functions, user requirements, strategic goals, and objectives are met. In addition, a Rich Picture, Structured English, and a Data Dictionary have been implemented and investigated properly in an engineering manner. Furthermore, an assessment survey with 20 participants has been conducted to investigate the usability and performance of the proposed software. The survey results indicated that our system features simple interfaces, easy operation and maintenance, quick processing, and reliable and accurate transactions.

Keywords: End-user Application Development, Enterprise Software Design, Information Resource Management, Usability.

267 A Corpus-Based Approach to Understanding Market Access in Fisheries and Aquaculture: A Systematic Literature Review

Authors: Cheryl Marie Cordeiro

Abstract:

Although fisheries and aquaculture studies might seem marginal to international business (IB) studies in general, fisheries and aquaculture IB (FAIB) management is currently facing increasing pressure to meet global demand and consumption for fish in the coming decades. To address this challenge in part, the purpose of this systematic literature review (SLR) study is to investigate the use of the term ‘market access’ in its context of use in the generic literature and business sector discourse, in comparison to the more specific literature and discourse in fisheries, aquaculture and seafood. This SLR aims to uncover the knowledge and interest gaps between the academic subject discourses and business sector practices. Corpus-driven in methodology and using a triangulation of three text analysis tools, namely AntConc, VOSviewer and Web of Science (WoS) analytics, the SLR results indicate a gap in conceptual knowledge and business practices in how ‘market access’ is conceived and used in the context of the pharmaceutical healthcare industry compared with FAIB research and practice. While it is acknowledged that the product orientation of different business sectors might differ, this SLR study works with the assumption that both business sectors are global in orientation. These business sectors are complex in their operations from product to market. This SLR suggests a conceptual model for understanding the challenges and potential barriers, as well as avenues for solutions, in developing market access for FAIB.

Keywords: Market access, fisheries and aquaculture, international business, systematic literature review.

266 Comparison of Regime Transition between Ellipsoidal and Spherical Particle Assemblies in a Model Shear Cell

Authors: M. Hossain, H. P. Zhu, A. B. Yu

Abstract:

This paper presents a numerical investigation of the regime transition of flows of ellipsoidal particles and a comparison with that of a spherical particle assembly. Particle assemblies consisting of spherical particles and of ellipsoidal particles with a 2.5:1 aspect ratio are examined separately under similar flow conditions in a shear cell model developed numerically based on the discrete element method. Correlations among the elastically scaled stress, kinetically scaled stress, coordination number, and volume fraction are investigated and show important similarities and differences between the spherical and ellipsoidal particle assemblies. In particular, the volume fractions at the points of regime transition are identified for both types of particles. It is found that, compared with the spherical particle assembly, the ellipsoidal particle assembly has a higher volume fraction at the quasi-static to intermediate regime transition and a lower volume fraction at the intermediate to inertial regime transition. Finally, the relationship between coordination number and volume fraction shows strikingly distinct features for the two cases, suggesting that, unlike for spherical particles, the effect of the shear rate on the coordination number is not significant for ellipsoidal particles. This work provides a glimpse of ongoing work in one of the most attractive areas of research in this field and has wide prospects for understanding the rheology of more complex-shaped particles, building on the strong basis of simpler spherical particle rheology.

Keywords: Discrete element method, granular rheology, non-spherical particles, regime transition.

265 A Medical Vulnerability Scoring System Incorporating Health and Data Sensitivity Metrics

Authors: Nadir A. Carreón, Christa Sonderer, Aakarsh Rao, Roman Lysecky

Abstract:

With the advent of complex software and increased connectivity, the security of life-critical medical devices is becoming an increasing concern, particularly because of their direct impact on human safety. Security is essential, but it is impossible to develop completely secure and impenetrable systems at design time. Therefore, it is important to assess the potential impact on security and safety of exploiting a vulnerability in such critical medical systems. The common vulnerability scoring system (CVSS) calculates the severity of exploitable vulnerabilities. However, for medical devices, it does not consider the unique challenges of impacts on human health and privacy. Thus, a medical device on which a human life depends (e.g., pacemakers, insulin pumps) can score very low, while a system on which a human life does not depend (e.g., hospital archiving systems) might score very high. In this paper, we present a Medical Vulnerability Scoring System (MVSS) that extends CVSS to address the health and privacy concerns of medical devices. We propose incorporating two new parameters, namely health impact and sensitivity impact. Sensitivity refers to the type of information that can be stolen from the device, and health represents the impact on the safety of the patient if the vulnerability is exploited (e.g., potential harm, life-threatening). We evaluate 15 different known vulnerabilities in medical devices and compare MVSS against two state-of-the-art medical-device-oriented vulnerability scoring systems and the foundational CVSS.
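
The abstract does not give the MVSS equations, so the sketch below is purely a hypothetical illustration of how health-impact and sensitivity-impact parameters could modify a CVSS-like base score; all category names and weights are invented for the example.

```python
# Hypothetical illustration only: the abstract does not give the MVSS formula, so
# this simply shows one way health- and sensitivity-impact parameters could scale
# a CVSS-like base score (all categories and weights are invented for the example).
HEALTH_IMPACT = {"none": 0.0, "harm": 0.5, "life_threatening": 1.0}
SENSITIVITY_IMPACT = {"none": 0.0, "personal": 0.4, "medical_record": 0.8, "credentials": 1.0}

def mvss_like_score(cvss_base, health, sensitivity, w_health=0.6, w_sens=0.4):
    """Scale a CVSS-style base score (0-10) by medical health/privacy impact."""
    modifier = 1.0 + w_health * HEALTH_IMPACT[health] + w_sens * SENSITIVITY_IMPACT[sensitivity]
    return min(10.0, round(cvss_base * modifier, 1))

# An insulin-pump flaw with a modest base score is pushed up by its
# life-threatening health impact; an archiving-system flaw is not.
print(mvss_like_score(4.0, "life_threatening", "personal"))   # 7.0
print(mvss_like_score(4.0, "none", "medical_record"))         # 5.3
```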

Keywords: Common vulnerability system, medical devices, medical device security, vulnerabilities.

264 Elegant: An Intuitive Software Tool for Interactive Learning of Power System Analysis

Authors: Eduardo N. Velloso, Fernando M. N. Dantas, Luciano S. Barros

Abstract:

A common complaint from power system analysis students lies in the overly complex tools they need to learn and use just to simulate very basic systems or to check the answers to power system calculations. The most basic power system studies are power-flow solutions and short-circuit calculations. This paper presents a simple tool with an intuitive interface to perform both of these studies and assesses its performance in comparison with existing commercial solutions. With this in mind, Elegant is a pure Python software tool for learning power system analysis, developed for undergraduate and graduate students. It solves the power-flow problem by iterative numerical methods and calculates bolted short-circuit fault currents by modeling the network in the domain of symmetrical components. Elegant can be used through a user-friendly Graphical User Interface (GUI) and automatically generates human-readable reports of the simulation results. The tool is exemplified using a typical Brazilian regional system with 18 buses. The study also reports a comparative experiment in which one undergraduate and four graduate students attempted the same problem using both Elegant and a commercial tool. It was found that Elegant significantly reduces the time and labor involved in basic power system simulations while still providing some insight into real power system designs.
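
To make the kind of computation concrete, here is a minimal Gauss-Seidel power-flow iteration for a two-bus system (one slack bus and one PQ load bus). This is only an illustration of the iterative solution Elegant automates, not Elegant's own code; the line impedance and load are arbitrary per-unit values.

```python
# Minimal Gauss-Seidel power-flow iteration for a 2-bus system (slack + PQ load).
import numpy as np

y_line = 1.0 / (0.01 + 0.05j)                 # series admittance of the single line (pu)
Y = np.array([[ y_line, -y_line],
              [-y_line,  y_line]])            # bus admittance matrix

V = np.array([1.0 + 0j, 1.0 + 0j])            # bus 0 = slack (fixed), bus 1 = initial guess
S_load = 0.8 + 0.4j                           # complex load at bus 1 (pu)

for _ in range(50):
    # Gauss-Seidel update for the PQ bus: V1 = (S1*/V1* - Y10*V0) / Y11,
    # where S1 = -S_load is the injected power at the load bus.
    V[1] = ((-S_load).conjugate() / V[1].conjugate() - Y[1, 0] * V[0]) / Y[1, 1]

print("Load-bus voltage:", abs(V[1]), "pu, angle", np.degrees(np.angle(V[1])), "deg")
```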

Keywords: Free- and open-source software, power-flow, power system analysis, Python, short-circuit.

263 Evaluation of Non-Staggered Body-Fitted Grid Based Solution Method in Application to Supercritical Fluid Flows

Authors: Suresh Sahu, Abhijeet M. Vaidya, Naresh K. Maheshwari

Abstract:

Efforts to understand the heat transfer behavior of supercritical water in the supercritical water-cooled reactor (SCWR) are ongoing worldwide to help meet future energy demand. The higher thermal efficiency of these reactors compared to conventional nuclear reactors is one of the driving forces attracting the attention of nuclear scientists. In this work, a solution procedure is described for solving supercritical fluid flow problems in complex geometries. The solution procedure is based on a non-staggered grid. All governing equations are discretized by the finite volume method (FVM) in a curvilinear coordinate system. Convective terms are discretized by a first-order upwind scheme, and a central difference approximation is used to discretize the diffusive parts. The k-ε turbulence model with the standard wall function is employed. The SIMPLE solution procedure is implemented for the curvilinear coordinate system. Based on this solution method, a 3-D computational fluid dynamics (CFD) code has been developed. In order to demonstrate the capability of this CFD code for supercritical fluid flows, heat transfer to supercritical water in circular tubes is considered as a test problem. Results obtained with the code are compared with experimental results reported in the literature.

Keywords: Curvilinear coordinate, body-fitted mesh, momentum interpolation, non-staggered grid, supercritical fluids.

262 Topographical Image Transference Compatibility Generated Through Moiré Technique Applying Parametrical Softwares of Computer Assisted Design

Authors: M. V. G. Silva, J. Gazzola, I. M. Dal Fabbro, A. C. L. Lino

Abstract:

Computer-aided design counts on the support of parametric software in the design of machine components as well as of any other pieces of interest. The complexity of the element under study sometimes poses certain difficulties for computer design, or may even generate mistakes in the final body conception. Reverse engineering techniques are based on the transformation of images of an already conceived body into a matrix of points that can be visualized by the design software. The literature exhibits several techniques for obtaining the dimensional fields of machine components, such as contact instruments (MMC), calipers, and optical methods such as laser scanners, holograms, and moiré methods. The objective of this research work was to analyze the moiré technique as an instrument of reverse engineering applied to bodies of non-complex geometry, such as simple solid figures, creating matrices of points. These matrices were forwarded to a parametric software package, SolidWorks, to generate the virtual object. The volume obtained by mechanical means, i.e. by caliper, the volume obtained through the moiré method, and the volume generated by the SolidWorks software were compared and found to be in close agreement. This research work suggests the application of phase-shifting moiré methods as an instrument of reverse engineering, serving also to support the design of farm machinery elements.

Keywords: Reverse engineering, Moiré technique, three dimensional image generation.

261 Analysis and Research of Two-Level Scheduling Profile for Open Real-Time System

Authors: Yongxian Jin, Jingzhou Huang

Abstract:

In an open real-time system environment, the coexistence of different kinds of real-time and non-real-time applications confronts the system scheduling mechanism with new requirements and challenges. A two-level scheduling scheme for open real-time systems is introduced, and it is pointed out that when hard and soft real-time applications are scheduled non-distinctively, as the same type of real-time application, the Quality of Service (QoS) cannot be guaranteed. The scheme has two flaws. First, it cannot differentiate the scheduling priorities of hard and soft real-time applications; that is to say, it neglects the characteristic differences between hard and soft real-time applications, so it does not suit a more complex real-time environment. Second, the worst-case execution time of soft real-time applications cannot be predicted exactly, so it is not worthwhile to spend heavily in order to ensure that no soft real-time application misses its deadline, and doing so may waste resources. In order to solve this problem, a novel two-level real-time scheduling mechanism (including a scheduling profile and a scheduling algorithm) that adds a process for dealing with soft real-time applications is proposed. Finally, we verify the real-time scheduling mechanism both theoretically and experimentally. The results indicate that our scheduling mechanism achieves the following objectives. (1) It reflects the difference in priority when scheduling hard and soft real-time applications. (2) It ensures the schedulability of hard real-time applications, that is, their rate of missing deadlines is 0. (3) The overall rate of missing deadlines of soft real-time applications can be kept below 1. (4) The deadline of a non-real-time application is not set, whereas the scheduling algorithm used by the server can avoid the “starvation” of jobs and increase QoS. As a result, our scheduling mechanism is more compatible with different types of applications and can be applied more widely.

Keywords: Hard real-time, two-level scheduling profile, open real-time system, non-distinctive schedule, soft real-time.

260 The Development of an Integrity Cultivating Module in School-Based Assessment among Malaysian Teachers: A Research Methodology

Authors: Eftah Bte. Moh Hj Abdullah, Abd Aziz Bin Abd Shukor, Norazilawati Binti Abdullah, Rahimah Adam, Othman Bin Lebar

Abstract:

The competency and integrity required for a better understanding and practice of School-Based Assessment (PBS) come not only from the process but also from providing the support, or ‘scaffolding’, for teachers to recognize the student as a learner, improve their self-assessment skills, and understand the daily teaching plan and its constructive alignment of curriculum, pedagogy and assessment. The cultivation of integrity in PBS among teachers is geared towards encouraging them to become committed and dedicated to implementing assessments in a serious, efficient manner, thus moving away from the usual teacher-focused approach to a student-focused approach. Teachers show their integrity through their professional commitment, responsibility and actions. The module based on the cultivation of integrity in PBS among Malaysian teachers aims to broaden the guidance support for teachers (embedded in the training); it consists of various domains to enable better evaluation of complex assessment tasks and the construction of suitable instruments for measuring the relevant cognitive, affective and psychomotor domains to describe students' achievement. An instrument for integrity cultivation in PBS has been developed and validated for measuring the effectiveness of the constructed module. The module is targeted towards assisting staff in the Education Ministry, especially principal trainers, teachers, headmasters and education officers, in acquiring effective interventions for improving PBS assessors' integrity and competency.

Keywords: School-based assessment, assessment competency, integrity cultivation, professional commitment, module.

259 Removal of Cationic Heavy Metal and HOC from Soil-Washed Water Using Activated Carbon

Authors: Chi Kyu Ahn, Young Mi Kim, Seung Han Woo, Jong Moon Park

Abstract:

Soil washing with a surfactant solution is a potential technology for the rapid removal of hydrophobic organic compounds (HOC) from soil. However, a large amount of washed water is produced during operation, and this should be treated effectively by proper methods. The washed water from a site with combined HOC and heavy metal contamination may contain high amounts of pollutants such as HOC and heavy metals as well as the used surfactant. The heavy metals in the soil-washed water have toxic effects on microbial activity, so they should be removed from the washed water before it proceeds to a biological wastewater treatment system. Moreover, the used surfactant solutions need to be recovered to reduce the soil washing operation cost. In order to remove the heavy metals and HOC from soil-washed water simultaneously, activated carbon (AC) was used in the present study. In an anionic-nonionic surfactant mixed solution, Cd(II) and phenanthrene (PHE) were effectively removed by adsorption on activated carbon. The adsorption capacity for Cd(II) increased from 0.027 mmol-Cd/g-AC to 0.142 mmol-Cd/g-AC as the mole ratio of SDS increased in the presence of PHE. The adsorptive capacity for PHE also increased with the SDS mole ratio, owing to the decrease in the molar solubilization ratio (MSR) for PHE in an anionic-nonionic surfactant mixture. The simultaneous adsorption of HOC and cationic heavy metals using activated carbon could be a useful method for surfactant recovery and for reducing heavy metal toxicity in a surfactant-enhanced soil washing process.

Keywords: Activated carbon, anionic-nonionic surfactant mixture, cationic heavy metal, HOC, soil washing.

258 Structural Analysis and Strengthening of the National Youth Foundation Building in Igoumenitsa, Greece

Authors: Chrysanthos Maraveas, Argiris Plesias, Garyfalia G. Triantafyllou, Konstantinos Petronikolos

Abstract:

The current paper presents a structural assessment and proposals for retrofit of the National Youth Foundation Building, an existing reinforced concrete (RC) building in the city of Igoumenitsa, Greece. The building is scheduled to be renovated in order to create a Municipal Cultural Center. The bearing capacity and structural integrity have been investigated in relation to the provisions and requirements of the Greek Retrofitting Code (KAN.EPE.) and European Standards (Eurocodes). The capacity of the existing concrete structure that makes up the two central buildings in the complex (buildings II and IV) has been evaluated both in its present form and after including several proposed architectural interventions. The structural system consists of spatial frames of columns and beams that have been simulated using beam elements. Some RC elements of the buildings have been strengthened in the past by means of concrete jacketing and have had cracks sealed with epoxy injections. Static-nonlinear analysis (Pushover) has been used to assess the seismic performance of the two structures with regard to performance level B1 from KAN.EPE. Retrofitting scenarios are proposed for the two buildings, including type Λ steel bracings and placement of concrete shear walls in the transverse direction in order to achieve the design-specification deformation in each applicable situation, improve the seismic performance, and reduce the number of interventions required.

Keywords: Earthquake resistance, pushover analysis, reinforced concrete, retrofit, strengthening.

257 Vehicle Gearbox Fault Diagnosis Based On Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on damage to gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information that is generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, which is useful when more than one family of sidebands is present at the same time. The concepts of the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers; the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. In addition, a method for gear fault diagnosis based on the order cepstrum is presented. The procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
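
As a minimal sketch of the underlying computation (with a synthetic signal standing in for the measured gearbox vibration), the real cepstrum is obtained as the inverse Fourier transform of the logarithm of the magnitude spectrum; periodic sideband families then appear as peaks at the corresponding quefrency. The sampling rate, gear-mesh frequency, and modulation frequency below are illustrative.

```python
# Minimal sketch of a real cepstrum from a vibration signal (synthetic here; in
# the paper it comes from accelerometers on the gearbox). Sideband families in
# the spectrum show up as cepstrum peaks at the corresponding quefrency.
import numpy as np

fs = 20000                                    # sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
mesh = np.sin(2 * np.pi * 1000 * t)           # gear-mesh tone at 1 kHz
fault = 1 + 0.5 * np.sin(2 * np.pi * 25 * t)  # amplitude modulation by a faulty gear at 25 Hz
signal = fault * mesh + 0.1 * np.random.default_rng(4).normal(size=t.size)

spectrum = np.fft.rfft(signal)
cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))   # real cepstrum

quefrency = np.arange(cepstrum.size) / fs
peak = quefrency[np.argmax(cepstrum[100:2000]) + 100]       # skip the low-quefrency envelope
print(f"dominant quefrency ~ {peak*1000:.1f} ms -> modulation at ~ {1/peak:.1f} Hz")
```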

Keywords: Cepstrum analysis, fault diagnosis, gearbox.
