Search results for: SIMPLE
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1327

247 Article 5 (3) of the Brussels I Regulation and Its Applicability in the Case of Intellectual Property Rights Infringement on the Internet

Authors: Nataliya Hitsevich

Abstract:

Article 5(3) of the Brussels I Regulation provides that a person domiciled in a Member State may be sued in another Member State in matters relating to tort, delict or quasi-delict, in the courts for the place where the harmful event occurred or may occur. For a number of years, Article 5(3) of the Brussels I Regulation has been at the centre of the debate regarding intellectual property rights infringement over the Internet. Nothing has been done to adapt the provisions relating to non-Internet cases of infringement of intellectual property rights to the context of the Internet. The author’s findings indicate that, in the case of intellectual property rights infringement on the Internet, the plaintiff has the option to sue either in the court of the Member State of the event giving rise to the damage (where the publisher of the newspaper is established) or in the court of the Member State where the damage occurred (where the defamatory article is distributed). However, it must be admitted that, whilst infringement over the Internet has some similarity to multi-State defamation by means of newspapers, the position is not entirely analogous due to the cross-border nature of the Internet. A simple example that illustrates its contentious nature is a defamatory statement published on a website accessible in different Member States and available in different languages. Therefore, two questions need to be answered: how do these traditional jurisdictional rules apply in the case of intellectual property rights infringement over the Internet, and should these traditional jurisdictional rules be modified?

Keywords: Intellectual property rights, infringement, Internet, jurisdiction.

246 Effect of Transmission Codes on Hybrid SC/MRC Diversity Reception MQAM system over Rayleigh Fading Channels

Authors: J.S. Ubhi, M.S. Patterh, T.S. Kamal

Abstract:

In this paper, the effect of transmission codes on the performance of coherent square M-ary quadrature amplitude modulation (CSMQAM) under hybrid selection/maximal-ratio combining (H-S/MRC) diversity is analysed. The fading channels are modeled as frequency non-selective, slow, independent and identically distributed Rayleigh fading channels corrupted by additive white Gaussian noise (AWGN). The results for coded MQAM are computed numerically for the case of the (24,12) extended Golay code and compared with uncoded MQAM under H-S/MRC diversity by plotting error probabilities versus average signal-to-noise ratio (SNR) for various values of L and N, in order to examine the improvement in the performance of the digital communications system as the number of selected diversity branches is increased. The results for no diversity, conventional SC and Lth order MRC schemes are also plotted for comparison. The closed-form analytical results derived in this paper are sufficiently simple and can therefore be computed numerically without any approximations. The analytical results presented in this paper are expected to provide useful information needed for the design and analysis of digital communication systems over wireless fading channels.
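As a rough illustration of the kind of computation behind such error-rate curves, the hedged sketch below runs a small Monte Carlo estimate of uncoded 16-QAM symbol error rate with plain L-branch MRC over i.i.d. Rayleigh fading. It is a simplified stand-in (no Golay coding, no hybrid SC/MRC, and the constellation size, branch count and SNR are illustrative assumptions), not the paper's closed-form analysis.

```python
import numpy as np

# Hedged sketch: Monte Carlo SER of uncoded 16-QAM with L-branch MRC over
# i.i.d. Rayleigh fading. Illustrative only; the paper analyses coded MQAM
# under hybrid SC/MRC with closed-form expressions.
rng = np.random.default_rng(0)
M, L, n_sym = 16, 3, 200_000           # constellation size, branches, symbols
snr_db = 15.0                          # average SNR per branch (assumed value)

# Square 16-QAM constellation, normalised to unit average energy
m = int(np.sqrt(M))
re, im = np.meshgrid(np.arange(m), np.arange(m))
const = (2 * re - m + 1) + 1j * (2 * im - m + 1)
const = const.ravel() / np.sqrt(np.mean(np.abs(const.ravel()) ** 2))

tx_idx = rng.integers(0, M, n_sym)
s = const[tx_idx]

h = (rng.standard_normal((L, n_sym)) + 1j * rng.standard_normal((L, n_sym))) / np.sqrt(2)
noise_var = 10 ** (-snr_db / 10)
n = np.sqrt(noise_var / 2) * (rng.standard_normal((L, n_sym)) + 1j * rng.standard_normal((L, n_sym)))
r = h * s + n                          # received signal on each branch

# Maximal-ratio combining and minimum-distance detection
y = np.sum(np.conj(h) * r, axis=0) / np.sum(np.abs(h) ** 2, axis=0)
rx_idx = np.argmin(np.abs(y[:, None] - const[None, :]), axis=1)
print("estimated SER:", np.mean(rx_idx != tx_idx))
```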

Keywords: Error probability, diversity reception, Rayleigh fading channels, wireless digital communications.

245 Double Reduction of Ada-ECATNet Representation using Rewriting Logic

Authors: Noura Boudiaf, Allaoua Chaoui

Abstract:

One major difficulty that faces developers of concurrent and distributed software is the analysis of concurrency-based faults such as deadlocks. Petri nets are used extensively in the verification of correctness of concurrent programs. ECATNets [2] are a category of algebraic Petri nets based on a sound combination of algebraic abstract types and high-level Petri nets. ECATNets have 'sound' and 'complete' semantics because of their integration in rewriting logic [12] and its programming language Maude [13]. Rewriting logic is considered one of the most powerful logics in terms of description, verification and programming of concurrent systems. We proposed in [4] a method for translating Ada-95 tasking programs to the ECATNets formalism (Ada-ECATNet). In this paper, we show that the ECATNets formalism provides a more compact translation for Ada programs compared to other approaches based on simple Petri nets or Colored Petri nets (CPNs). Such a translation not only reduces the size of the program, but also reduces the number of program states. We also show how this compact Ada-ECATNet may be reduced again by applying reduction rules to it. This double reduction of the Ada-ECATNet permits a considerable minimization of the memory space and run time of the corresponding Maude program.

Keywords: Ada tasking, ECATNets, Algebraic Petri Nets, Compact Representation, Analysis, Rewriting Logic, Maude.

244 Budget and the Performance of Public Enterprises: A Study of Selected Public Enterprises in Nasarawa State Nigeria (2009-2013)

Authors: Dalhatu, Musa Yusha’u, Shuaibu Sidi Safiyanu, Haliru Musa Hussaini

Abstract:

This study examined the budget and performance of public enterprises in Nasarawa State, Nigeria over the period 2009-2013. The study utilized secondary sources of data obtained from four selected parastatals’ budget allocations and revenue generation for the period under review. The simple correlation coefficient was used to analyze the extent of the relationship between budget allocation and revenue generation of the parastatals. Findings revealed varying results. There was a positive but weak correlation (0.21) between the expenditure and revenue of the Nasarawa Investment and Property Development Company (NIPDC). The study further revealed negative relationships, of varying strength, between the revenue and expenditure of the following parastatals over the period under review: Nasarawa State Water Board, -0.27 (weak); Nasarawa State Broadcasting Service, -0.52 (strong); and Nasarawa State College of Agriculture, -0.36 (weak). The study therefore recommends that government should increase its investment in NIPDC to enhance efficiency and profitability. It also recommends that government should strengthen its fiscal responsibility, accountability and transparency in public parastatals.
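For readers unfamiliar with the simple correlation coefficient used here, the short sketch below computes a Pearson correlation between annual budget allocations and revenues; the figures are hypothetical placeholders, not the parastatals' actual data.

```python
import numpy as np

# Hedged sketch: Pearson correlation between budget allocation (expenditure)
# and revenue over five years. The numbers below are hypothetical, not the
# Nasarawa State parastatals' actual figures.
budget = np.array([120.0, 135.0, 150.0, 160.0, 175.0])   # allocations, 2009-2013
revenue = np.array([ 80.0,  95.0,  90.0, 110.0, 105.0])  # revenues generated

r = np.corrcoef(budget, revenue)[0, 1]
print(f"Pearson correlation coefficient: {r:.2f}")
```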

Keywords: Allocation, Budget, Public Enterprises, Parastatals, Performance.

243 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines

Authors: Mona Soliman Habib

Abstract:

This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is the biomedical publications domain, especially selected due to its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing for its applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which constitutes a motivation to investigate ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype results in a great improvement in training time at the expense of increased memory requirements.
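A minimal sketch of the kind of language-independent setup described: NER treated as per-token multi-class classification with a linear SVM over hashed surface features, with no part-of-speech or noun-phrase tagging. The toy sentence, labels, context window and feature choices are illustrative assumptions, not the paper's biomedical corpus or feature set.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

# Hedged sketch: token-level NER as multi-class classification with a linear
# SVM over hashed character n-grams of a small context window. The training
# data below is a toy example, not the biomedical corpus used in the paper.
sentences = [("p53", "PROTEIN"), ("binds", "O"), ("to", "O"),
             ("MDM2", "PROTEIN"), ("in", "O"), ("HeLa", "CELL"),
             ("cells", "O"), (".", "O")]
tokens = [t for t, _ in sentences]
labels = [l for _, l in sentences]

def context(i, window=1):
    # Concatenate the token with its neighbours; no POS or NP tagging needed.
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    return " ".join(tokens[lo:hi])

X = HashingVectorizer(analyzer="char_wb", ngram_range=(2, 4),
                      n_features=2 ** 18).transform(
    [context(i) for i in range(len(tokens))])

clf = OneVsRestClassifier(LinearSVC()).fit(X, labels)
print(clf.predict(X))
```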

Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.

242 PSO Based Weight Selection and Fixed Structure Robust Loop Shaping Control for Pneumatic Servo System with 2DOF Controller

Authors: Randeep Kaur, Jyoti Ohri

Abstract:

This paper proposes a new technique to design a fixed-structure robust loop shaping controller for a pneumatic servo system. A new method based on a particle swarm optimization (PSO) algorithm for tuning the weighting function parameters to design an H∞ controller is presented. The PSO algorithm is used to minimize the infinity norm of the transfer function of the nominal closed-loop system to obtain the optimal parameters of the weighting functions. The optimal stability margin is used as an objective in the PSO for selecting the optimal weighting parameters; it is shown that the proposed method can simplify the design procedure of H∞ control to obtain an optimal robust controller for the pneumatic servo system. In addition, the order of the proposed controller is much lower than that of the conventional robust loop shaping controller, making it easy to implement in practice. A two-degree-of-freedom (2DOF) control design procedure is also proposed to improve tracking performance in the face of noise and disturbance. Results of simulations demonstrate the advantages of the proposed controller in terms of simple structure and robustness against plant perturbations and disturbances.
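The hedged sketch below shows the bare PSO loop of the kind described, minimising a stand-in objective in place of the closed-loop infinity norm; the swarm parameters, the bounds and closed_loop_cost() are placeholders, since the actual plant model and weighting-function structure are specific to the paper.

```python
import numpy as np

# Hedged sketch: a bare particle swarm optimisation loop of the kind used to
# tune weighting-function parameters. closed_loop_cost() is a placeholder for
# the quantity minimised in the paper (e.g. the closed-loop infinity norm or
# inverse stability margin); here it is just a toy function.
rng = np.random.default_rng(1)

def closed_loop_cost(w):
    return np.sum((w - np.array([2.0, 0.5, 10.0])) ** 2)  # toy stand-in

n_particles, dim, iters = 20, 3, 100
w_inertia, c1, c2 = 0.7, 1.5, 1.5
lo, hi = 0.01, 20.0                                 # parameter bounds (assumed)

pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_cost = np.array([closed_loop_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([closed_loop_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)]

print("best weighting parameters found:", gbest)
```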

Keywords: Robust control, Pneumatic Servosystem, PSO, H∞ control, 2DOF.

241 Turbulent Mixing and its Effects on Thermal Fatigue in Nuclear Reactors

Authors: E. C. Eggertson, R. Kapulla, J. Fokken, H. M. Prasser

Abstract:

The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in piping systems in nuclear reactors. In certain periodic contraction cycles these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A Wire Mesh Sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed based on traditional methods chosen for the purposes of atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue and based on mixedness is devised. This definition shows that the thermal fatigue risk assessed using simple mixing layer growth can be misleading, and why an approach that separates the effects of large scale (turbulent) and small scale (molecular) mixing is necessary.

Keywords: Concentration measurements, Mixedness, Stably stratified turbulent isokinetic mixing layer, Wire mesh sensor.

240 Numerical Simulations on Feasibility of Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

Authors: Taiki Baba, Tomoaki Hashimoto

Abstract:

The random dither quantization method enables us to achieve much better performance than the simple uniform quantization method in the design of quantized control systems. Motivated by this fact, the stochastic model predictive control method, in which a performance index is minimized subject to probabilistic constraints imposed on the state variables of systems, has been proposed for linear feedback control systems with random dither quantization. In other words, a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time control systems with random dither quantization has already been established. To the best of our knowledge, however, the feasibility of this kind of optimal control problem has not yet been studied. Our objective in this paper is to investigate the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization. To this end, we provide the results of numerical simulations that verify the feasibility of such problems.
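To make the contrast concrete, the hedged sketch below compares a plain uniform quantizer with a non-subtractive random dither quantizer on a slowly varying signal; the step size and the input signal are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Hedged sketch: uniform quantization vs. non-subtractive random dither
# quantization. With dither drawn uniformly over one quantization step, the
# conditional mean of the quantization error is zero regardless of the input,
# which is what makes a probabilistic treatment of the quantizer tractable.
rng = np.random.default_rng(0)
delta = 0.5                                  # quantization step (assumed)
t = np.linspace(0, 1, 1000)
x = 0.3 * np.sin(2 * np.pi * t)              # toy input signal

q_uniform = delta * np.round(x / delta)
dither = rng.uniform(-delta / 2, delta / 2, size=x.shape)
q_dither = delta * np.round((x + dither) / delta)

print("mean error, uniform :", np.mean(q_uniform - x))
print("mean error, dithered:", np.mean(q_dither - x))
```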

Keywords: Model predictive control, stochastic systems, probabilistic constraints, random dither quantization.

239 Method for Tuning Level Control Loops Based on Internal Model Control and Closed Loop Step Test Data

Authors: Arnaud Nougues

Abstract:

This paper describes a two-stage methodology derived from IMC (Internal Model Control) for tuning a PID (Proportional-Integral-Derivative) controller for levels or other integrating processes in an industrial environment. The focus is on ease of use and implementation speed, which are critical for an industrial application. Tuning can be done with minimum effort and without the need for time-consuming open-loop step tests on the plant. The first stage of the method applies to levels only: the vessel residence time is calculated from equipment dimensions and used to derive a set of preliminary PI (Proportional-Integral) settings with IMC. The second stage, re-tuning in closed loop, applies to levels as well as other integrating processes: a tuning correction mechanism has been developed based on a series of closed-loop simulations with model errors. The tuning correction is done from a simple closed-loop step test and the application of a generic correlation between the observed overshoot and the integral time correction. A spin-off of the method is that an estimate of the vessel residence time (levels) or open-loop process gain (other integrating processes) is obtained from the closed-loop data.
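The hedged sketch below illustrates the first-stage idea with one common IMC rule for an integrating (level) process; the vessel dimensions, the choice of closed-loop time constant and the exact tuning relations are assumptions for illustration and may differ from the correlations developed in the paper.

```python
# Hedged sketch of stage 1: preliminary PI settings for a level loop from the
# vessel residence time, using a common IMC rule for an integrating process
# (Kc = 2 / (Kp * lam), Ti = 2 * lam). The numbers and the exact rule are
# illustrative assumptions, not necessarily the correlations of the paper.

volume_m3 = 12.0          # vessel volume between 0 and 100 % level (assumed)
max_flow_m3_min = 4.0     # maximum throughput (assumed)

tau_res = volume_m3 / max_flow_m3_min          # residence time [min]
Kp = 1.0 / tau_res        # integrating gain [fraction of span / min] at full
                          # flow imbalance, with level and valve on a 0..1 span

lam = tau_res             # closed-loop time constant, a typical starting choice
Kc = 2.0 / (Kp * lam)     # proportional gain
Ti = 2.0 * lam            # integral time [min]

print(f"residence time = {tau_res:.1f} min, Kc = {Kc:.2f}, Ti = {Ti:.1f} min")
```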

Keywords: closed-loop model identification, IMC-PID tuning method, integrating process control, on-line PID tuning adaptation

238 Supply Chain Resilience Triangle: The Study and Development of a Framework

Authors: M. Bevilacqua, F. E. Ciarapica, G. Marcucci

Abstract:

Supply Chain Resilience has been broadly studied during the last decade, with research focusing on many aspects of Supply Chain performance. Consequently, different definitions of Supply Chain Resilience have been developed by the research community, drawing inspiration also from other fields of study such as ecology, sociology, psychology and economics. As a result, the definitions developed so far in the extant literature are very heterogeneous, and many authors have pointed out a lack of consensus in this field of analysis. The aim of this research is to find common points between these definitions through the development of a framework of study: the Resilience Triangle. The Resilience Triangle is a tool developed in the field of civil engineering with the objective of modeling the loss of resilience of a given structure during and after the occurrence of a disruption such as an earthquake. The Resilience Triangle is a simple yet powerful tool: in our opinion, it can summarize all the features that authors have captured in Supply Chain Resilience definitions over the years. This research intends to recapitulate within this framework all these heterogeneities in Supply Chain Resilience research. After collecting the Supply Chain Resilience definitions present in the extant literature, the methodology provides a taxonomy step for collecting and analyzing all the data gathered. The next step provides a comparison of the data obtained with the plotting of a disruption profile, in order to contextualize the Resilience Triangle in the Supply Chain context. The tool and the results developed in this research will lay the foundation for future Supply Chain Resilience modeling and measurement work.
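For reference, the civil-engineering Resilience Triangle is commonly quantified as the functionality lost over the disruption-to-recovery window; a hedged statement of that measure, in the form popularised by Bruneau et al., is:

```latex
% Resilience loss R: the area of the "triangle" between full functionality
% (100 %) and the actual functionality curve Q(t), from the disruption at t_0
% to full recovery at t_1 (form popularised by Bruneau et al.).
R \;=\; \int_{t_0}^{t_1} \bigl[\, 100 - Q(t) \,\bigr] \, dt
```

Here Q(t) is the percentage functionality of the system and t_0, t_1 bound the disruption-to-recovery interval; the smaller the area, the more resilient the system.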

Keywords: Supply chain resilience, resilience definition, supply chain resilience triangle.

237 Practical Design Procedures of 3D Reinforced Concrete Shear Wall-Frame Structure Based on Structural Optimization Method

Authors: H. Nikzad, S. Yoshitomi

Abstract:

This study investigates and develops a structural optimization method and examines the effect of size constraints on the practical design of a reinforced concrete (RC) building structure with shear walls. The cross-sections of beams and columns, and the thickness of the shear wall, are considered as design variables. The objective function to be minimized is the total cost of the structure, using a simple and efficient automated structural optimization methodology implemented on the MATLAB platform. With modification of the mathematical formulations, the result is compared with the optimal solution without size constraints. The most suitable combination of section sizes is selected for the final design application based on linear static analysis. The findings of this study show that defining a higher value for the upper bound of the sectional sizes significantly affects the optimal solution, and that defining size constraints plays a vital role in finding a global and practical solution during the optimization procedure. The results confirm the ability and efficiency of the proposed method in finding optimal solutions for 3D RC shear wall-frame structures.

Keywords: Structural optimization, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures.

236 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck in this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detection of intersections in satellite images is evaluated.

Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.

235 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on the patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term. The graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, compared to the standard method.
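A minimal sketch of the boundary-term change described: the weight between two neighbouring pixels is computed from the squared difference of the patches around them rather than from the two pixel values alone. The patch radius, the Gaussian form of the weight and sigma are illustrative assumptions, not the paper's exact metric.

```python
import numpy as np

# Hedged sketch: patch-based boundary weight for graph cuts. Instead of
# comparing two pixel intensities, compare the patches centred on the two
# neighbouring pixels. Patch radius and sigma are illustrative assumptions.
def patch(img, y, x, r=2):
    y0, y1 = max(0, y - r), min(img.shape[0], y + r + 1)
    x0, x1 = max(0, x - r), min(img.shape[1], x + r + 1)
    return img[y0:y1, x0:x1].astype(float)

def boundary_weight(img, p, q, r=2, sigma=10.0):
    Pp, Pq = patch(img, *p, r), patch(img, *q, r)
    h, w = min(Pp.shape[0], Pq.shape[0]), min(Pp.shape[1], Pq.shape[1])
    d2 = np.mean((Pp[:h, :w] - Pq[:h, :w]) ** 2)   # patch dissimilarity
    return np.exp(-d2 / (2.0 * sigma ** 2))        # high weight = similar patches

rng = np.random.default_rng(0)
ct_slice = rng.integers(0, 255, (64, 64))          # stand-in for a CT slice
print(boundary_weight(ct_slice, (30, 30), (30, 31)))
```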

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

234 Numerical Optimization within Vector of Parameters Estimation in Volatility Models

Authors: J. Arneric, A. Rozga

Abstract:

In this paper, the usefulness of a quasi-Newton iteration procedure for parameter estimation of the conditional variance equation within the BHHH algorithm is presented. An analytical solution to the maximization of the likelihood function using first and second derivatives is too complex when the variance is time-varying. The advantage of the BHHH algorithm in comparison with other optimization algorithms is that it requires no third derivatives and has assured convergence. To simplify the optimization procedure, the BHHH algorithm uses an approximation of the matrix of second derivatives according to the information identity. However, parameter estimation in an a/symmetric GARCH(1,1) model assuming a normal distribution of returns is not that simple, i.e., it is difficult to solve analytically. The maximum of the likelihood function can be found by an iteration procedure until no further increase can be found. Because the solutions of the numerical optimization are very sensitive to the initial values, GARCH(1,1) model starting parameters are defined. The number of iterations can be reduced using starting values close to the global maximum. The optimization procedure is illustrated in the framework of modeling volatility on a daily basis for the most liquid stocks on the Croatian capital market: Podravka stocks (food industry), Petrokemija stocks (fertilizer industry) and Ericsson Nikola Tesla stocks (information and communications industry).
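The hedged sketch below writes out the normal log-likelihood of a GARCH(1,1) model and maximises it numerically; it uses a generic quasi-Newton routine (L-BFGS-B) as a stand-in for the BHHH iteration discussed in the paper, and the returns are simulated rather than the Croatian stock data.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: numerical maximum likelihood for a GARCH(1,1) model with
# normal returns. A generic quasi-Newton method (L-BFGS-B) stands in for the
# BHHH iteration; the data are simulated, not the Croatian stocks of the paper.
rng = np.random.default_rng(0)

def simulate_garch(omega, alpha, beta, n=2000):
    eps, sigma2 = np.zeros(n), np.zeros(n)
    sigma2[0] = omega / (1 - alpha - beta)
    eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
    for t in range(1, n):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
        eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return eps

def neg_loglik(params, r):
    omega, alpha, beta = params
    sigma2 = np.empty_like(r)
    sigma2[0] = np.var(r)                      # starting value for the recursion
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + r ** 2 / sigma2)

returns = simulate_garch(0.05, 0.08, 0.90)
start = np.array([0.1, 0.05, 0.80])            # starting values near the optimum
res = minimize(neg_loglik, start, args=(returns,), method="L-BFGS-B",
               bounds=[(1e-6, None), (1e-6, 0.999), (1e-6, 0.999)])
print("omega, alpha, beta =", res.x)
```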

Keywords: Heteroscedasticity, Log-likelihood Maximization, Quasi-Newton iteration procedure, Volatility.

233 Motion Analysis for Duplicate Frame Removal in Wireless Capsule Endoscope Video

Authors: Min Kook Choi, Hyun Gyu Lee, Ryan You, Byeong-Seok Shin, Sang-Chul Lee

Abstract:

Wireless Capsule Endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years thanks to its noninvasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is that it requires a large amount of effort for clinicians to inspect the huge amount of data collected during an examination, i.e., over 55,000 frames per video. In this paper, we propose a method to compute meaningful motion changes of the WCE by analyzing the obtained video frames based on regional optical flow estimations. The computed motion vectors are used to remove duplicate video frames caused by the WCE's imaging nature, such as repetitive forward-backward motions from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts when using WCEs, and our experimental results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
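A hedged sketch of the regional-flow idea using OpenCV's dense Farneback flow: the frame is split into four quadrants, a mean motion vector is computed per quadrant, and a frame is flagged as a near-duplicate when all regional motions are small. The use of Farneback flow, the quadrant split and the threshold are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np
import cv2

# Hedged sketch: mean optical-flow vectors in four local regions (quadrants)
# between consecutive frames; frames whose regional motions are all small are
# flagged as near-duplicates. Farneback flow and the threshold are assumptions.
def regional_motion(prev_gray, curr_gray):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    regions = [flow[:h // 2, :w // 2], flow[:h // 2, w // 2:],
               flow[h // 2:, :w // 2], flow[h // 2:, w // 2:]]
    return [r.reshape(-1, 2).mean(axis=0) for r in regions]   # 4 mean vectors

def is_duplicate(prev_gray, curr_gray, thresh=0.5):
    motions = regional_motion(prev_gray, curr_gray)
    return all(np.linalg.norm(v) < thresh for v in motions)

# Toy usage with synthetic frames (a real WCE video would be read frame by frame).
a = np.random.default_rng(0).integers(0, 255, (240, 240), dtype=np.uint8)
b = np.roll(a, 1, axis=1)                      # slight horizontal shift
print("duplicate?", is_duplicate(a, b))
```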

Keywords: Wireless capsule endoscopy, optical flow, duplicated image, duplicated frame.

232 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within the simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is highly efficient in implementation and highly accurate in computation. The present method is used to analyze the static deflection of beams and of a plate with a hole.
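A minimal sketch of the MLS approximation at the heart of EFG shape functions, in 1D with a linear basis and a Gaussian weight; the nodes, the weight function and its support size are illustrative assumptions, and the full EFG machinery (local weak form, numerical integration, boundary conditions) is not shown.

```python
import numpy as np

# Hedged sketch: 1D Moving Least Squares (MLS) approximation, the building
# block of EFG shape functions. Linear basis p(x) = [1, x], Gaussian weight
# truncated at a support radius dm; nodes and the fitted function are toy choices.
nodes = np.linspace(0.0, 1.0, 11)
u_nodes = np.sin(2 * np.pi * nodes)            # nodal values of a toy function
dm = 0.3                                       # support radius (assumed)

def weight(x, xi):
    r = abs(x - xi) / dm
    return np.exp(-(2.5 * r) ** 2) if r <= 1.0 else 0.0

def mls_value(x):
    p = lambda xi: np.array([1.0, xi])         # linear basis
    A = np.zeros((2, 2)); B = []
    for xi in nodes:
        w = weight(x, xi)
        A += w * np.outer(p(xi), p(xi))        # moment matrix
        B.append(w * p(xi))
    phi = p(x) @ np.linalg.solve(A, np.array(B).T)   # MLS shape functions
    return phi @ u_nodes

print("u_h(0.37) =", mls_value(0.37), " exact:", np.sin(2 * np.pi * 0.37))
```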

Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.

231 High Order Accurate Runge Kutta Nodal Discontinuous Galerkin Method for Numerical Solution of Linear Convection Equation

Authors: Faheem Ahmed, Fareed Ahmed, Yongheng Guo, Yong Yang

Abstract:

This paper deals with a high-order accurate Runge Kutta Discontinuous Galerkin (RKDG) method for the numerical solution of the wave equation, which is one of the simplest cases of a linear hyperbolic partial differential equation. The nodal DG method is used for the finite element space discretization in x by means of discontinuous approximations. This method combines two key ideas, based on the finite volume and finite element methods. The physics of wave propagation is accounted for by means of Riemann problems, and accuracy is obtained by means of high-order polynomial approximations within the elements. A high-order accurate Low Storage Explicit Runge Kutta (LSERK) method is used for the temporal discretization in t, which allows the method to be nonlinearly stable regardless of its accuracy. The resulting RKDG methods are stable and high-order accurate. The L1, L2 and L∞ error norm analysis shows that the scheme is highly accurate and effective. Hence, the method is well suited to achieve high-order accurate solutions for the scalar wave equation and other hyperbolic equations.

Keywords: Nodal Discontinuous Galerkin Method, RKDG, Scalar Wave Equation, LSERK

230 Forms of Social Quality Mobilization in Suburban Communities of a Changing World

Authors: Supannee Chaiumporn

Abstract:

This article introduces the meaning and form of the social quality moving process as indicated by members of two suburban communities with different social and cultural contexts. The form of the social quality moving process is very significant for community and social development, because it enables people to live together with sustainable happiness. This is a qualitative study involving 30 key informants from two suburban communities. Data were collected through key-informant interviews, and analyzed using logical content description and descriptive statistics. This research found that, on the social quality component, the people in both communities stressed the procedure for social quality-making. This includes generosity, sharing and assisting among people in the communities. These practices helped people live together with sustainable happiness. Living as a family, or appearing to be a family, is the major social characteristic of these two communities. This research also found that the form of the social quality moving process in both communities stresses the relation between humans and nature (a "nature overpowers humans" paradigm) and the influence of religious doctrine that emphasizes relations among humans. Both make the form of the moving process simple, adaptive to nature, and attentive to opinion sharing and mutual understanding before action. This form of the social quality moving process is composed of four steps: (1) awareness building, (2) motivation to change, (3) participation from every party concerned, and (4) self-reliance.

Keywords: Social quality, form of social quality moving process, happiness, different social and cultural context.

229 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses

Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau

Abstract:

Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hrs. Do exams really need to be so long? Can we justifiably reduce the number of questions on them? Surprisingly few have researched these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: Use of an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening of final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
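For readers unfamiliar with equivalence hypothesis testing, the hedged sketch below runs a basic two one-sided tests (TOST) procedure on two sets of exam scores; the scores, the equivalence margin and the pooled-variance t-test are illustrative assumptions, not the authors' exact analysis or data.

```python
import numpy as np
from scipy import stats

# Hedged sketch: two one-sided tests (TOST) for equivalence of mean scores on
# a full-length and a shortened exam. Data, margin and the pooled-variance
# t-test are illustrative assumptions, not the paper's exact procedure.
rng = np.random.default_rng(0)
full_exam = rng.normal(68, 10, 60)      # hypothetical % scores, 3.0-hr exam
short_exam = rng.normal(67, 10, 60)     # hypothetical % scores, 2.0-hr exam
margin = 5.0                            # equivalence margin in percentage points

n1, n2 = len(full_exam), len(short_exam)
diff = short_exam.mean() - full_exam.mean()
sp2 = ((n1 - 1) * full_exam.var(ddof=1) + (n2 - 1) * short_exam.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

t_lower = (diff + margin) / se          # H0: diff <= -margin
t_upper = (diff - margin) / se          # H0: diff >= +margin
p_lower = 1 - stats.t.cdf(t_lower, df)
p_upper = stats.t.cdf(t_upper, df)
p_tost = max(p_lower, p_upper)
print(f"diff = {diff:.2f}, TOST p-value = {p_tost:.4f} "
      f"({'equivalent' if p_tost < 0.05 else 'not shown equivalent'} at ±{margin})")
```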

Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.

228 Low Value Capacitance Measurement System with Adjustable Lead Capacitance Compensation

Authors: Gautam Sarkar, Anjan Rakshit, Amitava Chatterjee, Kesab Bhattacharya

Abstract:

The present paper describes the development of a low-cost, highly accurate low-capacitance measurement system that can be used over a range of 0–400 pF with a resolution of 1 pF. The range of capacitance may be easily altered by a simple resistance or capacitance variation of the measurement circuit. This capacitance measurement system uses the quad two-input NAND Schmitt trigger IC CD4093B, with hysteresis, for the measurement, and is integrated with a PIC 18F2550 microcontroller for data acquisition. The microcontroller interacts with software on the PC through USB, and an attractive graphical user interface (GUI) based application developed on the PC provides the user with a real-time, online display of the capacitance under measurement. The system uses a differential mode of capacitance measurement, with reference to a trimmer capacitance, that effectively compensates for lead capacitances, a notorious source of error in usual low-capacitance measurements. The hysteresis provided in the Schmitt-trigger circuits enables reliable operation of the system by greatly minimizing the possibility of false triggering caused by stray interference, usually regarded as another source of significant error. Real-life testing showed that the proposed system produces highly accurate capacitance readings when compared to cutting-edge, high-end digital capacitance meters.

Keywords: Capacitance measurement, NAND Schmitt trigger, microcontroller, GUI, lead compensation, hysteresis.

227 LOD Exploitation and Fast Silhouette Detection for Shadow Volumes

Authors: Mustafa S. Fawad, Wang Wencheng, Wu Enhua

Abstract:

Shadows add a great amount of realism to a scene, and many algorithms exist to generate them. Recently, shadow volumes (SVs) have earned a valuable position in the gaming industry. In this light, we concentrate on simple but valuable initial steps for further optimization in SV generation, i.e., model simplification and silhouette edge detection and tracking. SV generation usually takes time in computing the boundary silhouettes of the object, and if the object is complex then the generation of edges becomes much harder and slower. The challenge gets stiffer when real-time shadow generation and rendering are demanded. We investigated a way to use a real-time silhouette edge detection method, which takes advantage of spatial and temporal coherence, and to exploit the level-of-detail (LOD) technique for reducing the silhouette edges of the model, using the simplified version of the model for shadow generation and speeding up the running time. These steps greatly reduce the execution time of shadow volume generation in real time and are easily adaptable to any of the recently proposed SV techniques. Our main focus is to exploit the LOD and silhouette edge detection techniques, adapting them to further enhance shadow volume generation for real-time rendering.
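A minimal sketch of the silhouette test such methods start from: an edge shared by two triangles is a silhouette edge when one face points towards the light and the other away. The tiny tetrahedron and point light below are toy inputs, and the LOD simplification and coherence tracking that are the paper's focus are not shown.

```python
import numpy as np

# Hedged sketch: basic silhouette edge detection for a triangle mesh. An edge
# shared by two faces is a silhouette edge w.r.t. a point light if one face is
# front-facing and the other back-facing (sign change of the face-light dot
# product). Mesh and light are toy inputs; LOD and coherence are not shown.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])   # tetrahedron
light = np.array([2.0, 2.0, 2.0])

def face_normal(f):
    a, b, c = vertices[f]
    return np.cross(b - a, c - a)

def faces_towards_light(f):
    centroid = vertices[f].mean(axis=0)
    return np.dot(face_normal(f), light - centroid) > 0.0

# Build edge -> adjacent face list, then test the sign change across each edge.
edges = {}
for fi, f in enumerate(faces):
    for e in [(f[0], f[1]), (f[1], f[2]), (f[2], f[0])]:
        edges.setdefault(tuple(sorted(e)), []).append(fi)

silhouette = [e for e, adj in edges.items()
              if len(adj) == 2 and faces_towards_light(faces[adj[0]])
              != faces_towards_light(faces[adj[1]])]
print("silhouette edges:", silhouette)
```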

Keywords: LOD, perception, Shadow Volumes, Silhouette Edge, Spatial and Temporal coherence.

226 Alcohols as a Phase Change Material with Excellent Thermal Storage Properties in Buildings

Authors: Dehong Li, Yuchen Chen, Alireza Kaboorani, Denis Rodrigue, Xiaodong (Alice) Wang

Abstract:

Utilizing solar energy for thermal energy storage has emerged as an appealing option for lowering the amount of energy consumed by buildings. Due to their high heat storage density and non-corrosive, non-polluting properties, alcohols can be a good alternative to petroleum-derived paraffin phase change materials (PCMs). In this paper, ternary eutectic PCMs with suitable phase change temperatures were designed and prepared using lauryl alcohol (LA), cetyl alcohol (CA), stearyl alcohol (SA) and xylitol (X). The Differential Scanning Calorimetry (DSC) results revealed that the phase change temperatures of LA-CA-SA, LA-CA-X, and LA-SA-X were 20.52 °C, 20.37 °C, and 22.18 °C, respectively. The latent heats of phase change of the ternary eutectic PCMs were all higher than those of paraffinic PCMs at roughly the same temperature. The highest latent heat was 195 J/g, indicating good thermal energy storage capacity. The preparation mechanism was investigated using Fourier-transform Infrared Spectroscopy (FTIR), and it was found that the components of the ternary eutectic PCMs were only physically mixed. The ternary eutectic PCMs have a simple preparation process, suitable phase change temperatures, and high energy storage density. They are suitable for low-temperature architectural packaging applications.

Keywords: Thermal energy storage, buildings, phase change materials, alcohols.

225 Blind Source Separation for Convoluted Signals Based on Properties of Acoustic Transfer Function in Real Environments

Authors: Takaaki Ishibashi

Abstract:

Frequency domain independent component analysis has a scaling indeterminacy and a permutation problem. The scaling indeterminacy can be solved by use of a decomposed spectrum. For the permutation problem, we have proposed rules in terms of the gain ratio and the phase difference derived from the decomposed spectra and the sources' coarse directions. The present paper experimentally clarifies that the gain ratio and the phase difference work effectively in a real environment, but that their performance depends on the frequency bands, the microphone spacing and the source-microphone distance. From these facts it is seen that it is difficult to attain a perfect solution for the permutation problem in a real environment by either the gain ratio or the phase difference alone. This paper therefore gives a solution to these problems in a real environment. The proposed method is simple and the amount of calculation is small. The method has high correction performance without depending on the frequency bands and the distances from the source signals to the microphones, and it can be applied in a real environment. Several experiments in a real room verify the proposed method.

Keywords: Blind source separation, frequency domain independent component analysis, permutation correction, scale adjustment, target extraction.

224 Robust Iterative PID Controller Based on Linear Matrix Inequality for a Sample Power System

Authors: Ahmed Bensenouci

Abstract:

This paper provides the design steps of a robust Linear Matrix Inequality (LMI) based iterative multivariable PID controller whose duty is to drive a sample power system that comprises a synchronous generator connected to a large network via a step-up transformer and a transmission line. The generator is equipped with two control loops, namely, the speed/power (governor) loop and the voltage (exciter) loop. Both loops are lumped into one, where the errors in the terminal voltage and output active power represent the controller inputs, and the generator-exciter voltage and governor-valve position represent its outputs. Multivariable PID is considered here because of its wide use in industry, simple structure and easy implementation. It is also preferred in plants of higher order that cannot be reduced to lower ones. To improve its robustness to variation in the controlled variables, the H∞ norm of the system transfer function is used. To show the effectiveness of the controller, diverse tests, namely, step/tracking in the controlled variables and variation in plant parameters, are applied. A comparative study between the proposed controller and a robust H∞ LMI-based output feedback controller is given in terms of robustness to disturbance rejection. From the simulation results, the iterative multivariable PID shows superiority.
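As background on the LMI machinery that such designs build on, the hedged sketch below exhibits a feasible point of the most basic LMI of this kind, the Lyapunov inequality, by solving a Lyapunov equation with SciPy rather than calling a general SDP solver. The 2×2 system matrix is a toy example, not the generator/exciter model of the paper, and the paper's iterative PID synthesis involves larger, structured LMIs.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hedged sketch: the most basic LMI behind robust LMI-based design is the
# Lyapunov inequality  A'P + PA < 0,  P = P' > 0.  Here a feasible P is
# exhibited by solving A'P + PA = -Q for a chosen Q > 0 (rather than calling a
# general SDP solver). The 2x2 matrix A is a toy example, not the power-system
# model of the paper.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # a stable toy system matrix
Q = np.eye(2)                     # any symmetric positive definite choice

# scipy solves  a @ X + X @ a.T = q ; with a = A.T this gives  A'P + PA = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

print("P eigenvalues       :", np.linalg.eigvalsh(P))            # all > 0
print("A'P + PA eigenvalues:", np.linalg.eigvalsh(A.T @ P + P @ A))  # all < 0
```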

Keywords: Linear matrix inequality, power system, robust iterative PID, robust output feedback control

223 Numerical Study on CO2 Pollution in an Ignition Chamber by Oxygen Enrichment

Authors: Zohreh Orshesh

Abstract:

In this study, a 3D combustion chamber was simulated using FLUENT 6.32. The aims were to obtain accurate information about the combustion profile in the furnace and to check the effect of oxygen enrichment on the combustion process. Oxygen enrichment is an effective way to reduce combustion pollutants. The air-to-fuel flow rate ratio is varied as 1.3, 3.2 and 5.1, and the oxygen-enriched flow rates are 28, 54 and 68 lit/min. Combustion simulations typically involve the solution of turbulent flows with heat transfer, species transport and chemical reactions. It is common to use the Reynolds-averaged form of the governing equations in conjunction with a suitable turbulence model. The 3D Reynolds-Averaged Navier-Stokes (RANS) equations with the standard k-ε turbulence model are solved together by the Fluent 6.3 software. A first-order upwind scheme is used to discretize the governing equations, and the SIMPLE algorithm is used for pressure-velocity coupling. Species mass fractions at the wall are assumed to have zero normal gradients. Results show that the minimum mole fraction of CO2 occurs when the air-to-fuel flow rate ratio is 5.1. Additionally, in a fixed oxygen enrichment condition, increasing the air-to-fuel ratio will increase the temperature peak. As a result, oxygen enrichment can reduce the CO2 emission of this kind of furnace at high air-to-fuel ratios.

Keywords: Combustion chamber, Oxygen enrichment, Reynolds-Averaged Navier-Stokes, CO2 emission.

222 Markov Game Controller Design Algorithms

Authors: Rajneesh Sharma, M. Gopal

Abstract:

Markov games are a generalization of the Markov decision process to a multi-agent setting. The two-player zero-sum Markov game framework offers an effective platform for designing robust controllers. This paper presents two novel controller design algorithms that use ideas from the game-theory literature to produce reliable controllers that are able to maintain performance in the presence of noise and parameter variations. A more widely used approach for controller design is H∞ optimal control, which suffers from high computational demand and, at times, may be infeasible. Our approach generates an optimal control policy for the agent (controller) via a simple linear program, enabling the controller to learn about the unknown environment. The controller faces an unknown environment, and in our formulation this environment corresponds to the behavior rules of the noise modeled as the opponent. The proposed controller architectures attempt to improve controller reliability by a gradual mixing of algorithmic approaches drawn from the game theory literature and the Minimax-Q Markov game solution approach, in a reinforcement-learning framework. We test the proposed algorithms on a simulated inverted pendulum swing-up task and compare their performance against standard Q-learning.
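To illustrate the "simple linear program" from which such a policy is obtained, the hedged sketch below computes the maximin mixed strategy of a small zero-sum matrix game with SciPy's linprog; the payoff matrix is a toy example, not a stage game from the inverted-pendulum task.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch: optimal mixed strategy of a two-player zero-sum matrix game
# via linear programming (the building block of Minimax-Q style controllers).
# Row player maximises the worst-case expected payoff; the matrix is a toy.
A = np.array([[ 1.0, -1.0,  0.5],
              [-0.5,  1.0, -1.0]])          # payoff to the row player
m, n = A.shape

# Variables: x (row strategy, length m) and v (game value).
# maximise v  s.t.  A^T x >= v*1,  sum(x) = 1,  x >= 0.
c = np.concatenate([np.zeros(m), [-1.0]])             # minimise -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])             # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
print("row strategy:", np.round(x, 3), " game value:", round(v, 3))
```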

Keywords: Reinforcement learning, Markov Decision Process, Matrix Games, Markov Games, Smooth Fictitious play, Controller, Inverted Pendulum.

221 Geometrically Non-Linear Axisymmetric Free Vibrations of Thin Isotropic Annular Plates

Authors: Boutahar Lhoucine, El Bikri Khalid, Benamar Rhali

Abstract:

The effects of large vibration amplitudes on the first axisymmetric mode shape of thin isotropic annular plates having both edges clamped are examined in this paper. The theoretical model, based on Hamilton’s principle and spectral analysis using a basis of Bessel functions, is adapted here to the case of annular plates. The model effectively reduces the large amplitude free vibration problem to the solution of a set of non-linear algebraic equations.

The governing non-linear eigenvalue problem has been linearised in the neighborhood of each resonance and a new one-step iterative technique has been proposed as a simple alternative method of solution to determine the basic function contributions to the non-linear mode shape considered.

Numerical results are given for the first non-linear mode shape for a wide range of vibration amplitudes. For each value of the vibration amplitude considered, the corresponding contributions of the basic functions defining the non-linear transverse displacement function, the associated non-linear frequency, and the membrane and bending stress distributions are given. By comparison with the iterative method of solution, it was found that the present procedure is efficient for a wide range of vibration amplitudes, up to at least 1.8 times the plate thickness.

Keywords: Non-linear vibrations, Annular plates, Large vibration amplitudes.

220 Research of Strong-Column-Weak-Beam Criteria of Reinforced Concrete Frames Subjected to Biaxial Seismic Excitation

Authors: Chong Zhang, Mu-Xuan Tao

Abstract:

In several earthquakes, numerous reinforced concrete (RC) frames subjected to seismic excitation demonstrated a collapse pattern characterized by column hinges, though designed according to the Strong-Column-Weak-Beam (S-C-W-B) criteria. The effect of biaxial seismic excitation on the disparity between design and actual performance is carefully investigated in this article. First, a modified load contour method is proposed to derive a closed-form equation of biaxial bending moment strength, which is verified by numerical and experimental tests. Afterwards, a group of time history analyses of a simple frame modeled by fiber beam-column elements subjected to biaxial seismic excitation is conducted to verify that the current S-C-W-B criteria are not adequate to prevent the occurrence of column hinges. A biaxial over-strength factor is developed based on the proposed equation, and the reinforcement of columns is appropriately amplified with this factor to prevent the occurrence of column hinges under biaxial excitation, which is proved to be effective by another group of time history analyses.
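As context for the load contour method that the modification starts from, the classical (Bresler-type) biaxial interaction check can be written, in hedged form, as:

```latex
% Classical load-contour (Bresler-type) biaxial bending check: M_x, M_y are the
% applied moments about the two axes, M_{ux}, M_{uy} the corresponding uniaxial
% strengths, and \alpha a contour exponent (commonly taken between 1 and 2).
\left( \frac{M_x}{M_{ux}} \right)^{\alpha}
+ \left( \frac{M_y}{M_{uy}} \right)^{\alpha} \;\le\; 1
```

The paper's modified method replaces this generic contour with a closed-form biaxial strength equation verified numerically and experimentally; the inequality above is only the classical starting point.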

Keywords: Biaxial bending moment strength, biaxial seismic excitation, fiber beam-column model, load contour method, strong-column-weak-beam.

219 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground

Authors: Bhim Kumar Dahal

Abstract:

Transportation network development in developing countries is proceeding at a rapid pace. The majority of the networks are railways and expressways, which pass through diverse topography, landforms and geological conditions despite the avoidance principle applied during route selection. Construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper is mainly focused on the various advanced ground improvement techniques used to improve soft soil, on the modelling approaches, and on their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e., the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in modelling the embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and sometimes the predictions deviate by more than 60% from the monitored values despite a similar level of sophistication. This deviation is found to be mainly due to the selection of the constitutive model, the assumptions made during different stages, deviations in the selection of model parameters, and simplifications during physical modelling of the ground condition. This deviation can be reduced by using an optimization process, optimization tools and sensitivity analysis of the model parameters, which guide the selection of appropriate model parameters.

Keywords: Embankment, ground improvement, modelling, model prediction.

218 Studying the Environmental Effects of using Biogas Energy in Iran

Authors: Kambiz Tahvildari, Shakila ila Motamedi

Abstract:

Presently, and in line with the United Nations (EPA), human thinking has shifted towards clean fuels so as to maintain a cleaner environment and to save our planet Earth. One of the most successful lines of research aimed at new energy sources involves the use of animal wastes and other organic residues, and the result of this research has been a set of very simple and cheap methods called biogas technology. Biogas technology has developed considerably in recent decades; the reasons are the high cost of fossil fuels and the greater attention countries pay to the environmental pollution caused by the consumption of such fuels. Iran is ready for the optimized application of renewable energies, having rich resources of this kind of energy, so a special place should be considered for them when making plans. The purpose of biogas technology is the recovery of energy and, finally, the protection of the environment, which is very appropriate for third-world farmers with respect to their technical abilities and economic potential. Studies show that the production and consumption of biogas are appropriate and economical in Iran because of the large amount of waste in the agriculture sector, the significant production of animal and human excrement, the great volume of garbage produced and, most importantly, the specific social, climatic and agricultural conditions in Iran, all of which favour a move towards reducing the pollution caused by the use of fossil fuels.

Keywords: Agriculture, Biogas, Energy, Environment.
