Search results for: time complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18787

18487 Artificial Steady-State-Based Nonlinear MPC for Wheeled Mobile Robot

Authors: M. H. Korayem, Sh. Ameri, N. Yousefi Lademakhi

Abstract:

To ensure the closed-loop stability of nonlinear model predictive control (NMPC) over a finite horizon, appropriately designed terminal ingredients are required, and designing them can be a time-consuming and challenging effort. The alternative way to ensure the stability of the control system is to consider an infinite prediction horizon, but increasing the prediction horizon increases the computational demand and slows down the implementation of the method. In this study, a new technique is proposed to ensure system stability without terminal ingredients. This technique is employed in the design of the NMPC algorithm, removing the complexity of designing terminal ingredients and reducing the computational burden. The studied system is a wheeled mobile robot (WMR) subject to non-holonomic constraints. Simulations were carried out for two problems: trajectory tracking and adjustment mode.
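
As a rough illustration of the receding-horizon structure discussed above, the following minimal sketch simulates finite-horizon NMPC for a unicycle-type WMR in Python. It assumes numpy/scipy; the horizon, weights and reference are invented, and the paper's stability technique without terminal ingredients is not reproduced.

```python
# Minimal receding-horizon NMPC sketch for a unicycle-type WMR.
# Illustrative only: horizon N, weights and the reference are assumptions,
# and no terminal ingredients (or substitute stability technique) are used.
import numpy as np
from scipy.optimize import minimize

DT, N = 0.1, 10                      # sample time, prediction horizon

def step(state, u):
    """Non-holonomic unicycle kinematics: state = (x, y, theta), u = (v, w)."""
    x, y, th = state
    v, w = u
    return np.array([x + DT * v * np.cos(th),
                     y + DT * v * np.sin(th),
                     th + DT * w])

def cost(u_flat, state, ref):
    """Quadratic tracking cost accumulated over the horizon."""
    u_seq = u_flat.reshape(N, 2)
    J = 0.0
    for k in range(N):
        state = step(state, u_seq[k])
        J += np.sum((state[:2] - ref) ** 2) + 0.01 * np.sum(u_seq[k] ** 2)
    return J

state, ref = np.zeros(3), np.array([1.0, 1.0])   # start at origin, track (1, 1)
u_guess = np.zeros(2 * N)
for t in range(50):                               # closed-loop simulation
    sol = minimize(cost, u_guess, args=(state, ref), method="SLSQP")
    u0 = sol.x[:2]                                # apply first control only
    state = step(state, u0)
    u_guess = np.roll(sol.x, -2)                  # warm start the next solve
print("final state:", state)
```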

Keywords: wheeled mobile robot, nonlinear model predictive control, stability, without terminal ingredients

Procedia PDF Downloads 55
18486 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis was carried out on the historical development of negative emotive intensifiers in the Hungarian language via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, verbs and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), many lexical items can operate as intensifiers, and the group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a special group of intensifiers is particularly interesting: the so-called negative emotive intensifiers, which, on their own and without context, have semantic content that can be associated with negative emotion, but in particular cases may function as intensifiers (e.g. borzasztóan jó 'awfully good', which means 'excellent'). Despite their special semantic features, negative emotive intensifiers are scarcely examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, we exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles, produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the 'magyarlanc' NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research reveal in detail how these words have proceeded through grammaticalization over time, i.e., how they change from lexical elements to grammatical ones and slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. A closer look at the different domains of the analysed corpus also made it clear that during this process the pragmatic role's importance increases: the newer use expresses the speaker's subjective, evaluative opinion at a certain level.
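
The querying and frequency/collocation steps described above might look roughly like the following Python sketch. The corpus file name, the example lexicon entries and the context window are hypothetical placeholders, not the project's actual resources.

```python
# Sketch of the frequency/collocation step, assuming the corpus is plain
# text and the intensifier lexicon is a list of word forms; file name,
# lexicon entries and window size are hypothetical.
import re
from collections import Counter

INTENSIFIERS = ["borzasztóan", "rettenetesen", "szörnyen"]  # example lexicon
WINDOW = 2  # tokens of right context inspected for collocates

def tokenize(text):
    return re.findall(r"\w+", text.lower())

freq = Counter()
collocates = {w: Counter() for w in INTENSIFIERS}
with open("corpus.txt", encoding="utf-8") as fh:   # hypothetical corpus file
    tokens = tokenize(fh.read())
for i, tok in enumerate(tokens):
    if tok in collocates:
        freq[tok] += 1
        collocates[tok].update(tokens[i + 1:i + 1 + WINDOW])

print(freq.most_common())
for w in INTENSIFIERS:
    print(w, collocates[w].most_common(5))
```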

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 204
18485 Analysis of the Omnichannel Delivery Network with Application to Last Mile Delivery

Authors: Colette Malyack, Pius Egbelu

Abstract:

Business-to-Customer (B2C) delivery options have improved to meet increased demand in recent years. The change in end users has forced logistics networks to focus on customer service and sentiment, which would previously have been the priority of the company or organization of origin. This has led to increased pressure on logistics companies to extend traditional B2B networks into a B2C solution while accommodating additional costs, roadblocks, and customer sentiment; the result has been the creation of the omnichannel delivery network, encompassing a number of traditional and modern methods of package delivery. In this paper, the many solutions within the omnichannel delivery network are defined and discussed. This analysis shows that the omnichannel delivery network can be applied to reduce the complexity of package delivery and provide customers with more options. Applied correctly, the result is a reduction in cost to the logistics company over time, even with an initial increase in cost to obtain the technology.

Keywords: network planning, last mile delivery, omnichannel delivery network, omnichannel logistics

Procedia PDF Downloads 124
18484 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms

Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano

Abstract:

In this article, we deal with a variant of the classical course timetabling problem that has a practical application in many areas of education. In particular, we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies; a student might be under-prepared in an entire course or only in part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model, so a general-purpose solver is viable for small instances only, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges within approximately half of the time limit) and in many cases achieves an improvement on the objective function value obtained by the MIP model; this improvement ranges between 18% and 66%.
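
To make the structure of such a selection-plus-timetabling MIP concrete, here is a toy sketch assuming the PuLP package. The sets, costs, budget and prerequisite pairs are invented and do not reproduce the authors' formulation.

```python
# Toy MIP in the spirit of the joint selection-and-timetabling model;
# all data below are invented for illustration.
import pulp

units = ["U1", "U2", "U3"]          # candidate teaching units
slots = ["Mon", "Tue"]              # available time slots
benefit = {"U1": 5, "U2": 3, "U3": 4}
cost = {"U1": 2, "U2": 1, "U3": 2}
budget = 3
prereq = [("U1", "U2")]             # U2 may be activated only if U1 is

prob = pulp.LpProblem("training_offer", pulp.LpMaximize)
act = pulp.LpVariable.dicts("act", units, cat="Binary")            # unit activated?
sched = pulp.LpVariable.dicts("sched", (units, slots), cat="Binary")

prob += pulp.lpSum(benefit[u] * act[u] for u in units)             # teaching quality
prob += pulp.lpSum(cost[u] * act[u] for u in units) <= budget      # limited funds
for u in units:                                 # each activated unit gets exactly one slot
    prob += pulp.lpSum(sched[u][s] for s in slots) == act[u]
for s in slots:                                 # one teacher: at most one unit per slot
    prob += pulp.lpSum(sched[u][s] for u in units) <= 1
for a, b in prereq:                             # prerequisite between teaching units
    prob += act[b] <= act[a]

prob.solve()
print([u for u in units if act[u].value() == 1])
```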

Keywords: heuristic, MIP model, remedial course, school, timetabling

Procedia PDF Downloads 584
18483 Building a Lean Construction Body of Knowledge

Authors: Jyoti Singh, Ahmed Stifi, Sascha Gentes

Abstract:

The construction process involves a high level of risk, complexity and uncertainty, leading to cost and time overruns, customer dissatisfaction, etc. Lean construction is important as it is a comprehensive system of tools and concepts focused on moving closer to customer satisfaction by understanding the process, identifying the waste and eliminating it. The proposed work includes the identification of knowledge areas from the lean perspective and of the lean tools/concepts used in lean construction, and the establishment of a relationship matrix between knowledge areas and lean tools/concepts, thus developing and building up a lean construction body of knowledge (LCBOK), i.e. a guide to lean construction, aiming to provide guidelines to manage individual projects and also to help the construction industry minimise waste and maximise value to the customer. In this study, we identified 8 knowledge areas and 62 lean tools/concepts from the lean perspective, and found that a single tool can help to manage two or more knowledge areas.

Keywords: knowledge areas, lean body matrix, lean construction, lean tools

Procedia PDF Downloads 413
18482 Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study

Authors: Justyna Rybicka, Ashutosh Tiwari, Shane Enticott

Abstract:

In the age of automation and computation aiding manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, the manufacturing capabilities available to organisations are sometimes difficult to utilise fully. Flexible manufacturing systems (FMS) provide a unique capability to manufacturing organisations where there is a need for product range diversification, delivering line efficiency through production flexibility. This is very valuable in trend-driven production set-ups or niche-volume production requirements. Although an FMS provides flexible and efficient facilities, its optimal set-up is key to achieving production performance. As many variables are interlinked due to the flexibility provided by the FMS, analytical calculations are not always sufficient to predict the FMS' performance. Simulation modelling is capable of capturing the complexity and constraints associated with an FMS. This paper demonstrates how discrete event simulation (DES) can address complexity in an FMS to optimise production line performance. A case study of an automotive FMS is presented. The DES model demonstrates different configuration options depending on the prioritised objective: utilisation or throughput. Additionally, this paper provides insight into the impact of system set-up constraints on FMS performance and demonstrates how the optimal production set-up can be explored.
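
A minimal DES sketch of an FMS-like line, assuming the SimPy package, illustrates the kind of model described; the station counts, processing times and job arrivals are invented, not the case study's data.

```python
# Minimal DES sketch of a flexible line: jobs arrive at random, queue for
# each station in turn, and throughput is counted over one shift.
import random
import simpy

random.seed(42)
THROUGHPUT = 0

def job(env, machines):
    global THROUGHPUT
    for station in machines:                     # every job visits each station
        with station.request() as req:
            yield req                            # queue for the machine
            yield env.timeout(random.expovariate(1 / 5.0))  # processing ~ Exp(mean 5)
    THROUGHPUT += 1

def source(env, machines):
    while True:
        yield env.timeout(random.expovariate(1 / 4.0))      # inter-arrival ~ Exp(mean 4)
        env.process(job(env, machines))

env = simpy.Environment()
machines = [simpy.Resource(env, capacity=c) for c in (2, 1, 2)]  # 3 stations
env.process(source(env, machines))
env.run(until=480)                               # one 8-hour shift, in minutes
print("jobs completed:", THROUGHPUT)
```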

Keywords: discrete event simulation, flexible manufacturing system, capacity performance, automotive

Procedia PDF Downloads 308
18481 Comparative Dielectric Properties of 1,2-Dichloroethane with N-Methylformamide and N,N-Dimethylformamide Using Time Domain Reflectometry Technique in Microwave Frequency

Authors: Shagufta Tabassum, V. P. Pawar Jr., G. N. Shinde

Abstract:

The dielectric relaxation properties of polar liquids in a binary mixture have been studied at 10, 15, 20 and 25 ºC for 11 different concentrations using the time domain reflectometry technique. The dielectric properties of a solute-solvent mixture of polar liquids in the frequency range of 10 MHz to 30 GHz give information regarding the formation of monomers and multimers, as well as the interaction between the molecules of the liquid mixture under study. The dielectric parameters have been obtained by the least-squares fit method using the Debye equation, characterized by a single relaxation time without a distribution of relaxation times.
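
For reference, the single-relaxation-time Debye model fitted by least squares takes the standard textbook form (stated here for context, not quoted from the paper):

```latex
% Debye relaxation with a single relaxation time \tau (no distribution):
\varepsilon^{*}(\omega) = \varepsilon_{\infty}
  + \frac{\varepsilon_{0} - \varepsilon_{\infty}}{1 + j\omega\tau},
\qquad
\varepsilon'(\omega) = \varepsilon_{\infty}
  + \frac{\varepsilon_{0} - \varepsilon_{\infty}}{1 + \omega^{2}\tau^{2}},
\qquad
\varepsilon''(\omega) = \frac{(\varepsilon_{0} - \varepsilon_{\infty})\,\omega\tau}{1 + \omega^{2}\tau^{2}}
% \varepsilon_0: static dielectric constant; \varepsilon_\infty: high-frequency limit.
```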

Keywords: excess properties, relaxation time, static dielectric constant, time domain reflectometry technique

Procedia PDF Downloads 129
18480 The Role of Arousal in Time Perception: Implications for Emotional Driving

Authors: Ewa Siedlecka

Abstract:

Emotional stress is an important risk factor in the rate and severity of traffic accidents. Moreover, incorrect time perception is implicated in the increase of traffic violations, such as running red lights, and in collisions. While the role of emotional arousal in perceived time is well established, the role of physiological arousal in time perception remains unexamined. Specific emotions can, however, be associated with distinct physiological responses. In the current research, two studies examined the role of physiological arousal in time perception. In the first experiment, 41 participants engaged in a cold pressor task and had their time perception measured throughout the experiment. In the second study, 138 participants engaged in either isometric or deep breathing exercises, activities designed to stimulate the sympathetic and parasympathetic nervous systems, respectively. Participants completed a bisection task to measure time perception in both studies, and physiological responses were recorded via electrocardiography (ECG). Results showed that activation of the parasympathetic nervous system is associated with greater perceived duration. These findings are discussed with reference to models of time perception, as well as implications for emotional driving and misperceptions of speed. It is important to consider the role of physiology in the misperception of time, as these factors can lead to increases in driving accidents.

Keywords: emotions, nervous system, physiology, time perception

Procedia PDF Downloads 298
18479 The Impact of Major Accounting Events on Managerial Ability and the Accuracy of Environmental Capital Expenditure Projections of the Environmentally Sensitive Industries

Authors: Jason Chen, Jennifer Chen, Shiyu Li

Abstract:

We examine whether managerial ability (MA), the passing of the Sarbanes-Oxley Act in 2002 (SOX), and corporate operational complexity affect the accuracy of environmental capital expenditure projections in the environmentally sensitive industries (ESI). Prior studies found that firms in the ESI manipulated their projected environmental capital expenditures as a tool to achieve corporate legitimation, and suggested that human factors must be examined to determine whether they are among the determinants. We use MA to proxy for the latent human factors and examine whether MA affects the accuracy of financial disclosures in the ESI. To expand Chen and Chen (2020), we further investigate whether (1) SOX and (2) complex operations and financial reporting, in conjunction with MA, affect firms' projection accuracy. We find, overall, that MA is positively correlated with firms' projection accuracy in the annual 10-Ks. Furthermore, the results suggest that SOX has a positive, yet temporary, effect on MA, which leads to better accuracy. Finally, MA enables firms with more complex operations and financial reporting to make fewer projection errors than their less complex counterparts. These results suggest that MA is a determinant of the accuracy of environmental capital expenditure projections for firms in the ESI.

Keywords: managerial ability, environmentally sensitive industries, SOX, corporate operational complexity

Procedia PDF Downloads 117
18478 A Minimum Spanning Tree-Based Method for Initializing the K-Means Clustering Algorithm

Authors: J. Yang, Y. Ma, X. Zhang, S. Li, Y. Zhang

Abstract:

The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the algorithm often converges to local minima because it is sensitive to the initial cluster centers. In this paper, an algorithm for selecting initial cluster centers on the basis of the minimum spanning tree (MST) is presented. Sets of vertices in the MST with the same degree are regarded as a whole and are used to find the skeleton data points. Furthermore, a distance measure between the skeleton data points that takes both degree and Euclidean distance into consideration is presented. Finally, the MST-based initialization method for the k-means algorithm is presented, and its time complexity is analyzed. The presented algorithm is tested on five data sets from the UCI Machine Learning Repository. The experimental results illustrate the effectiveness of the presented algorithm compared to three existing initialization methods.
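
A hedged sketch of MST-guided seeding for k-means, assuming scipy and scikit-learn, is shown below. The paper's degree-aware distance and skeleton-point selection are simplified here to "prefer high-degree MST vertices that are far apart", so this illustrates the idea rather than the authors' exact method.

```python
# MST-guided seeding for k-means (simplified stand-in for the paper's method).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import KMeans

def mst_seeds(X, k):
    D = squareform(pdist(X))                      # pairwise Euclidean distances
    mst = minimum_spanning_tree(D).toarray()
    adj = (mst > 0) | (mst.T > 0)                 # undirected MST adjacency
    degree = adj.sum(axis=1)
    order = np.argsort(-degree)                   # favour high-degree (skeleton) vertices
    seeds = [order[0]]
    for i in order[1:]:                           # greedily keep seeds far apart
        if len(seeds) == k:
            break
        if all(D[i, s] > np.median(D) for s in seeds):
            seeds.append(i)
    for i in order:                               # fall back if too few survived
        if len(seeds) == k:
            break
        if i not in seeds:
            seeds.append(i)
    return X[np.array(seeds)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0, 3, 6)])
km = KMeans(n_clusters=3, init=mst_seeds(X, 3), n_init=1).fit(X)
print(km.inertia_)
```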

Keywords: degree, initial cluster center, k-means, minimum spanning tree

Procedia PDF Downloads 381
18477 Developing Offshore Energy Grids in Norway as Capability Platforms

Authors: Vidar Hepsø

Abstract:

The energy and oil companies on the Norwegian continental shelf come from a situation in which each asset controls and manages its own energy supply (island mode) and are moving towards one in which assets need to collaborate and coordinate energy use with others, sharing the energy that is provided, due to the increased cost and scarcity of electric energy. Currently, several areas are electrified either with an onshore grid cable or receive intermittent energy from offshore wind parks. While the onshore grid in Norway is well regulated, the offshore grid is still in the making, with several oil and gas electrification projects and offshore wind developments just started. The paper describes the shift in mindset that comes with operating this new offshore grid. This transition heralds an increase in collaboration across boundaries and the integration of energy management across companies, businesses, technical disciplines, and engagement with stakeholders in the larger society. The transition is described as a function of the new challenges of an increasingly complex energy mix (wind, oil/gas, hydrogen and others) coupled with increased technical and organizational complexity in energy management. Organizational complexity denotes increasing integration across boundaries, whether these boundaries are companies, vendors, professional disciplines, regulatory regimes/bodies, businesses, or the numerous societal stakeholders. New practices must be developed, legitimated and institutionalized across these boundaries. Only part of this complexity can be mitigated technically, e.g. by the use of batteries, mixed energy systems and simulation/forecasting tools; many challenges must be mitigated by legitimated and institutionalized governance practices on many levels. Offshore electrification supports Norway's 2030 climate targets but is also controversial, since it draws on the larger society's energy resources. This means that new systems and practices must be transparent, not only for the industry and the authorities, but also acceptable and just for the larger society. The paper reports on ongoing work in Norway, based on participant observation and interviews in projects and with people working on offshore grid development. One case presented is the development of an offshore floating wind farm connected to two offshore installations; the second case is an offshore grid development initiative providing six installations with electric energy via an onshore cable. The development of the offshore grid is analyzed using a capability platform framework that describes the technical, competence, work process and governance capabilities under development in Norway. A capability platform is a 'stack' with the following layers: intelligent infrastructure; information and collaboration; knowledge sharing and analytics; and business operations. The need for better collaboration and energy forecasting tools/capabilities in this stack is given special attention in the two use cases presented.

Keywords: capability platform, electrification, carbon footprint, control rooms, energy forecasting, operational model

Procedia PDF Downloads 45
18476 Linear Quadratic Gaussian/Loop Transfer Recover Control Flight Control on a Nonlinear Model

Authors: T. Sanches, K. Bousson

Abstract:

As part of the development of a 4D autopilot system for unmanned aerial vehicles (UAVs), i.e. a time-dependent robust trajectory generation and control algorithm, this work addresses the problem of optimal path control based on flight sensor data output that may be unreliable due to noise in data acquisition and/or transmission under certain circumstances. Although several filtering methods, such as the Kalman-Bucy filter or Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) control, are available, the utter complexity of the control system, together with the robustness and reliability required of such a system on a UAV for airworthiness-certifiable autonomous flight, required the development of a proper robust filter for a nonlinear system, as a way to further mitigate error propagation to the control system and improve its performance. As such, a nonlinear algorithm based upon the LQG/LTR, validated through computational simulation testing, is proposed in this paper.
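
For context, a linear LQG design with the loop-transfer-recovery tuning knob might be sketched as follows, assuming the python-control package; the system matrices and noise covariances are invented placeholders, and the paper's nonlinear extension is not reproduced.

```python
# Linear LQG/LTR-style design sketch; A, B, C and the covariances are
# hypothetical values, not a real aircraft model.
import control
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])     # toy linearized dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Q = np.diag([10.0, 1.0])                     # state weights
R = np.array([[1.0]])                        # control weight
K, _, _ = control.lqr(A, B, Q, R)            # optimal state-feedback gain

G = np.eye(2)                                # process-noise input matrix
QN = 0.1 * np.eye(2)                         # process-noise covariance
RN = np.array([[0.01]])                      # measurement-noise covariance
L, _, _ = control.lqe(A, G, C, QN, RN)       # Kalman estimator gain

# LTR idea: inflate the fictitious process noise, QN -> QN + q * B @ B.T,
# with q large, so the loop transfer function recovers the LQR loop shape.
for q in (1, 100, 10000):
    Lq, _, _ = control.lqe(A, G, C, QN + q * B @ B.T, RN)
    print(f"q={q}: estimator gain\n{Lq}")
```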

Keywords: autonomous flight, LQG/LTR, nonlinear state estimator, robust flight control

Procedia PDF Downloads 112
18475 Potentials of Additive Manufacturing: An Approach to Increase the Flexibility of Production Systems

Authors: A. Luft, S. Bremen, N. Balc

Abstract:

The task of flexibility planning and design, just like factory planning, is to create the long-term systemic framework that constitutes the restriction for short-term operational management. This is a strategic challenge since, due to the decision-defect (ill-structured) character of the underlying flexibility problem, multiple types of flexibility need to be considered over the course of various scenarios, production programs, and production system configurations. In this context, an evaluation model has been developed that integrates both conventional and additive resources at a basic task level and allows the quantification of flexibility enhancement in terms of mix and volume flexibility, complexity reduction, and machine capacity. The model helps companies decide, in early decision-making processes, on the potential gains of implementing additive manufacturing technologies at a strategic level. For companies, it is essential to consider both additive and conventional manufacturing beyond pure unit costs, and to achieve an integrative view of manufacturing that incorporates both additive and conventional manufacturing resources and quantifies their potential with regard to flexibility and manufacturing complexity. This also requires a structured process for strategic production system design that spans the design of various scenarios and allows for multi-dimensional and comparative analysis. A corresponding guideline for the planning of additive resources at a strategic level is laid out in this paper.

Keywords: additive manufacturing, production system design, flexibility enhancement, strategic guideline

Procedia PDF Downloads 98
18474 A Hyperexponential Approximation to Finite-Time and Infinite-Time Ruin Probabilities of Compound Poisson Processes

Authors: Amir T. Payandeh Najafabadi

Abstract:

This article considers the problem of evaluating the infinite-time (or finite-time) ruin probability under a given compound Poisson surplus process by approximating the claim size distribution with a finite mixture of exponentials, i.e., a hyperexponential distribution. It restates the infinite-time (or finite-time) ruin probability as a solvable ordinary differential equation (or partial differential equation). The application of our findings is illustrated through a simulation study.
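
As background, the standard definitions involved (stated for context, not quoted from the paper) are the compound Poisson surplus process, the ruin probability, and the hyperexponential claim density:

```latex
% Surplus with initial capital u, premium rate c, Poisson arrivals N(t)
% of rate \lambda, and i.i.d. claim sizes X_i:
U(t) = u + ct - \sum_{i=1}^{N(t)} X_i,
\qquad
\psi(u) = \Pr\!\Big(\inf_{t \ge 0} U(t) < 0\Big)
% Hyperexponential (finite mixture of exponentials) claim density used to
% approximate the true claim size distribution:
f_X(x) = \sum_{k=1}^{n} p_k\,\beta_k e^{-\beta_k x},
\qquad p_k \ge 0,\; \sum_{k=1}^{n} p_k = 1
% With such claims, \psi(u) reduces to a finite combination of exponential
% terms, which is what makes the resulting ODE solvable.
```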

Keywords: ruin probability, compound Poisson processes, mixture exponential (hyperexponential) distribution, heavy-tailed distributions

Procedia PDF Downloads 316
18473 Multilayer Neural Network and Fuzzy Logic Based Software Quality Prediction

Authors: Sadaf Sahar, Usman Qamar, Sadaf Ayaz

Abstract:

In the software development lifecycle, quality prediction techniques are of prime importance for minimizing future design errors and expensive maintenance. Many techniques have been proposed by various researchers, but with the increasing complexity of the software lifecycle model, it is crucial to develop a flexible system which can cater for the factors that have an impact on the quality of the end product. These factors include properties of the software development process and of the product, along with its operating conditions. In this paper, a neural network (perceptron) based software quality prediction technique is proposed. Using this technique, stakeholders can predict the quality of the resulting software during the early phases of the lifecycle, saving time and resources on the future elimination of design errors and costly maintenance. This technique can be brought into practical use through successful training.
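
A toy sketch of a perceptron-style quality predictor, assuming scikit-learn, is given below; the module metrics and labels are synthetic stand-ins, not the paper's data or network architecture.

```python
# Perceptron-style quality prediction on synthetic per-module metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical per-module metrics: [cyclomatic complexity, churn, coverage]
X = rng.uniform(size=(500, 3)) * [50, 200, 1.0]
# Synthetic rule standing in for ground truth: complex, churny, poorly
# covered modules are labelled low quality (1).
y = ((X[:, 0] > 30) & (X[:, 1] > 100) & (X[:, 2] < 0.5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```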

Keywords: software quality, fuzzy logic, perceptron, prediction

Procedia PDF Downloads 290
18472 Real-Time Visualization Using GPU-Accelerated Filtering of LiDAR Data

Authors: Sašo Pečnik, Borut Žalik

Abstract:

This paper presents a real-time visualization technique and the filtering of classified LiDAR point clouds. The visualization is capable of displaying filtered information organized in layers by the classification attribute saved within LiDAR data sets. We explain the data structure and data management used, which enable the real-time presentation of layered LiDAR data. Real-time visualization is achieved with level-of-detail (LOD) optimization based on the distance from the observer, without loss of quality. The filtering process is done in two steps, is executed entirely on the GPU, and is implemented using programmable shaders.

Keywords: filtering, graphics, level-of-details, LiDAR, real-time visualization

Procedia PDF Downloads 277
18471 Construction Time-Cost Trade-Off Analysis Using Fuzzy Set Theory

Authors: V. S. S. Kumar, B. Vikram, G. C. S. Reddy

Abstract:

Time and cost are the two critical objectives of construction project management and are not independent but intricately related. The trade-off between project duration and cost is extensively discussed during project scheduling because of its practical relevance. Generally, when the project duration is compressed, the project calls for an increase in labor and more productive equipment, which increases the cost. Thus, construction time-cost optimization is defined as a process to identify suitable construction activities for speeding up in order to attain the best possible savings in both time and cost. As there is a hidden trade-off relationship between project time and cost, it might be difficult to predict whether the total cost will increase or decrease as a result of compressing the schedule. Different combinations of duration and cost for the activities associated with the project determine the best set in the time-cost optimization. Therefore, contractors need to select the best combination of time and cost to perform each activity, all of which ultimately determine the project duration and cost. In this paper, fuzzy set theory is used to model the uncertainties in the project environment for time-cost trade-off analysis.
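
As an illustration of fuzzy time-cost arithmetic, the following sketch evaluates a single activity's crashing option with triangular fuzzy numbers and alpha-cuts; the figures are invented and the paper's full trade-off procedure is not reproduced.

```python
# Interval arithmetic on alpha-cuts of triangular fuzzy numbers (a, m, b).
import numpy as np

def alpha_cut(tfn, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tfn
    return np.array([a + alpha * (m - a), b - alpha * (b - m)])

# Crash option for one activity: duration drops, cost rises (all fuzzy).
normal_dur, crash_dur = (10, 12, 15), (7, 8, 10)     # days
normal_cost, crash_cost = (4, 5, 7), (6, 8, 11)      # $1000s

for alpha in (0.0, 0.5, 1.0):
    # Interval subtraction: [a1,b1] - [a2,b2] = [a1-b2, b1-a2]
    d_save = alpha_cut(normal_dur, alpha) - alpha_cut(crash_dur, alpha)[::-1]
    c_extra = alpha_cut(crash_cost, alpha) - alpha_cut(normal_cost, alpha)[::-1]
    print(f"alpha={alpha}: time saved in {d_save} days, "
          f"extra cost in {c_extra} k$")
```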

Keywords: fuzzy sets, uncertainty, qualitative factors, decision making

Procedia PDF Downloads 625
18470 Distributed Perceptually Important Point Identification for Time Series Data Mining

Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung

Abstract:

In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally devised for financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength is in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contribute a major proportion, especially data generated by sensors in the Internet of Things (IoT) environment. Given the nature of PIP identification and the successful cases, it is worth further exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification is always considered a limitation when dealing with 'big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck encountered when running the PIP identification process on a standalone computer, and the distributed versions obtain an improvement in terms of speed.
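
For readers unfamiliar with PIP, a minimal single-machine sketch of the identification process is shown below (the SB-Tree and the paper's distributed variants are not reproduced): repeatedly add the point with the largest vertical distance to the line joining its two neighbouring, already-selected points.

```python
# Minimal PIP identification with the vertical-distance criterion.
import numpy as np

def vertical_distance(xs, ys, i, j, k):
    """Vertical distance of point k from the line through points i and j."""
    slope = (ys[j] - ys[i]) / (xs[j] - xs[i])
    return abs(ys[k] - (ys[i] + slope * (xs[k] - xs[i])))

def pip(ys, n_pips):
    xs = np.arange(len(ys), dtype=float)
    selected = [0, len(ys) - 1]               # endpoints are always PIPs
    while len(selected) < n_pips:
        best, best_d = None, -1.0
        pts = sorted(selected)
        for i, j in zip(pts, pts[1:]):        # scan each gap between adjacent PIPs
            for k in range(i + 1, j):
                d = vertical_distance(xs, ys, i, j, k)
                if d > best_d:
                    best, best_d = k, d
        if best is None:
            break                             # no candidate points left
        selected.append(best)
    return sorted(selected)

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * rng.normal(size=200)
print(pip(series, 7))                         # indices of 7 salient points
```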

Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining

Procedia PDF Downloads 403
18469 Pushing the Boundary of Parallel Tractability for Ontology Materialization via Boolean Circuits

Authors: Zhangquan Zhou, Guilin Qi

Abstract:

Materialization is an important reasoning service for applications built on the Web Ontology Language (OWL). To make materialization efficient in practice, current research focuses on deciding the tractability of an ontology language and on designing parallel reasoning algorithms. However, some well-known large-scale ontologies, such as YAGO, have been shown to perform well under parallel reasoning even though they are expressed in ontology languages that are not parallelly tractable, i.e., languages for which reasoning is inherently sequential in the worst case. This motivates us to study the parallel tractability of ontology materialization from a theoretical perspective. That is, we aim to identify the ontologies for which materialization is parallelly tractable, i.e., in the complexity class NC. Since NC is defined in terms of Boolean circuits, which are widely used to investigate parallel computing problems, we first transform the problem of materialization into the evaluation of Boolean circuits, and then study the problem of parallel tractability based on circuits. In this work, we focus on datalog-rewritable ontology languages. We use Boolean circuits to identify two classes of datalog-rewritable ontologies (called parallelly tractable classes) such that materialization over them is parallelly tractable. We further investigate the parallel tractability of the materialization of a datalog-rewritable OWL fragment, DHL (Description Horn Logic). Based on the above results, we analyze real-world datasets and show that many ontologies expressed in DHL belong to the parallelly tractable classes.

Keywords: ontology materialization, parallel reasoning, datalog, Boolean circuit

Procedia PDF Downloads 246
18468 A Corpus-Based Study on the Lexical, Syntactic and Sequential Features across Interpreting Types

Authors: Qianxi Lv, Junying Liang

Abstract:

Among the various modes of interpreting, simultaneous interpreting (SI) is regarded as a 'complex' and 'extreme condition' among cognitive tasks, while consecutive interpreters (CI) do not have to share processing capacity between concurrent tasks. Given that SI exerts great cognitive demand, it makes sense to posit that the output of SI may be more compromised than that of CI in its linguistic features. The bulk of the research has stressed the varying cognitive demands and processes involved in different modes of interpreting; however, related empirical research is sparse. In keeping with our interest in investigating the quantitative linguistic factors discriminating between SI and CI, the current study examines the potential lexical simplification, syntactic complexity and sequential organization mechanisms with a self-built intermodal corpus of transcribed simultaneous and consecutive interpretations, translated speech and original speech texts, totalling 321,960 running words. The lexical features are extracted in terms of lexical density, list head coverage, hapax legomena, and type-token ratio, as well as core vocabulary percentage. Dependency distance, an index of syntactic complexity reflective of processing demand, is employed. The frequency motif, a sequential unit not bound to grammar, is also used to visualize the local distribution of functions in the interpreting output. While SI is generally regarded as multitasking under high cognitive load, our findings show that CI may tax cognitive resources differently, and perhaps more heavily, and hence yields more lexically and syntactically simplified output. In addition, the sequential features show that SI and CI organize the sequences from the source text into the output in different ways so as to minimize the cognitive load. We interpret these results within a framework in which cognitive demand falls on both the maintenance and the coordination components of working memory. On the one hand, the information maintained in CI is inherently larger in volume compared to SI. On the other hand, time constraints directly influence the sentence reformulation process. The temporal pressure from the input in SI lets interpreters keep only a small chunk of information in the focus of attention; thus, SI interpreters usually produce the output by largely retaining the source structure, so as to release information from working memory immediately after it is formulated in the target language. Conversely, CI interpreters receive at least a few sentences before reformulation, when they are more self-paced, and may thus tend to retain and generate the information in a way that lessens the demand. In other words, interpreters cope with the high demand in the reformulation phase of CI by generating output with densely distributed function words, more content words of higher frequency values and fewer variations, simpler structures and more frequently used language sequences. We consequently propose a revised effort model based on these results for a better illustration of the cognitive demand during both interpreting types.
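
Two of the simpler lexical indices mentioned above can be sketched as follows; the tag set and the content-word definition are simplifications, and the toy input is not from the corpus.

```python
# Type-token ratio, lexical density and hapax legomena on toy input.
from collections import Counter

CONTENT_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}   # simplified content classes

def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

def lexical_density(tagged):
    """Share of content words, given (token, pos) pairs."""
    return sum(1 for _, pos in tagged if pos in CONTENT_TAGS) / len(tagged)

tokens = "the speaker said that the delegate said nothing new".split()
tagged = [("the", "DET"), ("speaker", "NOUN"), ("said", "VERB"),
          ("that", "SCONJ"), ("the", "DET"), ("delegate", "NOUN"),
          ("said", "VERB"), ("nothing", "PRON"), ("new", "ADJ")]

print("TTR:", round(type_token_ratio(tokens), 3))
print("lexical density:", round(lexical_density(tagged), 3))
print("hapax legomena:", [w for w, c in Counter(tokens).items() if c == 1])
```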

Keywords: cognitive demand, corpus-based, dependency distance, frequency motif, interpreting types, lexical simplification, sequential units distribution, syntactic complexity

Procedia PDF Downloads 146
18467 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

There exists some time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate between efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to '1'. This problem can be resolved by a super-efficiency model; however, a super-efficiency model sometimes causes an infeasibility problem. This paper suggests an output-oriented super-efficiency model for efficiency evaluation under the consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
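
For context, an output-oriented super-efficiency score in the standard CCR envelopment form (without the time-lag treatment proposed here) can be computed as in the following sketch, assuming scipy; the input-output data are invented.

```python
# Output-oriented super-efficiency DEA: the evaluated DMU is excluded from
# the reference set, so efficient DMUs can obtain phi < 1 (super-efficient);
# reporting 1/phi gives scores above 1 for better performers.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0]])   # inputs,  rows = DMUs
Y = np.array([[5.0], [4.0], [6.0]])                   # outputs, rows = DMUs

def super_efficiency(j0):
    n, m, s = X.shape[0], X.shape[1], Y.shape[1]
    ref = [j for j in range(n) if j != j0]            # exclude DMU under evaluation
    c = np.zeros(1 + len(ref))                        # variables: [phi, lambdas]
    c[0] = -1.0                                       # maximize phi
    A_ub, b_ub = [], []
    for i in range(m):                                # sum lam*x_ij <= x_i,j0
        A_ub.append([0.0] + [X[j, i] for j in ref])
        b_ub.append(X[j0, i])
    for r in range(s):                                # phi*y_r,j0 - sum lam*y_rj <= 0
        A_ub.append([Y[j0, r]] + [-Y[j, r] for j in ref])
        b_ub.append(0.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(ref)))
    return 1.0 / res.x[0] if res.success else float("nan")

for j in range(3):
    print(f"DMU {j}: super-efficiency {super_efficiency(j):.3f}")
```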

Keywords: DEA, super-efficiency, time lag, research activities

Procedia PDF Downloads 628
18466 Unstructured Learning: Development of Free Form Construction in Waldorf and Normative Preschools

Authors: Salam Kodsi

Abstract:

In this research, we focus on constructive play and examine its components in the context of two different educational approaches: Waldorf and normative preschools. When they are free to choose, construction is one of the forms of play most favored by children, and its short-term and long-term cognitive contributions are apparent in various areas of development. The lack of empirical studies about play in Waldorf schools, which bears on the possibility of this incidental learning, inspired the need to enrich the body of existing knowledge. Ninety children (4-6 years old) from four preschools (two normative, two Waldorf) in a small, homogeneous city participated. Naturalistic observations documented the time frame, physical space, and construction materials related to free-form building; the construction processes among focal, representative children; and their products. The study's main finding with respect to the construction output points to a connection between educational approach and level of construction sophistication: higher levels of sophistication were found at the Waldorf preschools than at the mainstream preschools. This finding emerged from differences in the level of sophistication among the older children in the two types of preschools, while practically no differences emerged among the younger children. The discussion of the findings considers the differences between the play environments in terms of time, physical space, and construction materials. The construction processes were characterized according to the stages of the design model, and the construction output according to the dimensions of the sophistication scale and the connections between approach, age, gender, and sophistication level.

Keywords: constructive play, preschool, design process model, complexity

Procedia PDF Downloads 90
18465 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect

Authors: Yanshuang Zhang, Byungho Jeong

Abstract:

In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. The Multi-period Input (MpI) and Multi-period Output (MpO) models are integrated models that calculate simple efficiency while considering the time lag effect. However, these models cannot discriminate between efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to '1'; that is, efficient DMUs cannot be discriminated because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under the consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.

Keywords: DEA, super-efficiency, time lag, multi-period input

Procedia PDF Downloads 448
18464 "IS Cybernetics": An Idea to Base the International System Theory upon the General System Theory and Cybernetics

Authors: Petra Suchovska

Abstract:

The spirit of post-modernity remains chaotic and obscure. Geopolitical rivalries rage at ever more extreme levels, and the ability of the intellectual community to explain the entropy of global affairs has been diminishing. The Western-led idea of globalisation imposed upon the world no longer seems to promise a bright future for human progress, and its architects are losing much of their global control as strong non-Western cultural entities develop new forms of post-modern establishments. The overall growing cultural misunderstanding and mistrust are expressions of political impotence in dealing with the inner contradictions of the contemporary phenomena (capitalism, economic globalisation) that embrace global society. The drivers and effects of global restructuring must be understood in the context of systems and principles that reflect the true complexity of society. The purpose of this paper is to set out some ideas about how cybernetics can contribute to understanding the international system structure and to analysing possible world futures. 'IS cybernetics' would apply systems thinking and cybernetic principles in IR in order to analyse and handle the complexity of social phenomena from a global perspective. It would be, for now, a subfield of IR concerned with applying theories and methodologies from cybernetics and the systems sciences, offering concepts and tools for addressing problems holistically and bringing order to the complex relations between the disciplines that IR touches upon. One of its tasks would be to map, measure, tackle and find the principles of the dynamics and structure of the social forces that influence human behaviour and consequently cause the political, technological and economic structural reordering that forms and reforms the international system. The task of 'IS cyberneticists' would be to understand the control mechanisms that govern the operation of international society (and its interconnected sub-systems) and only then to suggest better ways to operate these mechanisms on sublevels such as the cultural, political, technological and religious. 'IS cybernetics' would also strive to capture the mechanisms of social-structural change over time, which would open space for syntheses between IR and historical sociology. With the cybernetic distinction between first-order studies of observed systems and second-order studies of observing systems, IS cybernetics would also provide a unifying epistemological, methodological and conceptual framework for multilateralism and multiple modernities theory.

Keywords: cybernetics, historical sociology, international system, systems theory

Procedia PDF Downloads 206
18463 Wireless FPGA-Based Motion Controller Design by Implementing 3-Axis Linear Trajectory

Authors: Kiana Zeighami, Morteza Ozlati Moghadam

Abstract:

Designing a high-accuracy, high-precision motion controller is one of the important issues in today's industry. Effective solutions are available in the industry, but the real-time performance, smoothness and accuracy of the movement can be further improved. This paper discusses a complete solution for carrying out the movement of three stepper motors in three dimensions. The objective is to provide a method to design a fully integrated System-on-Chip (SoC) based motion controller that reduces the cost and complexity of production by incorporating a Field Programmable Gate Array (FPGA) into the design. In the proposed method, the FPGA receives its commands from a host computer via wireless internet communication and calculates the motion trajectory for the three axes. A profile generator module is designed to realize the interpolation algorithm by translating the position data into real-time pulses. This paper discusses an approach to implementing the linear interpolation algorithm, since it is one of the fundamentals of robot movement and is highly applicable in motion control industries. Along with the full-profile trajectory, a triangular drive is implemented to eliminate error over small distances. To integrate the parallelism and real-time performance of the FPGA with the power of a Central Processing Unit (CPU) for executing complex and sequential algorithms, the NIOS II soft-core processor was added to the design. This paper presents different operating modes, such as absolute positioning, relative positioning, reset and velocity modes, to fulfill user requirements. The proposed approach was evaluated by designing a custom-made FPGA board along with a mechanical structure. As a result, precise and smooth movement of the stepper motors was observed, which proved the effectiveness of this approach.
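
A host-side sketch of the 3-axis linear interpolation idea, in the Bresenham/DDA family that such profile generators typically implement, is given below; the step format is illustrative, not the board's actual interface.

```python
# 3-axis Bresenham-style step generator for a straight-line segment:
# the dominant axis pulses every tick, the minor axes pulse whenever their
# error accumulators overflow, so all axes finish together on a straight line.
def linear_steps(dx, dy, dz):
    """Yield (step_x, step_y, step_z) pulses for a straight segment."""
    deltas = [dx, dy, dz]
    major = max(range(3), key=lambda a: abs(deltas[a]))   # dominant axis
    n = abs(deltas[major])
    err = [n // 2] * 3                                    # per-axis accumulators
    for _ in range(n):
        pulse = [0, 0, 0]
        for a in range(3):
            err[a] += abs(deltas[a])
            if err[a] >= n:                               # overflow -> emit a pulse
                err[a] -= n
                pulse[a] = 1 if deltas[a] > 0 else -1
        yield tuple(pulse)

# A short diagonal move: 10 steps in X, 4 in Y, -2 in Z.
trace = list(linear_steps(10, 4, -2))
assert [sum(p[i] for p in trace) for i in range(3)] == [10, 4, -2]
print(trace[:5])
```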

Keywords: 3-axis linear interpolation, FPGA, motion controller, micro-stepping

Procedia PDF Downloads 188
18462 Project Knowledge Harvesting: The Case of Improving Project Performance through Project Knowledge Sharing Framework

Authors: Eng Rima Al-Awadhi, Abdul Jaleel Tharayil

Abstract:

In a project-centric organization like KOC, managing project knowledge is of critical importance to the success of the project and the organization. Due to the very nature and complexity involved, each project engagement generates many 'learnings' that need to be factored in when new projects are initiated, so as to avoid repeating the same mistakes. But many a time these learnings are localized and remain 'tacit knowledge', leading to scope rework, schedule overruns, adjustment orders, concession requests and claims. As KOC follows an asset-based organization structure, with a multi-cultural and multi-ethnic workforce, and a large share of the work is carried out through complex, long-term project engagements, the diffusion of learnings across assets, while dealing with the natural entropy of the organization, is of great significance. Considering the relatively high number of mega projects, it is important that the issues raised during the project life cycle are centrally harvested and analyzed, and that the learnings from these issues are shared, absorbed and in turn utilized to enhance and refine existing processes and practices, leading to improved project performance. One of the many factors contributing to the successful completion of a project on time is a reduction in the number of variations or concessions triggered during the project life cycle. The project-process-integrated knowledge sharing framework presented here covers the knowledge harvesting methodology adopted, the challenges faced, the learnings acquired and their impact on project performance. The framework facilitates the proactive identification of issues that may have an impact on the overall quality of the project, and improves performance.

Keywords: knowledge harvesting, project integrated knowledge sharing, performance improvement, knowledge management, lessons learned

Procedia PDF Downloads 365
18461 Time Compression in Engineer-to-Order Industry: A Case Study of a Norwegian Shipbuilding Industry

Authors: Tarek Fatouh, Chehab Elbelehy, Alaa Abdelsalam, Eman Elakkad, Alaa Abdelshafie

Abstract:

This paper explores the possibility of time compression in engineer-to-order production networks. A case study research method is used in a Norwegian shipbuilding project, implementing the value stream mapping lean tool with total cycle time as the unit of analysis. The analysis demonstrates the time deviations of the planned tasks in one of the processes of the shipbuilding project. The authors then developed a future-state map by removing time wastes from the value stream process.

Keywords: engineer to order, total cycle time, value stream mapping, shipbuilding

Procedia PDF Downloads 134
18460 Urban Networks as Model of Sustainable Design

Authors: Agryzkov Taras, Oliver Jose L., Tortosa Leandro, Vicent Jose

Abstract:

This paper aims to demonstrate how the consideration of cities as a special kind of complex network, called an urban network, may lead to the use of design tools coming from network theories, which in fact results in a quite sustainable approach. There is no doubt that the irruption into contemporary thought of Gaia as an essential political agent proposes a narrative that has been extended to the field of creative processes, in which, of course, the activity of urban design is found. The rationalist paradigm is put into crisis, and from the so-called sciences of complexity its way of describing reality and of intervening in it is questioned. Thus, a new way of understanding reality emerges, which has to do with a redefinition of the human being's own place in what is now understood as a delicate and complex network. In such systems of connected and interdependent elements, the influences they generate give rise to emergent properties and behaviors for the whole that, studied individually, would not make sense. We believe that the design of cities cannot remain oblivious to these principles, and this research therefore aims to demonstrate their potential for decision-making in the urban environment. We present one example of action in the field of public mobility, another in the design of commercial areas, and a third in the redensification of sprawl areas, in which different aspects of network theory have been applied to change the urban design. Although these actions have been developed in European cities, more specifically in the Mediterranean area of Spain, the reflections and tools could have a broader scope of action.

Keywords: graphs, complexity sciences, urban networks, urban design

Procedia PDF Downloads 125
18459 Time and Kinematics of Moving Bodies

Authors: Muhammad Omer Farooq Saeed

Abstract:

The purpose of this proposal is to find out what time actually is, and to understand the natural behavior of time and light corresponding to the motion of bodies at relatively high speeds. The utmost concern of the paper is to address possible demerits in the equations of relativity, thereby providing some valuable extensions to those equations and concepts. The idea develops the most basic conception of the relative motion of a body with respect to space, and a real understanding of time and of the variation of the body's energy in different frames of reference. The results develop a new understanding of time, relative motion and energy, along with some extensions to the equations of special relativity, most importantly the time dilation and the mass-energy relationship, intended to explain all frames of a body in one go. The proposal also raises serious questions about the validity of the 'Principle of Equivalence', on which general relativity is based, most importantly a case of light bending that goes against the theory's own governing concepts of space-time. The results also predict the existence of a new field that explains how and why bodies acquire energy in space-time; this field explains the production of gravitational waves based on time. All in all, this proposal challenges the formulas and conceptions of special and general relativity, respectively.

Keywords: time, relative motion, energy, speed, frame of reference, photon, curvature, space-time, time-differentials

Procedia PDF Downloads 46
18458 Comparison of Different Methods to Produce Fuzzy Tolerance Relations for Rainfall Data Classification in the Region of Central Greece

Authors: N. Samarinas, C. Evangelides, C. Vrekos

Abstract:

The aim of this paper is to compare three different methods for producing fuzzy tolerance relations for rainfall data classification: the correlation coefficient, cosine amplitude, and max-min methods. The data were obtained from seven rainfall stations in the region of central Greece and refer to 20-year time series of average monthly rainfall depth. The three methods were used to express these data as fuzzy relations. Each resulting fuzzy tolerance relation is then reformed into an equivalence relation by max-min composition. From the equivalence relation, the rainfall stations were categorized and classified according to the degree of confidence. The classification shows the similarities among the rainfall stations: stations with high similarity can be used interchangeably in water resource management scenarios, or to augment data from one station to another. Due to the complexity of the calculations, it is important to find out which of the methods is computationally simpler and needs fewer compositions in order to give reliable results.
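
A sketch of the cosine-amplitude method followed by max-min transitive closure, assuming numpy, is shown below; the station data are random placeholders, not the Greek rainfall series.

```python
# Cosine-amplitude fuzzy tolerance relation and its max-min transitive
# closure (an equivalence relation), then an alpha-level classification.
import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(20, 120, size=(7, 12))        # 7 stations x 12 monthly means

def cosine_amplitude(data):
    """r_ij = |x_i . x_j| / sqrt((x_i . x_i)(x_j . x_j))."""
    dots = data @ data.T
    norms = np.sqrt(np.diag(dots))
    return np.abs(dots) / np.outer(norms, norms)

def maxmin_closure(R):
    """Compose R with itself (max-min) until transitive."""
    while True:
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        if np.allclose(R2, R):
            return R
        R = R2

R = cosine_amplitude(data)                       # reflexive, symmetric (tolerance)
E = maxmin_closure(R)                            # adds transitivity (equivalence)
alpha = 0.99                                     # degree-of-confidence cut
print((E >= alpha).astype(int))                  # station grouping at level alpha
```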

Keywords: classification, fuzzy logic, tolerance relations, rainfall data

Procedia PDF Downloads 288