Search results for: Kolmogorov complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1665

1635 Team Cognitive Heterogeneity and Strategic Decision-Making Flexibility: The Role of Transactive Memory System and Task Complexity

Authors: Rui Xing, Baolin Ye, Nan Zhou, Guohong Wang

Abstract:

Drawing upon a cognitive interaction perspective, this study explores the relationship between team cognitive heterogeneity and team strategic decision-making flexibility, treating the transactive memory system as a mediator and task complexity as a moderator. The hypotheses were tested with linear regression models using data gathered from 67 strategic decision-making teams in the new-energy vehicle industry. Team cognitive heterogeneity is found to have a positive impact on strategic decision-making flexibility through the mediation of the specialization and coordination dimensions of the transactive memory system, an effect that is positively moderated by task complexity.

Keywords: strategic decision-making flexibility, team cognitive heterogeneity, transactive memory system, task complexity

Procedia PDF Downloads 43
1634 Variable Tree Structure QR Decomposition-M Algorithm (QRD-M) in Multiple Input Multiple Output-Orthogonal Frequency Division Multiplexing (MIMO-OFDM) Systems

Authors: Jae-Hyun Ro, Jong-Kwang Kim, Chang-Hee Kang, Hyoung-Kyu Song

Abstract:

In multiple input multiple output-orthogonal frequency division multiplexing (MIMO-OFDM) systems, the QR decomposition-M algorithm (QRD-M) offers good, though suboptimal, error performance. However, QRD-M still has high complexity due to the many calculations required at each layer of its tree structure. To reduce this complexity, the proposed QRD-M modifies the existing tree structure by eliminating unnecessary candidates at almost every layer. Candidates are eliminated by discarding those whose accumulated squared Euclidean distance exceeds a calculated threshold. Simulation results show that the proposed QRD-M achieves the same bit error rate (BER) performance as the conventional QRD-M with lower complexity.
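
The pruning rule described above can be made concrete with a short sketch. The following Python fragment, a minimal illustration rather than the authors' implementation, performs a QRD-M tree search that keeps at most M survivors per layer and additionally discards candidates whose accumulated squared Euclidean distance exceeds a threshold; the QPSK constellation, the threshold value, and the toy 2x2 channel are illustrative assumptions.

```python
import numpy as np

def qrdm_detect(H, y, constellation, M=4, threshold=None):
    """QRD-M tree search with threshold-based candidate pruning (sketch)."""
    Q, R = np.linalg.qr(H)
    z = Q.conj().T @ y
    n = H.shape[1]
    survivors = [(0.0, [])]  # (accumulated metric, symbols for layers n-1..i+1)
    for i in range(n - 1, -1, -1):
        candidates = []
        for ped, syms in survivors:
            # Interference from already-detected layers j > i.
            interf = sum(R[i, j] * syms[n - 1 - j] for j in range(i + 1, n))
            for s in constellation:
                metric = ped + abs(z[i] - interf - R[i, i] * s) ** 2
                candidates.append((metric, syms + [s]))
        candidates.sort(key=lambda c: c[0])
        survivors = candidates[:M]                 # conventional QRD-M step
        if threshold is not None:                  # extra pruning from the abstract
            survivors = [c for c in survivors if c[0] <= threshold] or survivors[:1]
    best = min(survivors, key=lambda c: c[0])[1]
    return np.array(best[::-1])                    # reorder to x_0..x_{n-1}

# Toy 2x2 QPSK example; at this noise level the transmitted vector
# should normally be recovered.
rng = np.random.default_rng(0)
qpsk = [(a + 1j * b) / np.sqrt(2) for a in (-1, 1) for b in (-1, 1)]
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
x = np.array([qpsk[1], qpsk[2]])
y = H @ x + 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
print(qrdm_detect(H, y, qpsk, M=4, threshold=1.0))
```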

Keywords: complexity, MIMO-OFDM, QRD-M, squared Euclidean distance

Procedia PDF Downloads 305
1633 The Effects of Anthropomorphism on Complex Technological Innovations

Authors: Chyi Jaw

Abstract:

Many companies have suffered from consumers' rejection of complex new products and experienced huge losses in the market. Marketers need to understand what barriers to new technology adoption, or to positive product attitudes, may exist in the market. This research examines the effects of techno-complexity and anthropomorphism on consumer psychology and product attitude when new technologies are introduced to the market. The study conducted a pretest and a 2 x 2 between-subjects experiment, with four simulated experimental web pages constructed to collect data. The empirical analysis tested the moderation-mediation relationships among techno-complexity, technology anxiety, ability, and product attitude. The results indicate that (1) the techno-complexity of an innovation is negatively related to consumers' product attitude, and it also increases consumers' technology anxiety and reduces their self-ability perception; (2) consumers' technology anxiety and ability perception towards an innovation completely mediate the relationship between techno-complexity and product attitude; and (3) product anthropomorphism is positively related to consumers' attitude towards new technology and significantly moderates the effect of techno-complexity in the hypothesized model. The study presents the moderation-mediation model and the effects of an anthropomorphized strategy, describing how managers can better predict and influence the diffusion of complex technological innovations.

Keywords: ability, anthropomorphic effect, innovation, techno-complexity, technology anxiety

Procedia PDF Downloads 168
1632 Leadership's Controlling via Complexity Investigation in Crisis Scenarios

Authors: Jiří Barta, Oldřich Svoboda, Jiří F. Urbánek

Abstract:

This paper discusses two sides of the same coin in crisis scenario dynamics. On one side is the negative role of subsidiary scenario branches: unduly chaotic atomizing weakens a scenario's compactness, and the many cases of interactive feedback increase its complexity. This negative role is reflected in the complexity of the use cases and weakens leader compliancy, which provides something like a 'readiness for controlling capabilities provision'. A leader's dissatisfaction means zero compliancy, yet in fact it acts as a 'crossbar' (an interface, in effect) between the planning and executing use cases. On the other side of the coin, the advantage of rich scenario embranchment can be seen in its support of response awareness, readiness, preparedness, adaptability, creativity and flexibility; here, rich scenario embranchment contributes to the steadiness and resistance of the scenario's mission actors. All of this will be presented in live PowerPoint 'Blazons', modelled via DYVELOP (Dynamic Vector Logistics of Processes) at the conference.

Keywords: leadership, controlling, complexity, DYVELOP, scenarios

Procedia PDF Downloads 376
1631 Handling Complexity of a Complex System Design: Paradigm, Formalism and Transformations

Authors: Hycham Aboutaleb, Bruno Monsuez

Abstract:

The complexity of current systems has reached a degree that requires addressing conception and design issues while taking into account environmental, operational, social, legal, and financial aspects. One of the main challenges is therefore the way complex systems are specified and designed. The exponentially growing effort, cost, and time invested in the modeling phase of complex systems emphasize the need for a paradigm, a framework, and an environment to handle model complexity. This requires understanding the expectations of the model's human users and their limits. This paper presents a generic framework for designing complex systems, highlights the requirements a system model needs to fulfill to meet human user expectations, and suggests a graph-based formalism for modeling complex systems. Finally, a set of transformations is defined to handle the model complexity.

Keywords: higraph-based formalism, system engineering paradigm, modeling requirements, graph-based transformations

Procedia PDF Downloads 368
1630 Effects of Topic Familiarity on Linguistic Aspects in EFL Learners’ Writing Performance

Authors: Jeong-Won Lee, Kyeong-Ok Yoon

Abstract:

The current study investigated the effects of topic familiarity and language proficiency on linguistic aspects (lexical complexity, syntactic complexity, accuracy, and fluency) in EFL learners' argumentative essays. For the study, 64 college students were asked to write argumentative essays on two topics (driving and smoking) chosen with topic familiarity in mind. The students were divided into two language proficiency groups (high-level and intermediate) according to their English writing proficiency. The findings are as follows: 1) the participants exhibited lower levels of lexical and syntactic complexity as well as accuracy when writing on unfamiliar topics; and 2) high-level learners used a wider range of vocabulary and longer, more complex structures, and produced more accurate and lengthier texts than their intermediate peers. Discussion and pedagogical implications for the instruction of writing classes in EFL contexts are addressed.

Keywords: topic familiarity, complexity, accuracy, fluency

Procedia PDF Downloads 20
1629 A Fast Convergence Subband BSS Structure

Authors: Salah Al-Din I. Badran, Samad Ahmadi, Ismail Shahin

Abstract:

A blind source separation (BSS) method using a non-uniform filter bank and a novel normalisation is proposed. The method provides reduced computational complexity and increased convergence speed compared to the full-band algorithm. Adaptive sub-band schemes have recently been recommended to solve two problems: reducing the computational complexity and increasing the convergence speed of adaptive algorithms for correlated input signals. In this work, the reduction in computational complexity is achieved by using adaptive filters of lower order than the full-band adaptive filters, operating at a sampling rate lower than that of the input signal. The signals decomposed by the analysis filter bank are less correlated in each sub-band than the full-bandwidth input signal, which promotes better convergence rates.

Keywords: blind source separation, computational complexity, subband, convergence speed, mixture

Procedia PDF Downloads 523
1628 Paternity Index Analysis on Disputed Paternity Cases at Sardjito Hospital Yogyakarta, Indonesia

Authors: Taufik Hidayat, Yudha Nurhantari, Bambang U. D. Rianto

Abstract:

Introduction: Examination of short tandem repeat (STR) loci on nuclear DNA is very useful in solving paternity cases. The purpose of this study is to describe the paternity cases and to analyze the paternity index/probability of paternity based on Indonesian allele frequencies at Sardjito Hospital, Yogyakarta. Method: This was an observational study with a cross-sectional analytic design. The population and sample comprised all disputed paternity cases from January 2011 to June 2015 that fulfilled the inclusion and exclusion criteria and were examined at the Forensic Medicine Unit of Sardjito Hospital, Medical Faculty of Gadjah Mada University. The paternity index was calculated with the EasyDNA program by Fung (2013). Trio and duo results were compared with the Kolmogorov-Smirnov test for unpaired categorical data, using a 95% confidence interval (α = 5%) and a significance level of p < 0.05. Results: Of the 42 disputed paternity cases, 32 (76.2%) were trio cases and 10 (23.8%) were duo cases (without the mother). The majority of fathers' estimated ages were 21-30 years (33.3%), and of mothers' ages, 31-40 years (38.1%). Most children examined for paternity were under 12 months old (47.6%). The majority of clients were Javanese. Inclusion was concluded in 57.1% of cases and exclusion in 42.9%. The Kolmogorov-Smirnov test gave p = 0.673. Conclusion: There is no significant difference in the paternity index/probability of paternity based on Indonesian allele frequencies between trio and duo paternity cases.
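
For readers unfamiliar with the test used here, the sketch below shows a two-sample Kolmogorov-Smirnov comparison of the kind reported; the probability-of-paternity values are hypothetical stand-ins, not the study's data.

```python
from scipy.stats import ks_2samp

# Hypothetical probability-of-paternity values (%); the study itself
# compared 32 trio cases against 10 duo cases.
trio = [99.99, 99.97, 99.99, 0.0, 99.98, 0.0, 99.99, 99.96]
duo = [99.95, 0.0, 99.93, 99.99]

stat, p = ks_2samp(trio, duo)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# A p-value above 0.05 (the paper reports p = 0.673) indicates no
# significant difference between the trio and duo distributions.
```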

Keywords: disputed paternity, paternity index, probability of paternity, short tandem repeats

Procedia PDF Downloads 145
1627 Studying Relationship between Local Geometry of Decision Boundary and Network Complexity for Robustness Analysis with Adversarial Perturbations

Authors: Tushar K. Routh

Abstract:

If inputs are engineered in certain ways, they can degrade deep neural networks' (DNN) performance by inducing misclassifications, a phenomenon known as adversarial attacks, which calls networks' robustness into question. Recent studies have unfolded the relationship between the vulnerability of such networks and their complexity. In this paper, the distinctive influence of additional convolutional layers on the decision boundaries of several DNN architectures was investigated. To engineer inputs from widely known image datasets such as MNIST, Fashion MNIST, and CIFAR-10, the One Step Spectral Attack (OSSA) and Fast Gradient Method (FGM) techniques were exercised. The effects of adding layers on the robustness of the architectures were then analyzed. For reasoning, the separation width from linear class partitions and the local geometry (curvature) near the decision boundary were examined. The results reveal that model complexity plays a significant role in adjusting relative distances from margins, as well as the local features of decision boundaries, which impact robustness.
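
Of the two attack techniques named, the Fast Gradient Method is simple enough to sketch. Below is a minimal PyTorch version under illustrative assumptions (a stand-in linear model, epsilon = 0.05, inputs in [0, 1]); the paper's actual models and attack settings are not given here.

```python
import torch

def fgm_perturb(model, loss_fn, x, y, eps=0.05):
    """One-step Fast Gradient Method: move inputs along the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    # Sign variant (FGSM); plain FGM scales the raw gradient instead.
    x_adv = x + eps * x.grad.sign()
    return x_adv.detach().clamp(0.0, 1.0)  # keep inputs in a valid range

# Toy usage with a stand-in linear "model".
model = torch.nn.Linear(4, 3)
x, y = torch.rand(2, 4), torch.tensor([0, 2])
x_adv = fgm_perturb(model, torch.nn.functional.cross_entropy, x, y)
print((x_adv - x).abs().max())  # perturbation bounded by eps
```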

Keywords: DNN robustness, decision boundary, local curvature, network complexity

Procedia PDF Downloads 43
1626 Development of Probability Distribution Models for Degree of Bending (DoB) in Chord Member of Tubular X-Joints under Bending Loads

Authors: Hamid Ahmadi, Amirreza Ghaffari

Abstract:

The fatigue life of tubular joints in offshore structures is not only dependent on the value of hot-spot stress but is also significantly influenced by the through-the-thickness stress distribution characterized by the degree of bending (DoB). The DoB exhibits considerable scatter, calling for greater emphasis on the accurate determination of its governing probability distribution, which is a key input for the fatigue reliability analysis of a tubular joint. Although tubular X-joints are commonly found in offshore jacket structures, as far as the authors are aware, no comprehensive research has been carried out on the probability distribution of the DoB in tubular X-joints. What has been used so far as the probability distribution of the DoB in reliability analyses is mainly based on assumptions and limited observations, especially in terms of distribution parameters. In the present paper, results of parametric equations available for the calculation of the DoB have been used to develop probability distribution models for the DoB in the chord member of tubular X-joints subjected to four types of bending loads. Based on a parametric study, a set of samples was prepared, and density histograms were generated for these samples using the Freedman-Diaconis method. Twelve different probability density functions (PDFs) were fitted to these histograms. The maximum likelihood method was used to determine the parameters of the fitted distributions, and in each case the Kolmogorov-Smirnov test was used to evaluate the goodness of fit. Finally, after substituting the values of the estimated parameters for each distribution, a set of fully defined PDFs is proposed for the DoB in tubular X-joints subjected to bending loads.
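
The fitting pipeline described here, Freedman-Diaconis histograms, maximum likelihood fits, and a Kolmogorov-Smirnov check, can be sketched as follows; the synthetic DoB sample and the four candidate distributions are illustrative assumptions, not the paper's twelve PDFs.

```python
import numpy as np
from scipy import stats

# Hypothetical DoB sample standing in for the parametric-study results.
rng = np.random.default_rng(42)
dob = rng.lognormal(mean=-0.4, sigma=0.25, size=1000)

# Freedman-Diaconis bin edges (these would define the density histogram).
edges = np.histogram_bin_edges(dob, bins="fd")

# Fit candidate PDFs by maximum likelihood, then rank them by the
# Kolmogorov-Smirnov goodness-of-fit statistic (smaller is better).
for dist in (stats.norm, stats.lognorm, stats.gamma, stats.weibull_min):
    params = dist.fit(dob)                        # MLE parameter estimates
    ks_stat, p = stats.kstest(dob, dist.name, args=params)
    print(f"{dist.name:12s}  KS = {ks_stat:.4f}  p = {p:.3f}")
```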

Keywords: tubular X-joint, degree of bending (DoB), probability density function (PDF), Kolmogorov-Smirnov goodness-of-fit test

Procedia PDF Downloads 696
1625 Uncovering the Complex Structure of Building Design Process Based on Royal Institute of British Architects Plan of Work

Authors: Fawaz A. Binsarra, Halim Boussabaine

Abstract:

The notion of complexity science has been attracting the interest of researchers and professionals due to the need to better understand the dynamics and interaction structure of complex systems. Complexity analysis has been used as an approach to investigate complex systems that contain a large number of components interacting with each other to accomplish specific outcomes and produce emergent behavior. The design process is a complex activity involving a large number of interacting components, grouped here as design tasks, the design team, and the components of the design process. These three main aspects of the building design process consist of several components that interact with each other as a dynamic system with complex information flow. The goal of this paper is to uncover the complex structure of information interactions in the building design process. The information interactions of the Royal Institute of British Architects Plan of Work 2013 are investigated as a case study, using network analysis software to model them and uncover the structure and complexity of the building design process; this will significantly enhance the efficiency of the building design process outcomes.

Keywords: complexity, process, building design, RIBA, design complexity, network, network analysis

Procedia PDF Downloads 489
1624 Determination of Complexity Level in Merged Irregular Transposition Cipher

Authors: Okike Benjamin, Garba Ejd

Abstract:

Today, it has been observed that the security of information along the information superhighway is often compromised by those who are not authorized to have access to it. To ensure the security of such information, it should be encrypted by some means to conceal its real meaning. Many encryption techniques exist in the market; however, some of them are easily decrypted by adversaries. The researcher has decided to develop an encryption technique that may be more difficult to decrypt. This may be achieved by splitting the message to be encrypted into parts, encrypting each part separately, and swapping the positions of the parts before transmitting the message along the superhighway. The method is termed the Merged Irregular Transposition Cipher. The research also determines the complexity level with respect to the number of splits of the message.
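
The abstract does not specify the cipher's internals, but the split-encrypt-swap idea it describes can be illustrated with a toy sketch; the keyed per-part permutations and the final swap below are assumptions for illustration only, not the authors' actual cipher.

```python
import random

def merged_irregular_transposition(message, parts=3, seed=7):
    """Toy sketch: split a message, transpose each part, swap part positions."""
    rng = random.Random(seed)
    # 1. Split the message into roughly equal parts.
    size = -(-len(message) // parts)          # ceiling division
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    # 2. Encrypt each part separately with its own transposition.
    scrambled = []
    for chunk in chunks:
        order = list(range(len(chunk)))
        rng.shuffle(order)
        scrambled.append("".join(chunk[i] for i in order))
    # 3. Swap the positions of the parts before transmission.
    rng.shuffle(scrambled)
    return "".join(scrambled)

print(merged_irregular_transposition("SECURITYOFINFORMATION"))
```

Decryption would reverse the three steps, which requires the receiver to share the seed (or the permutations themselves) as the key.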

Keywords: transposition cipher, merged irregular cipher, encryption, complexity level

Procedia PDF Downloads 317
1623 Low-Complexity Multiplication Using Complement and Signed-Digit Recoding Methods

Authors: Te-Jen Chang, I-Hui Pan, Ping-Sheng Huang, Shan-Jen Cheng

Abstract:

In this paper, a fast multiplication method utilizing complement representation and the canonical recoding technique is proposed. By applying complements and canonical recoding, the number of partial products can be reduced. Based on these techniques, we propose an algorithm that provides an efficient multiplication method. On average, the proposed algorithm reduces the number of k-bit additions from (0.25k + log k/k + 2.5) to (k/6 + log k/k + 2.5), where k is the bit length of the multiplicand A and multiplier B. We can therefore efficiently speed up the overall performance of multiplication. Moreover, if the proposed method is used to compute common-multiplicand multiplications, the computational complexity can be reduced from (0.5k + 2 log k/k + 5) to (k/3 + 2 log k/k + 5) k-bit additions.
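
Canonical recoding (non-adjacent form) is the standard technique behind such savings: it rewrites the multiplier with digits in {-1, 0, 1} so that, on average, only about one third of the digits are nonzero, versus one half in plain binary. A minimal sketch for non-negative integers, illustrating the idea rather than the paper's full algorithm:

```python
def canonical_recode(b):
    """Canonical (non-adjacent form) recoding of a non-negative multiplier."""
    digits = []                     # least-significant digit first
    while b > 0:
        if b & 1:
            d = 2 - (b % 4)         # +1 if b = 1 (mod 4), -1 if b = 3 (mod 4)
        else:
            d = 0
        digits.append(d)
        b = (b - d) >> 1
    return digits

def multiply_with_recoding(a, b):
    """Multiply using shifts plus one addition/subtraction per nonzero digit."""
    result = 0
    for shift, d in enumerate(canonical_recode(b)):
        if d == 1:
            result += a << shift
        elif d == -1:
            result -= a << shift
    return result

b = 0b101110111
print(multiply_with_recoding(1234, b) == 1234 * b)   # True
print(sum(d != 0 for d in canonical_recode(b)),
      "nonzero digits vs", bin(b).count("1"), "in plain binary")
```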

Keywords: algorithm design, complexity analysis, canonical recoding, public key cryptography, common-multiplicand multiplication

Procedia PDF Downloads 402
1622 Modal Density Influence on Modal Complexity Quantification in Dynamic Systems

Authors: Fabrizio Iezzi, Claudio Valente

Abstract:

The viscous damping in dynamic systems can be proportional or non-proportional. In the first case the mode shapes are real, whereas in the second case they are complex. From an engineering point of view, the complexity of the mode shapes is important in order to quantify the non-proportional damping. Different indices exist to provide estimates of modal complexity; these indices are zero or nonzero depending on whether the mode shapes are real or complex. The modal density problem arises in experimental identification when dynamic systems have close modal frequencies. Depending on the degree of this closeness, the mode shapes can carry fictitious imaginary quantities that affect the values of the modal complexity indices. The result is a failure to identify whether the mode shapes are real or complex, and hence whether the damping is proportional or non-proportional. The paper aims to show the influence of modal density on the values of these indices for both proportional and non-proportional damping. Theoretical and pseudo-experimental solutions are compared to analyze the problem using an appropriate mechanical system.
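
As one concrete instance of the indices discussed, the sketch below computes the modal phase collinearity (MPC), a widely used complexity index from the general literature; it is offered as an illustration, not as the authors' own estimator.

```python
import numpy as np

def modal_phase_collinearity(psi):
    """MPC of a complex mode shape: ~1 for a real (proportionally damped)
    mode, decreasing toward 0 as the mode becomes genuinely complex."""
    psi = np.asarray(psi, dtype=complex)
    re, im = psi.real, psi.imag
    sxx, syy, sxy = re @ re, im @ im, re @ im
    # Eigenvalues of the 2x2 covariance matrix of (Re, Im).
    mid, half = (sxx + syy) / 2, np.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    lam1, lam2 = mid + half, mid - half
    return ((lam1 - lam2) / (lam1 + lam2)) ** 2

mode = np.array([1.0, 2.0, -1.5, 0.5])
print(modal_phase_collinearity(mode))                       # 1.0: real mode
rng = np.random.default_rng(1)
print(modal_phase_collinearity(mode * np.exp(1j * rng.random(4))))  # < 1
```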

Keywords: complex mode shapes, dynamic systems identification, modal density, non-proportional damping

Procedia PDF Downloads 359
1621 Green Energy, Fiscal Incentives and Conflicting Signals: Analysing the Challenges Faced in Promoting On-Farm Waste-to-Energy Projects

Authors: Hafez Abdo, Rob Ackrill

Abstract:

Renewable energy (RE) promotion in the UK relies on multiple policy instruments, which are required to overcome the path dependency pressures favouring fossil fuels. These instruments include targeted funding schemes and economy-wide instruments embedded in the tax code. The resulting complexity of incentives raises important questions around the coherence and effectiveness of these instruments for RE generation. This complexity is exacerbated by UK RE policy being nested within EU policy in a multi-level governance (MLG) setting. To gain analytical traction on such complexity, this study will analyse policies promoting the on-farm generation of energy for heat and power, from farm and food waste, via anaerobic digestion. Utilising both primary and secondary data, it seeks to address a particular lacuna in the academic literature. Via a localised, in-depth investigation into the complexity of policy instruments promoting RE, this study will help our theoretical understanding of the challenges that MLG and path dependency pressures present to policymakers of multi-dimensional policies.

Keywords: anaerobic digestion, energy, green, policy, renewable, tax, UK

Procedia PDF Downloads 344
1620 Determination of Complexity Level in Okike's Merged Irregular Transposition Cipher

Authors: Okike Benjamin, Garba Ejd

Abstract:

Today, it has been observed that the security of information along the information superhighway is often compromised by those who are not authorized to have access to it. To ensure the security of such information, it should be encrypted by some means to conceal its real meaning. Many encryption techniques exist in the market; however, some of them are decrypted by adversaries with ease. The researcher has decided to develop an encryption technique that may be more difficult to decrypt. This may be achieved by splitting the message to be encrypted into parts, encrypting each part separately, and swapping the positions of the parts before transmitting the message along the superhighway. The method is termed Okike's Merged Irregular Transposition Cipher. The research also determines the complexity level with respect to the number of splits of the message.

Keywords: transposition cipher, merged irregular cipher, encryption, complexity level

Procedia PDF Downloads 263
1619 Efficient Signal Detection Using QRD-M Based on Channel Condition in MIMO-OFDM System

Authors: Jae-Jeong Kim, Ki-Ro Kim, Hyoung-Kyu Song

Abstract:

In this paper, an efficient signal detector that switches the M parameter of the QRD-M detection scheme according to channel conditions is proposed for MIMO-OFDM systems. The proposed scheme calculates a threshold from the 1-norm condition number of the channel matrix and then switches the M parameter of the QRD-M detection scheme according to this channel information. If the channel condition is bad, M is set to a high value to increase the accuracy of detection; if the channel condition is good, M is set to a low value to reduce the complexity of detection. The proposed scheme therefore has a better trade-off between BER performance and complexity than the conventional scheme. Simulation results show that the complexity of the proposed detection scheme is lower than that of the conventional QRD-M detection scheme, with similar BER performance.
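
A sketch of the switching rule follows, with an illustrative threshold and illustrative M values (the paper's actual numbers are not given here); the selected M would then drive a QRD-M search such as the one sketched under entry 1634 above.

```python
import numpy as np

def choose_m(H, m_good=2, m_bad=8, kappa_threshold=10.0):
    """Pick the QRD-M parameter M from the channel's 1-norm condition number."""
    kappa = np.linalg.cond(H, p=1)   # ||H||_1 * ||H^-1||_1
    # Ill-conditioned channel -> large M (accuracy); else small M (speed).
    return m_bad if kappa > kappa_threshold else m_good

rng = np.random.default_rng(3)
H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
print(f"condition number = {np.linalg.cond(H, p=1):.2f} -> M = {choose_m(H)}")
```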

Keywords: MIMO-OFDM, QRD-M, channel condition, BER

Procedia PDF Downloads 332
1618 Effect of Phonological Complexity in Children with Specific Language Impairment

Authors: Irfana M., Priyandi Kabasi

Abstract:

Children with specific language impairment (SLI) have difficulty acquiring and using language despite having all the cognitive prerequisites to support language acquisition. These children have normal non-verbal intelligence, hearing, and oral-motor skills, with no history of social/emotional problems or significant neurological impairment. Nevertheless, their language acquisition lags behind that of their peers. Phonological complexity can be considered a major factor causing inaccurate speech production in this population, and a range of phonologically complex stimuli should accordingly be used in SLI treatment sessions for a better prognosis of speech accuracy. Hence, there is a need to study levels of phonological complexity. The present study included 7 children diagnosed with SLI and 10 developmentally normal children, all Hindi speakers of both genders aged 4 to 5 years. There were 4 sets of stimuli: minimal vs. maximal contrast nonwords, minimal vs. maximal coarticulation nonwords, minimal vs. maximal contrast words, and minimal vs. maximal coarticulation words. Each set contained 10 stimuli, and participants were asked to repeat each stimulus. Results showed that production of maximal contrast items was the most accurate, followed by minimal coarticulation, minimal contrast, and maximal coarticulation items. A similar trend was found for both the word and nonword stimulus categories, and the phonological complexity effect was evident for each participant group. The present findings can be applied to the management of SLI, specifically in the selection of stimuli.

Keywords: coarticulation, minimal contrast, phonological complexity, specific language impairment

Procedia PDF Downloads 111
1617 Theoretical Paradigms for Total Quality Environmental Management (TQEM)

Authors: Mohammad Hossein Khasmafkan Nezam, Nader Chavoshi Boroujeni, Mohamad Reza Veshaghi

Abstract:

Quality management is dominated by rational paradigms for the measurement and management of quality, but these paradigms start to 'break down' when faced with the inherent complexity of managing quality in intensely competitive, changing environments. In this article, the various theoretical paradigms employed to manage quality are reviewed, and their advantages and limitations are highlighted. A major implication of this review is that, when faced with complexity, an ideological commitment to any single strategy paradigm for total quality environmental management is ineffective. We suggest that as complexity increases in intensely competitive, changing environments, there will be a greater need to consider a multi-paradigm, integrationist view of strategy for TQEM.

Keywords: total quality management (TQM), total quality environmental management (TQEM), ideologies (philosophy), theoretical paradigms

Procedia PDF Downloads 285
1616 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

Current CFD tools undoubtedly have a great many technical limitations, and active research is being done to overcome them, notably for vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as direct numerical simulation (DNS). This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, the Kolmogorov scale, which must be resolved throughout the domain of interest and with a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large, and the available computational resources are usually inadequate for DNS-related tasks. At this stage of its development, DNS is therefore not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. The proposed paper will describe the IDS formulation and its numerical error behavior. Further, the IDS will be used to solve the inviscid and viscous Burgers equation, analyzing the solutions over a considerable length of time and thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem under low supersonic conditions, and the reflected oblique shock-vortex interaction problem. The IDS solutions obtained in each case will be explored further to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
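
For orientation, the inviscid Burgers test problem mentioned above can be reproduced with a conventional first-order Godunov scheme; the sketch below is that standard baseline, not the IDS itself (whose formulation is not reproduced here), and the grid, CFL number, and initial data are illustrative.

```python
import numpy as np

# Inviscid Burgers equation u_t + (u^2/2)_x = 0 on a periodic domain,
# solved with a first-order Godunov upwind flux.
nx, L, T = 400, 2 * np.pi, 1.0
dx = L / nx
x = np.arange(nx) * dx
u = 1.0 + 0.5 * np.sin(x)        # smooth data that steepens toward a shock

t = 0.0
while t < T:
    dt = min(0.8 * dx / np.max(np.abs(u)), T - t)   # CFL-limited step
    ul, ur = u, np.roll(u, -1)                      # states at interface i+1/2
    flux = np.where(ul > ur,
                    np.maximum(ul ** 2, ur ** 2) / 2,          # shock
                    np.where((ul <= 0) & (ur >= 0), 0.0,       # transonic fan
                             np.minimum(ul ** 2, ur ** 2) / 2))
    u = u - dt / dx * (flux - np.roll(flux, 1))
    t += dt
print(f"u range at t = 1: [{u.min():.3f}, {u.max():.3f}]")
```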

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 104
1615 Exploring Leadership Adaptability in the Private Healthcare Organizations in the UK in Times of Crises

Authors: Sade Ogundipe

Abstract:

The private healthcare sector in the United Kingdom has experienced unprecedented challenges during times of crisis, necessitating effective leadership adaptability. This qualitative study delves into the dynamic landscape of leadership within the sector, particularly during crises, employing the lenses of complexity theory and institutional theory to unravel the intricate mechanisms at play. Through in-depth interviews with 25 leaders at various levels of the UK private healthcare sector, this research explores how leaders in UK private healthcare organizations navigate complex and often chaotic environments, shedding light on their adaptive strategies and decision-making processes during crises. Complexity theory is used to analyze the complicated, volatile nature of healthcare crises, emphasizing the need for adaptive leadership in such contexts. Institutional theory, on the other hand, provides insights into how external and internal institutional pressures influence leadership behavior. Findings from this study highlight the multifaceted nature of leadership adaptability, emphasizing the significance of leaders' abilities to embrace uncertainty, engage in sensemaking, and leverage the institutional environment to enact meaningful changes. Furthermore, this research sheds light on the challenges and opportunities that leaders face when adapting to crises within the UK private healthcare sector. The study's insights contribute to the growing body of literature on leadership in healthcare, offering practical implications for leaders, policymakers, and stakeholders within the UK private healthcare sector. By employing the dual perspectives of complexity theory and institutional theory, this research provides a holistic understanding of leadership adaptability in the face of crises, offering valuable guidance for enhancing the resilience and effectiveness of healthcare leadership within this vital sector.

Keywords: leadership, adaptability, decision-making, complexity, complexity theory, institutional theory, organizational complexity, complex adaptive system (CAS), crises, healthcare

Procedia PDF Downloads 19
1614 Configuring Systems to Be Viable in a Crisis: The Role of Intuitive Decision-Making

Authors: Ayham Fattoum, Simos Chari, Duncan Shaw

Abstract:

Volatile, uncertain, complex, and ambiguous (VUCA) conditions threaten system viability with emerging and novel events requiring immediate and localized responses. Such responsiveness is only possible through devolved freedom and emancipated decision-making. The Viable System Model (VSM) recognizes this need and suggests maximizing autonomy to localize decision-making and minimize residual complexity. However, exercising delegated autonomy under VUCA conditions requires confidence and knowledge to use intuition, and guidance to maintain systemic coherence. This paper explores the role of intuition as an enabler of emancipated decision-making and autonomy under VUCA conditions. Intuition allows decision-makers to use their knowledge and experience to respond rapidly to novel events. The paper offers three contributions to the VSM. First, it designs a system model that illustrates the role of intuitive decision-making in managing complexity and maintaining viability. Second, it takes a black-box approach to theory development in the VSM to model the role of autonomy and intuition. Third, the study uses a multi-stage discovery-oriented approach (DOA) to develop theory, with each stage combining literature, data analysis, and model/theory development and identifying further questions for the subsequent stage. We synthesize the literature (e.g., VSM, complexity management) with seven months of field-based insights (interviews, workshops, and observation of a live disaster exercise) to develop an intuitive complexity management framework and VSM models. The results have practical implications for enhancing the resilience of organizations and communities.

Keywords: intuition, complexity management, decision-making, viable system model

Procedia PDF Downloads 45
1613 Predicting Stack Overflow Accepted Answers Using Features and Models with Varying Degrees of Complexity

Authors: Osayande Pascal Omondiagbe, Sherlock A. Licorish

Abstract:

Stack Overflow is a popular community question-and-answer portal used by practitioners to solve technology-related challenges during software development. Previous studies have shown that this forum is becoming a substitute for official programming language documentation. While tools have sought to aid developers by presenting interfaces for exploring Stack Overflow, developers often face challenges searching through the many possible answers to their questions, which extends development time. To this end, researchers have provided ways of predicting acceptable Stack Overflow answers using various modeling techniques. However, less attention has been dedicated to examining the performance and quality of typically used modeling methods, especially in relation to the complexity of the models and features. Such insights could be of practical significance to the many practitioners who use Stack Overflow. This study examines the performance and quality of various modeling methods used for predicting acceptable answers on Stack Overflow, drawn from 2014, 2015 and 2016. Our findings reveal significant differences in model performance and quality given the type of features and the complexity of the models used. Researchers examining classifier performance and quality and feature complexity may leverage these findings in selecting suitable techniques when developing prediction models.
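
As a small illustration of the kind of modeling examined, the sketch below trains a random forest on synthetic answer features; the feature set and data are invented stand-ins, not the study's Stack Overflow corpus.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical per-answer features of the kind such studies derive
# (e.g., score, answer length, poster reputation, has code block, rank);
# the label marks the accepted answer. All values here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=2000)) > 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)
clf.fit(X_tr, y_tr)
print("F1 =", round(f1_score(y_te, clf.predict(X_te)), 3))
print("feature importances:", clf.feature_importances_.round(3))
```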

Keywords: feature selection, modeling and prediction, neural network, random forest, stack overflow

Procedia PDF Downloads 109
1612 Functional Decomposition Based Effort Estimation Model for Software-Intensive Systems

Authors: Nermin Sökmen

Abstract:

An effort estimation model is needed for software-intensive projects that consist of hardware, embedded software, or some combination of the two, as well as high-level software solutions. This paper first focuses on functional decomposition techniques for measuring the functional complexity of a computer system and investigates its impact on system development effort. It then examines the effects of technical difficulty and design team capability factors in order to construct the best effort estimation model. Using traditional regression analysis, the study develops a system development effort estimation model that takes functional complexity, technical difficulty, and design team capability as input parameters. Finally, the assumptions of the model are tested.
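
The regression model described takes the three factors as predictors of effort; a minimal sketch follows, with invented project records standing in for the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical project records: functional complexity (e.g., weighted count
# of decomposed functions), technical difficulty (ordinal), team capability
# (ordinal) -> development effort (person-months). Values are illustrative.
X = np.array([[120, 3, 4], [80, 2, 5], [200, 4, 3],
              [150, 3, 3], [60, 1, 4], [240, 5, 2]], dtype=float)
effort = np.array([14.0, 7.5, 30.0, 19.0, 5.0, 42.0])

model = LinearRegression().fit(X, effort)
print("coefficients:", model.coef_.round(2), "intercept:", round(model.intercept_, 2))
# Predict effort for a new project.
print("predicted effort:", model.predict([[100, 2, 4]]).round(1), "person-months")
```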

Keywords: functional complexity, functional decomposition, development effort, technical difficulty, design team capability, regression analysis

Procedia PDF Downloads 260
1611 Visual Analytics in K-12 Education: Emerging Dimensions of Complexity

Authors: Linnea Stenliden

Abstract:

The aim of this paper is to understand the learning conditions that emerge when visual analytics is implemented and used in K-12 education. To date, little attention has been paid to the role that visual analytics (digital media and technology that highlight visual data communication in order to support analytical tasks) can play in education, and to the extent to which these tools can provide actionable data for young students. This study was conducted in three public K-12 schools, in four social science classes with students aged 10 to 13 years, over a period of two to four weeks at each school. Empirical data were generated using video observations and analyzed with the help of Latour's metaphors. The learning conditions are found to be distinguished by a broad complexity characterized by four dimensions, which emerge from the actors' deeply intertwined relations in the activities. In relation to these dimensions, the paper argues that novel approaches to teaching and learning could benefit students' knowledge building as they work with visual analytics to analyze visualized data.

Keywords: analytical reasoning, complexity, data use, problem space, visual analytics, visual storytelling, translation

Procedia PDF Downloads 333
1610 Reliability of Self-Reported Language Proficiency Measures in L1 Attrition Research: A Closer Look at the Can-Do-Scales

Authors: Anastasia Sorokina

Abstract:

Self-reported language proficiency measures have been widely used by researchers and have proven to be an accurate tool for assessing actual language proficiency. L1 attrition researchers also rely on self-reported measures; more specifically, can-do-scales have gained popularity in the discipline of L1 attrition research. Can-do-scales usually contain statements about language (e.g., "I can write e-mails"), and participants are asked to rate each statement on a scale from 1 (I cannot do it at all) to 5 (I can do it without any difficulties). Despite their popularity, no studies have examined the reliability of can-do-scales in measuring the actual level of L1 attrition: do can-do-scales positively correlate with lexical diversity, syntactic complexity, and fluency? The present study analyzed speech samples of 35 Russian-English attriters to examine whether their self-reported proficiency correlates with their actual L1 proficiency. Pearson correlations demonstrated that can-do-scale ratings correlated with lexical diversity, syntactic complexity, and fluency. These findings make a valuable contribution to L1 attrition research by demonstrating that can-do-scales can be used as a reliable tool to measure L1 attrition.
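
The correlation check at the heart of the study can be sketched in a few lines; the scores below are hypothetical stand-ins, not the 35 participants' data.

```python
from scipy.stats import pearsonr

# Hypothetical scores: self-rated can-do total vs. one measured proxy of
# proficiency (here, lexical diversity). The real study used three measures.
can_do = [42, 55, 38, 60, 47, 52, 35, 58]
lexical_diversity = [0.61, 0.74, 0.55, 0.81, 0.66, 0.70, 0.52, 0.77]

r, p = pearsonr(can_do, lexical_diversity)
print(f"r = {r:.2f}, p = {p:.4f}")
# A significant positive r is the pattern that supports the scales'
# reliability claim.
```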

Keywords: L1 attrition, can-do-scales, lexical diversity, syntactic complexity

Procedia PDF Downloads 201
1609 Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior

Authors: Juliana A. Knocikova

Abstract:

Methods of nonlinear signal analysis are based on the finding that random behavior can arise in deterministic nonlinear systems with only a few degrees of freedom. In dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate the evolution of the system in time, and thus the rate at which new signal patterns are generated. During recent decades, many algorithms have been introduced to assess patterns of physiological responses to external stimuli. However, reflex responses are usually characterized by short durations, which represents a great limitation for the usual methods of nonlinear analysis. To solve the problem of short recordings, the approximate entropy parameter has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series, while an increase means unpredictability and random behavior, hence higher system complexity. Reduced complexity of neurophysiological data has been observed repeatedly when analyzing electroneurogram and electromyogram activity during defence reflex responses. Quantitative phrenic neurogram changes are also obvious during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
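
For reference, approximate entropy can be computed as in the sketch below, following Pincus's classic formulation with the common defaults m = 2 and r = 0.2·SD; this is a generic implementation, not the authors' code.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn) of a 1-D time series.

    m: embedding dimension; r: tolerance (default 0.2 * std of x).
    Low ApEn -> regular, predictable; high ApEn -> complex, random.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # All overlapping templates of length m.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A regular sine yields low ApEn; white noise yields high ApEn.
t = np.linspace(0, 10 * np.pi, 500)
print(approximate_entropy(np.sin(t)))
print(approximate_entropy(np.random.default_rng(0).normal(size=500)))
```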

Keywords: approximate entropy, neurophysiological data, nonlinear dynamics, reflex

Procedia PDF Downloads 278
1608 Fast and Efficient Algorithms for Evaluating Uniform and Nonuniform Lagrange and Newton Curves

Authors: Taweechai Nuntawisuttiwong, Natasha Dejdumrong

Abstract:

Newton-Lagrange interpolations are widely used in numerical analysis; however, their construction requires quadratic computational time. In computer-aided geometric design (CAGD), there are polynomial curves, the Wang-Ball, DP and Dejdumrong curves, that have linear-time algorithms. Thus, the computational time for Newton-Lagrange interpolations can be reduced by applying the algorithms of the Wang-Ball, DP and Dejdumrong curves. In order to use these algorithms, it is first necessary to convert Newton-Lagrange polynomials into Wang-Ball, DP or Dejdumrong polynomials. In this work, algorithms for converting both uniform and non-uniform Newton-Lagrange polynomials into Wang-Ball, DP and Dejdumrong polynomials are investigated. The computational time for representing Newton-Lagrange polynomials can thereby be reduced to linear complexity. In addition, other uses of CAGD curves to modify Newton-Lagrange curves become available.
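
The quadratic construction cost mentioned above comes from the divided-difference table; evaluation, by contrast, is linear per point. A minimal sketch of both steps follows (the Wang-Ball, DP, and Dejdumrong conversions themselves are not reproduced here).

```python
import numpy as np

def newton_coefficients(xs, ys):
    """Divided-difference coefficients of the Newton form, O(n^2)."""
    xs = np.asarray(xs, dtype=float)
    coef = np.array(ys, dtype=float)
    for j in range(1, len(xs)):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (xs[j:] - xs[:-j])
    return coef

def newton_eval(coef, xs, t):
    """Horner-style nested evaluation of the Newton form, O(n) per point."""
    result = coef[-1]
    for c, x in zip(coef[-2::-1], xs[-2::-1]):
        result = result * (t - x) + c
    return result

xs = np.array([0.0, 1.0, 2.0, 4.0])      # non-uniform nodes
ys = xs ** 3 - 2 * xs                    # samples of a cubic
c = newton_coefficients(xs, ys)
print(newton_eval(c, xs, 3.0), 3.0 ** 3 - 2 * 3.0)   # both 21.0
```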

Keywords: Lagrange interpolation, linear complexity, monomial matrix, Newton interpolation

Procedia PDF Downloads 198
1607 A Lower-Complexity Deep Learning Method for Drone Detection

Authors: Mohamad Kassab, Amal El Fallah Seghrouchni, Frederic Barbaresco, Raed Abu Zitar

Abstract:

Detecting objects such as drones is a challenging task, as their relative size and maneuvering capabilities deceive machine learning models and cause them to misclassify drones as birds or other objects. In this work, we investigate applying several deep learning techniques to benchmark real data sets of flying drones. A deep learning paradigm is proposed for the purpose of mitigating the complexity of these systems. The proposed paradigm is a hybrid of the AdderNet deep learning paradigm and the Single Shot Detector (SSD) paradigm. The goal is to minimize the number of multiplication operations in the filtering layers of the proposed system and hence reduce its complexity. Standard machine learning techniques, such as SVM, are also tested and compared to the deep learning systems. The data sets used for training and testing were either complete or filtered to remove images with small objects, and contained RGB or IR data. Comparisons were made between all these configurations, and conclusions are presented.
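
The multiplication-free filtering idea can be sketched for a single channel as below; this illustrates only the AdderNet-style scoring, not the authors' full AdderNet-SSD hybrid, and the image and kernel are illustrative.

```python
import numpy as np

def adder_filter2d(x, w):
    """Single-channel AdderNet-style filtering (stride 1, 'valid' region).

    A convolution scores each patch with a multiply-accumulate; AdderNet
    instead uses the negative L1 distance between patch and filter, so the
    filtering layer needs only additions and subtractions.
    """
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = -np.abs(x[i:i + kh, j:j + kw] - w).sum()
    return out

img = np.random.default_rng(0).random((8, 8))
kernel = np.ones((3, 3)) / 9
print(adder_filter2d(img, kernel).shape)   # (6, 6) feature map
```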

Keywords: drones detection, deep learning, birds versus drones, precision of detection, AdderNet

Procedia PDF Downloads 144
1606 Towards a Simulation Model to Ensure the Availability of Machines in Maintenance Activities

Authors: Maryam Gallab, Hafida Bouloiz, Youness Chater, Mohamed Tkiouat

Abstract:

The aim of this paper is to present a model based on multi-agent systems for managing maintenance activities and ensuring the reliability and availability of machines using only the required resources (operators, tools). The interest of simulation is that it handles the complexity of the system and produces results without cost or wasted time. An implementation of the model is carried out on the AnyLogic platform to display the defined performance indicators.
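
As a rough analogue of such a model (the paper uses the AnyLogic platform; the SimPy sketch below is only an illustrative stand-in with invented failure and repair parameters), machines compete for a limited repair crew and per-machine availability is reported.

```python
import random
import simpy

downtime = {}

def machine(env, name, crew, mttf=100.0, repair_time=10.0):
    """A machine alternates between running and queuing for repair."""
    downtime[name] = 0.0
    while True:
        yield env.timeout(random.expovariate(1.0 / mttf))  # run until failure
        broke_at = env.now
        with crew.request() as req:       # wait for a free operator
            yield req
            yield env.timeout(repair_time)
        downtime[name] += env.now - broke_at

random.seed(1)
env = simpy.Environment()
crew = simpy.Resource(env, capacity=2)    # scarce maintenance resources
for i in range(5):
    env.process(machine(env, f"M{i}", crew))
env.run(until=10_000)
for name, d in downtime.items():
    print(name, "availability:", round(1 - d / env.now, 3))
```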

Keywords: maintenance, complexity, simulation, multi-agent systems, AnyLogic platform

Procedia PDF Downloads 276