Search results for: asymptotic complexity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1742

1472 THRAP2 Gene Identified as a Candidate Susceptibility Gene of Thyroid Autoimmune Diseases Pedigree in Tunisian Population

Authors: Ghazi Chabchoub, Mouna Feki, Mohamed Abid, Hammadi Ayadi

Abstract:

Autoimmune thyroid diseases (AITDs), including Graves' disease (GD) and Hashimoto's thyroiditis (HT), are inherited as complex traits. Genetic factors associated with AITDs have been tentatively identified by candidate gene and genome scanning approaches. We analysed three intragenic microsatellite markers in the thyroid hormone receptor associated protein 2 gene (THRAP2), mapped near the D12S79 marker, which has a potential role in immune function and inflammation [THRAP2-1 (TG)n, THRAP2-2 (AC)n and THRAP2-3 (AC)n]. Our study population comprised 12 patients affected with AITDs belonging to a multiplex Tunisian family with a high prevalence of AITDs. Fluorescent genotyping was carried out on ABI 3100 sequencers (Applied Biosystems, USA) with the use of GENESCAN for semi-automated fragment sizing and GENOTYPER peak-calling software. Statistical analysis was performed using the non-parametric LOD score (NPL) computed by the Merlin software. Merlin outputs non-parametric NPLall (Z) and LOD scores and their corresponding asymptotic P values. The analysis of the three intragenic markers in the THRAP2 gene revealed strong evidence for linkage (NPL=3.68, P=0.00012). Our results suggest a possible role of the THRAP2 gene in AITD susceptibility in this family.

Keywords: autoimmunity, autoimmune disease, genetic, linkage analysis

Procedia PDF Downloads 109
1471 Application of Metric Dimension of Graph in Unraveling the Complexity of Hyperacusis

Authors: Hassan Ibrahim

Abstract:

The prevalence of hyperacusis, an auditory condition characterized by heightened sensitivity to sounds, continues to rise, posing challenges for effective diagnosis and intervention. This work deepens the understanding of hyperacusis etiology by employing graph theory as a novel analytical framework. We constructed a comprehensive graph wherein nodes represent various factors associated with hyperacusis, including aging, head or neck trauma, infection/virus, depression, migraines, ear infection, anxiety, and other potential contributors. Relationships between factors are modeled as edges, allowing us to visualize and quantify the interactions within the etiological landscape of hyperacusis. We employ the concept of the metric dimension of a connected graph to identify key nodes (landmarks) that serve as critical influencers in the interconnected web of hyperacusis causes. This approach offers a unique perspective on the relative importance and centrality of different factors, shedding light on the complex interplay between physiological, psychological, and environmental determinants. Visualization techniques were also employed to enhance interpretation and facilitate the identification of the central nodes. This research contributes to the growing body of knowledge surrounding hyperacusis by offering a network-centric perspective on its multifaceted causes. The outcomes hold the potential to inform clinical practices, guiding healthcare professionals in prioritizing interventions and personalized treatment plans based on the identified landmarks within the etiological network. Through the integration of graph theory into hyperacusis research, the complexity of this auditory condition is unraveled, paving the way for more effective approaches to its management.
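
As a concrete illustration of the metric dimension concept invoked above, the Python sketch below brute-forces the smallest resolving set of a small undirected graph. The factor graph used here is hypothetical and does not reproduce the edge set constructed in the paper.

```python
from itertools import combinations

# Hypothetical factor graph for illustration; the paper's actual edge set is not given.
factors = ["aging", "trauma", "infection", "depression", "migraine",
           "ear_infection", "anxiety"]
edges = [("aging", "ear_infection"), ("trauma", "migraine"),
         ("infection", "ear_infection"), ("depression", "anxiety"),
         ("migraine", "anxiety"), ("ear_infection", "anxiety"),
         ("aging", "depression")]

def shortest_paths(nodes, edges):
    """All-pairs shortest-path distances via BFS on an unweighted connected graph."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    dist = {}
    for src in nodes:
        d, frontier = {src: 0}, [src]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        nxt.append(w)
            frontier = nxt
        dist[src] = d
    return dist

def metric_dimension(nodes, edges):
    """Smallest set of landmarks whose distance vectors distinguish all nodes."""
    dist = shortest_paths(nodes, edges)
    for k in range(1, len(nodes) + 1):
        for landmarks in combinations(nodes, k):
            signatures = {tuple(dist[l][v] for l in landmarks) for v in nodes}
            if len(signatures) == len(nodes):   # every node has a unique signature
                return k, landmarks
    return len(nodes), tuple(nodes)

k, landmarks = metric_dimension(factors, edges)
print(f"metric dimension = {k}, resolving set = {landmarks}")
```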

Keywords: auditory condition, connected graph, hyperacusis, metric dimension

Procedia PDF Downloads 18
1470 Hip Resurfacing Makes for Easier Surgery with Better Functional Outcomes at Time of Revision: A Case Controlled Study

Authors: O. O. Onafowokan, K. Anderson, M. R. Norton, R. G. Middleton

Abstract:

Revision total hip arthroplasty (THA) is known to be a challenging procedure with potential for poor outcomes. Due to its lack of metaphyseal encroachment, hip resurfacing arthroplasty (HRA) is classified as a bone conserving procedure. Although the literature postulates that this is an advantage at the time of revision surgery, there is no evidence to either support or refute this claim. We identified 129 hips that had undergone HRA and 129 controls undergoing first revision THA. We recorded the clinical assessment and survivorship of implants in a multi-surgeon, single centre, retrospective case control series for both arms. These were matched for age and sex. Data collected included demographics, indications for surgery, Oxford Hip Score (OHS), length of surgery, length of hospital stay, blood transfusion, implant complexity and further surgical procedures. Significance was taken as p < 0.05. Mean follow up was 7.5 years (1 to 15). There was a significant 6-point difference in postoperative OHS in favour of the revision resurfacing group (p=0.0001). The revision HRA group recorded 48 minutes less length of surgery (p<0.0001), 2 days less in length of hospital stay (p=0.018), a reduced need for blood transfusion (p=0.0001), a need for less complex revision implants (p=0.001) and a reduced probability of further surgery being required (p=0.003). Whilst we acknowledge the limitations of this study, our results suggest that, in contrast to THA, the bone conservation element of HRA may make for a less traumatic revision procedure with better functional outcomes. Use of HRA has seen a dramatic decline as a result of concerns regarding metallosis. However, this information remains of relevance when counselling young active patients about their arthroplasty options and may become pertinent in the future if the promise of ceramic hip resurfacing is ever realized.

Keywords: hip resurfacing, metallosis, revision surgery, total hip arthroplasty

Procedia PDF Downloads 78
1469 Efficient Chess Board Representation: A Space-Efficient Protocol

Authors: Raghava Dhanya, Shashank S.

Abstract:

This paper delves into the intersection of chess and computer science, specifically focusing on the efficient representation of chess game states. We propose two methods: the Static Method and the Dynamic Method, each offering unique advantages in terms of space efficiency and computational complexity. The Static Method aims to represent the game state using a fixed-length encoding, allocating 192 bits to capture the positions of all pieces on the board. This method introduces a protocol for ordering and encoding piece positions, ensuring efficient storage and retrieval. However, it faces challenges in representing pieces no longer in play. In contrast, the Dynamic Method adapts to the evolving game state by dynamically adjusting the encoding length based on the number of pieces in play. By incorporating Alive Bits for each piece kind, this method achieves greater flexibility and space efficiency. Additionally, it includes provisions for encoding additional game state information such as castling rights and en passant squares. Our findings demonstrate that the Dynamic Method offers superior space efficiency compared to traditional Forsyth-Edwards Notation (FEN), particularly as the game progresses and pieces are captured. However, it comes with increased complexity in encoding and decoding processes. In conclusion, this study provides insights into optimizing the representation of chess game states, offering potential applications in chess engines, game databases, and artificial intelligence research. The proposed methods offer a balance between space efficiency and computational overhead, paving the way for further advancements in the field.
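
The following Python sketch illustrates the fixed-length idea behind the Static Method: 32 pieces in a fixed order, 6 bits per square index, for 192 bits in total. The piece ordering and the round-trip test are assumptions made for illustration, not the paper's exact protocol (which also has to handle captured pieces).

```python
# Minimal sketch of the fixed-length idea: 32 pieces in a fixed order, 6 bits each
# (a square index 0-63), giving 32 * 6 = 192 bits. The ordering below is an
# assumption, not the paper's protocol, and captured pieces are not handled.

PIECE_ORDER = (            # a fixed, agreed-upon ordering of all 32 pieces
    ["wK", "wQ"] + [f"wR{i}" for i in range(2)] + [f"wB{i}" for i in range(2)]
    + [f"wN{i}" for i in range(2)] + [f"wP{i}" for i in range(8)]
    + ["bK", "bQ"] + [f"bR{i}" for i in range(2)] + [f"bB{i}" for i in range(2)]
    + [f"bN{i}" for i in range(2)] + [f"bP{i}" for i in range(8)]
)

def encode_static(positions):
    """Pack a {piece: square 0-63} mapping into a single 192-bit integer."""
    state = 0
    for piece in PIECE_ORDER:
        state = (state << 6) | (positions[piece] & 0x3F)
    return state                      # fits in 192 bits

def decode_static(state):
    """Recover the {piece: square} mapping from the 192-bit integer."""
    positions = {}
    for piece in reversed(PIECE_ORDER):   # last encoded piece sits in the lowest bits
        positions[piece] = state & 0x3F
        state >>= 6
    return positions

start = {p: i for i, p in enumerate(PIECE_ORDER)}   # dummy positions for a round trip
assert decode_static(encode_static(start)) == start
```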

Keywords: chess, optimisation, encoding, bit manipulation

Procedia PDF Downloads 32
1468 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild’s (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and thus is valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and the generalized PC type estimators, especially for panels with N almost as large as T.
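
For readers unfamiliar with the baseline, the Python sketch below shows standard PC factor extraction from a T x N panel via an eigendecomposition of the sample covariance matrix, with a simple ridge-style shrinkage standing in for (not reproducing) the paper's CnPC constraint.

```python
import numpy as np

def pc_factors(X, r, shrink=0.0):
    """Principal components estimate of r common factors from a T x N panel X.
    `shrink` applies a simple ridge-style regularization of the covariance matrix,
    a placeholder for, not a reproduction of, the paper's CnPC constraint."""
    T, N = X.shape
    S = X @ X.T / (N * T)                       # T x T covariance of the panel
    S = (1 - shrink) * S + shrink * np.trace(S) / T * np.eye(T)
    vals, vecs = np.linalg.eigh(S)              # eigenvalues in ascending order
    F = np.sqrt(T) * vecs[:, ::-1][:, :r]       # estimated factors (T x r), F'F/T = I
    L = X.T @ F / T                             # estimated loadings (N x r)
    return F, L

# Toy panel with N > T and two common factors.
rng = np.random.default_rng(0)
T, N, r = 100, 200, 2
F0 = rng.standard_normal((T, r))
L0 = rng.standard_normal((N, r))
X = F0 @ L0.T + 0.5 * rng.standard_normal((T, N))
F_hat, L_hat = pc_factors(X, r, shrink=0.1)
print(F_hat.shape, L_hat.shape)   # (100, 2) (200, 2)
```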

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 139
1467 Hybrid Precoder Design Based on Iterative Hard Thresholding Algorithm for Millimeter Wave Multiple-Input-Multiple-Output Systems

Authors: Ameni Mejri, Moufida Hajjaj, Salem Hasnaoui, Ridha Bouallegue

Abstract:

Technological advances have recently made millimeter wave (mmWave) communication possible. Due to the huge amount of spectrum that is available in mmWave frequency bands, this promising candidate is considered a key technology for the deployment of 5G cellular networks. In order to enhance system capacity and achieve spectral efficiency, very large antenna arrays are employed in mmWave systems to exploit array gain. However, it has been shown that conventional beamforming strategies are not suitable for mmWave hardware implementation. Therefore, new features are required for mmWave cellular applications. Unlike traditional multiple-input-multiple-output (MIMO) systems, for which only digital precoders are essential to accomplish precoding, MIMO technology is different at mmWave because of digital precoding limitations. Moreover, fully digital precoding requires a large number of radio frequency (RF) chains, together with the associated signal mixers and analog-to-digital converters. As RF chain cost and power consumption increase, we need to resort to another alternative. Although the hybrid precoding architecture, based on the combination of a baseband precoder and an RF precoder, has been regarded as the best solution, the optimal design of hybrid precoders remains open. According to the mapping strategies from RF chains to the different antenna elements, there are two main categories of hybrid precoding architecture. As a hybrid precoding sub-array architecture, the partially-connected structure reduces hardware complexity by using a smaller number of phase shifters, whereas it sacrifices some beamforming gain. In this paper, we treat the hybrid precoder design in mmWave MIMO systems as a matrix factorization problem. Thus, we adopt the alternating minimization principle in order to solve the design problem. Further, we present our proposed algorithm for the partially-connected structure, which is based on the iterative hard thresholding method. Through simulation results, we show that our hybrid precoding algorithm provides significant performance gains over existing algorithms. We also show that the proposed approach significantly reduces the computational complexity. Furthermore, valuable design insights are provided when we use the proposed algorithm to make simulation comparisons between the hybrid precoding partially-connected structure and the fully-connected structure.
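
The iterative hard thresholding building block referred to above can be sketched as follows for a generic sparse least-squares problem; the full alternating-minimization precoder factorization of the paper is not reproduced here.

```python
import numpy as np

def iht(A, y, k, steps=200, mu=None):
    """Iterative hard thresholding for min ||y - A x||^2 s.t. x is k-sparse.
    Generic building block only, not the paper's partially-connected precoder design."""
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    x = np.zeros(n)
    for _ in range(steps):
        g = x + mu * A.T @ (y - A @ x)            # gradient step
        keep = np.argsort(np.abs(g))[-k:]         # keep the k largest entries
        x = np.zeros(n)
        x[keep] = g[keep]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)  # normalized random sensing matrix
x_true = np.zeros(120)
x_true[rng.choice(120, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true
x_hat = iht(A, y, k=5)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```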

Keywords: alternating minimization, hybrid precoding, iterative hard thresholding, low-complexity, millimeter wave communication, partially-connected structure

Procedia PDF Downloads 307
1466 Text Analysis to Support Structuring and Modelling a Public Policy Problem-Outline of an Algorithm to Extract Inferences from Textual Data

Authors: Claudia Ehrentraut, Osama Ibrahim, Hercules Dalianis

Abstract:

Policy making situations are real-world problems that exhibit complexity in that they are composed of many interrelated problems and issues. To be effective, policies must holistically address the complexity of the situation rather than propose solutions to single problems. Formulating and understanding the situation and its complex dynamics, therefore, is key to finding holistic solutions. Analysis of text-based information on the policy problem, using Natural Language Processing (NLP) and text analysis techniques, can support modelling of public policy problem situations in a more objective way based on domain experts' knowledge and scientific evidence. The objective behind this study is to support modelling of public policy problem situations, using text analysis of verbal descriptions of the problem. We propose a formal methodology for the analysis of qualitative data from multiple information sources on a policy problem to construct a causal diagram of the problem. The analysis process aims at identifying key variables, linking them by cause-effect relationships and mapping that structure into a graphical representation that is adequate for designing action alternatives, i.e., policy options. This study describes the outline of an algorithm used to automate the initial step of a larger methodological approach, which has so far been done manually. In this initial step, inferences about key variables and their interrelationships are extracted from textual data to support better problem structuring. A small prototype for this step is also presented.
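
A minimal sketch of the kind of surface-pattern inference extraction described above is given below; the causal patterns and example sentences are hypothetical and far simpler than the NLP pipeline outlined in the paper.

```python
import re

# Toy illustration of the initial step: surface patterns that signal causal links.
CAUSAL_PATTERNS = [
    r"(?P<cause>[\w\s]+?) leads to (?P<effect>[\w\s]+)",
    r"(?P<cause>[\w\s]+?) causes (?P<effect>[\w\s]+)",
    r"(?P<effect>[\w\s]+?) is caused by (?P<cause>[\w\s]+)",
]

def extract_edges(sentences):
    """Return (cause, effect) pairs found by the surface patterns."""
    edges = []
    for s in sentences:
        for pat in CAUSAL_PATTERNS:
            m = re.search(pat, s, flags=re.IGNORECASE)
            if m:
                edges.append((m.group("cause").strip(), m.group("effect").strip()))
                break
    return edges

text = [
    "Air pollution leads to respiratory disease.",
    "Traffic congestion causes air pollution.",
    "Rising health costs is caused by respiratory disease.",
]
for cause, effect in extract_edges(text):
    print(f"{cause} -> {effect}")   # edges of the causal diagram
```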

Keywords: public policy, problem structuring, qualitative analysis, natural language processing, algorithm, inference extraction

Procedia PDF Downloads 577
1465 A Study of Using Multiple Subproblems in Dantzig-Wolfe Decomposition of Linear Programming

Authors: William Chung

Abstract:

This paper studies the use of multiple subproblems in Dantzig-Wolfe decomposition of linear programming (DW-LP). Traditionally, the decomposed LP consists of one LP master problem and one LP subproblem. The master problem and the subproblem are solved alternately by exchanging the dual prices of the master problem and the proposals of the subproblem until the LP is solved. It is well known that convergence is slow, with a long tail of near-optimal solutions (asymptotic convergence). Hence, the performance of DW-LP highly depends upon the number of decomposition steps. If the decomposition steps can be greatly reduced, the performance of DW-LP can be improved significantly. To reduce the number of decomposition steps, one method is to increase the number of proposals from the subproblem to the master problem. To do so, we propose to add a quadratic approximation function to the LP subproblem in order to develop a set of approximate-LP subproblems (multiple subproblems). Consequently, in each decomposition step, multiple subproblems are solved to provide multiple proposals to the master problem. The number of decomposition steps can thereby be reduced greatly. Note that each approximate-LP subproblem is a nonlinear program, and solving the LP subproblem is faster than solving the nonlinear multiple subproblems. Hence, using multiple subproblems in DW-LP involves a tradeoff between the number of approximate-LP subproblems being formed and the number of decomposition steps. In this paper, we derive the corresponding algorithms and provide some simple computational results. Some properties of the resulting algorithms are also given.

Keywords: approximate subproblem, Dantzig-Wolfe decomposition, large-scale models, multiple subproblems

Procedia PDF Downloads 149
1464 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation

Authors: Benson Ade Eniola Afere

Abstract:

This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.
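
For context, the sketch below runs a kernel density estimate with the classical fourth-order Gaussian kernel, one of the conventional fourth-order kernels the hybrid kernels are compared against; the paper's hybrid beta polynomial kernels themselves are not reproduced here.

```python
import numpy as np

def phi(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def k4_gaussian(u):
    """Classical fourth-order (bias-reducing) Gaussian kernel: (1/2)(3 - u^2) phi(u).
    It integrates to 1 and has a vanishing second moment, which is what 'fourth
    order' means here."""
    return 0.5 * (3.0 - u**2) * phi(u)

def kde(x_grid, data, h, kernel=k4_gaussian):
    """Kernel density estimate on x_grid from a 1-D sample with bandwidth h."""
    u = (x_grid[:, None] - data[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(2)
sample = np.concatenate([rng.normal(-1, 0.5, 300), rng.normal(2, 1.0, 300)])
grid = np.linspace(-4, 6, 200)
density = kde(grid, sample, h=0.4)
print(density.sum() * (grid[1] - grid[0]))   # approximately 1
```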

Keywords: AMISE, efficiency, fourth-order Kernels, hybrid Kernels, Kernel density estimation

Procedia PDF Downloads 61
1463 Analytical and Numerical Investigation of Friction-Restricted Growth and Buckling of Elastic Fibers

Authors: Peter L. Varkonyi, Andras A. Sipos

Abstract:

The quasi-static growth of elastic fibers is studied in the presence of distributed contact with an immobile surface, subject to isotropic dry or viscous friction. Unlike classical problems of elastic stability modelled by autonomous dynamical systems with multiple time scales (slowly varying bifurcation parameter, and fast system dynamics), this problem can only be formulated as a non-autonomous system without time scale separation. It is found that the fibers initially converge to a trivial, straight configuration, which is later replaced by divergence reminiscent of buckling phenomena. In order to capture the loss of stability, a new definition of exponential stability against infinitesimal perturbations for systems defined over finite time intervals is developed. A semi-analytical method for the determination of the critical length based on eigenvalue analysis is proposed. The post-critical behavior of the fibers is studied numerically by using variational methods. The emerging post-critical shapes and the asymptotic behavior as length goes to infinity are identified for simple spatial distributions of growth. Comparison with physical experiments indicates reasonable accuracy of the theoretical model. Some applications from modeling plant root growth to the design of soft manipulators in robotics are briefly discussed.

Keywords: buckling, elastica, friction, growth

Procedia PDF Downloads 180
1462 The Power of Words: The Use of Language in Ethan Frome

Authors: Ritu Sharma

Abstract:

In order to be objective, critics must examine the dynamic relationships between the author, the reader, the text, and the outside world. However, it is also crucial to recognize that because the language was created by God, meaning is ingrained in it. Meaning is located in and discovered through literature rather than being limited to the author, reader, text, or the outside world. The link between the author, the reader, and the text is crucial because literature unites an author and a reader through the use of language. Literature is a potent kind of communication, and Ethan Frome's audience is forever changed as a result of the book's language and the language its characters use. The narrative of Ethan Frome and his wife Zeena is presented in Ethan Frome. Ethan's story is told throughout the course of the book, revealed through the eyes of the narrator, an outsider passing through Starkfield, as well as through the insight that the narrator gains from the townspeople and his stay on the Frome farm. The story is set in the rural New England community of Starkfield, Massachusetts. The weather provides the ideal setting for Ethan and the narrator to get to know one another as the narrator gets preoccupied with unraveling the narrative that underlies Ethan's physical anomalies. In addition to telling a gripping tale and capturing human nature as it is, Ethan Frome uses its storyline to achieve something more significant. The book by Edith Wharton supports language. Zeena's deliberate and convincing language challenges relativity and meaninglessness. Ethan and Mattie's effort to effectively use words reflects the complexity of language, and their battle illustrates the influence that language may have if and when it is used. Ethan Frome defends the written word, the foundation upon which it is constructed, as a literary work. Communication is based on language, and as the characters respond to and get involved in disputes throughout the book, Zeena, Ethan, and Mattie, each reflects particular theories of communication that help define their uses of communication within the broader context of language.

Keywords: dynamic relationships, potent, communication, complexity

Procedia PDF Downloads 77
1461 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS codes have been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modelling of the data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze the lifetime data in the Bayesian paradigm. Results are evaluated from the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using some software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
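
The abstract relies on Metropolis-type MCMC for censored survival data. The Python sketch below shows the mechanics on a deliberately simple stand-in model (a right-censored exponential likelihood with illustrative data), since the TPGE density itself is not spelled out here.

```python
import numpy as np

# Illustrative random-walk Metropolis sampler for a right-censored exponential
# survival model. The exponential likelihood and the data are stand-ins only.
rng = np.random.default_rng(3)
times = np.array([2.1, 0.7, 5.3, 1.9, 3.4, 4.8, 0.4, 2.6])
event = np.array([1,   1,   0,   1,   0,   1,   1,   1])   # 0 = censored observation

def log_post(rate):
    if rate <= 0:
        return -np.inf
    loglik = np.sum(event * np.log(rate) - rate * times)   # f(t) for events, S(t) for censored
    logprior = -0.001 * rate                                # weak exponential prior
    return loglik + logprior

draws, rate = [], 1.0
for _ in range(5000):
    prop = rate + 0.3 * rng.standard_normal()               # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(rate):
        rate = prop
    draws.append(rate)
posterior = np.array(draws[1000:])                          # drop burn-in
print(posterior.mean(), np.percentile(posterior, [2.5, 97.5]))
```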

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 516
1460 A Study of Two Disease Models: With and Without Incubation Period

Authors: H. C. Chinwenyi, H. D. Ibrahim, J. O. Adekunle

Abstract:

The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and another without an incubation period, were studied. The study involves the use of a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states for both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically. Finally, a computer program in MATLAB was developed to run the numerical experiments. From the results, we are able to show that the endemic equilibrium state of the model with an incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without an incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of the models with and without an incubation period are locally asymptotically stable. Furthermore, results from numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall population of infected people for the model with an incubation period is higher than that without an incubation period. We also established from the results obtained that as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with an incubation period decrease and are always less than those for the model without an incubation period.
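
The two model structures and the fourth-order Runge-Kutta scheme can be sketched as follows (in Python rather than MATLAB, and with illustrative rather than NCDC-fitted parameter values): an SIR system without an incubation period and an SEIR system with an exposed class.

```python
import numpy as np

def rk4(f, y0, t):
    """Classical fourth-order Runge-Kutta integrator on the time grid t."""
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h / 2, y[i] + h / 2 * k1)
        k3 = f(t[i] + h / 2, y[i] + h / 2 * k2)
        k4 = f(t[i] + h, y[i] + h * k3)
        y[i + 1] = y[i] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

beta, gamma, sigma = 0.4, 0.1, 0.2           # illustrative rates, not NCDC-fitted values

def sir(t, y):                               # model without an incubation period
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def seir(t, y):                              # model with an incubation period (E class)
    S, E, I, R = y
    return np.array([-beta * S * I, beta * S * I - sigma * E,
                     sigma * E - gamma * I, gamma * I])

t = np.linspace(0, 160, 801)
I_sir = rk4(sir, np.array([0.99, 0.01, 0.0]), t)[:, 1]
I_seir = rk4(seir, np.array([0.99, 0.0, 0.01, 0.0]), t)[:, 2]
print(I_sir.max(), I_seir.max())             # compare peak infected fractions
```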

Keywords: asymptotic stability, Hartman-Grobman stability criterion, incubation period, Routh-Hurwitz criterion, Runge-Kutta method

Procedia PDF Downloads 162
1459 Modelling Hydrological Time Series Using Wakeby Distribution

Authors: Ilaria Lucrezia Amerise

Abstract:

The statistical modelling of precipitation data for a given portion of territory is fundamental for the monitoring of climatic conditions and for Hydrogeological Management Plans (HMP). This modelling is rendered particularly complex by the changes taking place in the frequency and intensity of precipitation, presumably attributable to global climate change. This paper applies the Wakeby distribution (with 5 parameters) as a theoretical reference model. The number and the quality of the parameters indicate that this distribution may be the appropriate choice for the interpolation of hydrological variables; moreover, the Wakeby is particularly suitable for describing phenomena producing heavy tails. The proposed estimation methods for determining the values of the Wakeby parameters are the same as those used for density functions with heavy tails. The commonly used procedure is the classic method of moments weighted with probabilities (probability weighted moments, PWM), although this has often shown difficulty of convergence, or rather, convergence to a configuration of inappropriate parameters. In this paper, we analyze the problem of likelihood estimation for a random variable expressed through its quantile function. The method of maximum likelihood, in this case, is more demanding than in more usual estimation situations. The reasons for this lie in the sampling and asymptotic properties of the maximum likelihood estimators, which improve the estimates by providing indications of their variability and, therefore, their accuracy and reliability. These features are highly appreciated in contexts where poor decisions, attributable to an inefficient or incomplete information base, can cause serious damage.
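
Since the Wakeby distribution is defined through its quantile function, simulation proceeds by inverse transform, as in the sketch below; the parameter values are illustrative, not estimates from the precipitation data.

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function in the usual 5-parameter form
    x(F) = xi + (alpha/beta)[1-(1-F)^beta] - (gamma/delta)[1-(1-F)^(-delta)].
    Only the quantile function has a closed form, so simulation and likelihood
    work are carried out through it, as discussed above."""
    F = np.asarray(F, dtype=float)
    return (xi + alpha / beta * (1.0 - (1.0 - F) ** beta)
               - gamma / delta * (1.0 - (1.0 - F) ** (-delta)))

# Inverse-transform sampling: draw U ~ Uniform(0,1) and map through the quantile function.
rng = np.random.default_rng(4)
params = dict(xi=0.0, alpha=3.0, beta=0.5, gamma=0.3, delta=0.2)   # illustrative values
u = rng.uniform(size=10000)
sample = wakeby_quantile(u, **params)
print(np.percentile(sample, [50, 90, 99]))   # the heavy right tail shows in the upper quantiles
```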

Keywords: generalized extreme values, likelihood estimation, precipitation data, Wakeby distribution

Procedia PDF Downloads 129
1458 Search for APN Permutations in Rings ℤ_2×ℤ_2^k

Authors: Daniel Panario, Daniel Santana de Freitas, Brett Stevens

Abstract:

Almost Perfect Nonlinear (APN) permutations with optimal resistance against differential cryptanalysis can be found in several domains. The permutation used in the standard for symmetric cryptography (the AES), for example, is based on a special kind of inversion in GF(2⁸). Although very close to APN (2-uniform), this permutation still contains one value 4 in its differential spectrum, which means that, rigorously, it must be classified as 4-uniform. This fact motivates the search for fully APN permutations in other domains of definition. The extremely high complexity associated with this kind of problem precludes an exhaustive search for an APN permutation with 256 elements from being performed without the support of a suitable mathematical structure. On the other hand, in principle, there is nothing to indicate which mathematically structured domains can effectively help the search, and it is necessary to test several domains. In this work, the search for APN permutations in rings ℤ2×ℤ2ᵏ is investigated. After a full, exhaustive search with k=2 and k=3, all possible APN permutations in those rings were recorded, together with their differential profiles. Some very promising heuristics in these cases were collected so that, when used as a basis to prune backtracking for the same search in ℤ2×ℤ8 (a search space of size 16! ≅ 2⁴⁴), just a few tenths of a second were enough to produce an APN permutation on a single CPU. Those heuristics were empirically extrapolated so that they could be applied to a backtracking search for APNs over ℤ2×ℤ16 (a search space of size 32! ≅ 2¹¹⁷). The best permutations found in this search were further refined through Simulated Annealing, with a definition of neighbors suitable to this domain. The best result produced with this scheme was a 3-uniform permutation over ℤ2×ℤ16 with only 24 values equal to 3 in the differential spectrum (all the other 968 values were less than or equal to 2, as should be the case for an APN permutation). Although far from being fully APN, this result is technically better than a 4-uniform permutation and demanded only a few seconds on a single CPU. This is a strong indication that the use of mathematically structured domains, like the rings described in this work, together with heuristics based on smaller cases, can lead to dramatic cuts in the computational resources involved in the complexity of the search for APN permutations in extremely large domains.
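
The quantity being optimized, the differential uniformity of a permutation over ℤ2×ℤ2ᵏ, can be computed with the brute-force Python sketch below; the random permutation used to exercise it is of course not APN.

```python
from itertools import product
import random

# Differential uniformity of a permutation S over the group G = Z2 x Z2^k
# (componentwise addition mod 2 and mod 2^k). An APN permutation attains the
# minimum value 2.
def differential_uniformity(S, k):
    mod = 2 ** k
    G = list(product(range(2), range(mod)))
    worst = 0
    for a in G:
        if a == (0, 0):
            continue
        counts = {}
        for x in G:
            xa = ((x[0] + a[0]) % 2, (x[1] + a[1]) % mod)
            d = ((S[xa][0] - S[x][0]) % 2, (S[xa][1] - S[x][1]) % mod)
            counts[d] = counts.get(d, 0) + 1          # entry of the differential spectrum
        worst = max(worst, max(counts.values()))
    return worst

k = 3
G = list(product(range(2), range(2 ** k)))
perm_img = G[:]                 # random bijection on the 16 elements of Z2 x Z8
random.seed(0)
random.shuffle(perm_img)
S = dict(zip(G, perm_img))
print("differential uniformity:", differential_uniformity(S, k))
```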

Keywords: APN permutations, heuristic searches, symmetric cryptography, S-box design

Procedia PDF Downloads 147
1457 Quoting Jobshops Due Dates Subject to Exogenous Factors in Developing Nations

Authors: Idris M. Olatunde, Kareem B.

Abstract:

In manufacturing systems, especially job shops, service performance is a key factor that determines customer satisfaction. Service performance depends not only on the quality of the output but on the delivery lead times as well. Besides product quality enhancement, delivery lead time must be minimized for optimal patronage. Quoting accurate due dates is a sine qua non for job shop operational survival in a globally competitive environment. Quoting accurate due dates in job shops has been a herculean task that has nearly defied the solutions offered by many methods, owing to the complex job routing nature of the system. This class of NP-hard problems has no rigid algorithm that can give an optimal solution. The job shop operational problem is more complex in developing nations due to some peculiar factors. Operational complexity in job shops emanates from political instability, a poor economy, limited technological know-how, and an unpromising socio-political environment. These exogenous factors were hardly considered in previous studies on the scheduling problem related to due date determination in job shops. This study fills the gap created in past studies by developing a dynamic model that incorporates the exogenous factors for accurate determination of due dates for varying job complexities. Real data from six job shops selected from different parts of Nigeria were used to test the efficacy of the model, and the outcomes were analyzed statistically. The results of the analyses showed that the model is more promising in determining accurate due dates than the traditional models deployed by many job shops in terms of patronage and lead time minimization.

Keywords: due dates prediction, improved performance, customer satisfaction, dynamic model, exogenous factors, job shops

Procedia PDF Downloads 402
1456 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of errors in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, assuming full auxiliary information is available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response. In particular, the auxiliary information is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error values compared to existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
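
A minimal sketch of plain Nadaraya-Watson regression imputation under random non-response is shown below; the improved kernel weights and the two-stage cluster sampling design of the paper are not reproduced.

```python
import numpy as np

def nadaraya_watson(x_grid, x, y, h):
    """Classical Nadaraya-Watson estimator with a Gaussian kernel:
    m(x) = sum K((x - xi)/h) yi / sum K((x - xi)/h)."""
    u = (x_grid[:, None] - x[None, :]) / h
    w = np.exp(-0.5 * u**2)                        # Gaussian kernel weights
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# Toy use: impute the survey variable for non-respondents from the auxiliary variable x.
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 400)
y = np.sin(x) + 0.3 * rng.standard_normal(400)
respond = rng.uniform(size=400) > 0.3              # roughly 30% random non-response
m_hat = nadaraya_watson(x[~respond], x[respond], y[respond], h=0.5)
y_completed = y.copy()
y_completed[~respond] = m_hat                      # regression-imputed values
print(y_completed.mean())                          # estimate of the finite population mean
```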

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 124
1455 Dynamic Analysis of the Heat Transfer in the Magnetically Assisted Reactor

Authors: Tomasz Borowski, Dawid Sołoducha, Rafał Rakoczy, Marian Kordas

Abstract:

The application of magnetic fields is essential for a wide range of technologies and processes (e.g., magnetic hyperthermia, bioprocessing). From the practical point of view, bioprocess control is often limited to the regulation of temperature at constant values favourable to microbial growth. The main aim of this study is to determine the effect of various types of electromagnetic fields (i.e., static or alternating) on the heat transfer in a self-designed magnetically assisted reactor. The experimental set-up is equipped with a measuring instrument which controls the temperature of the liquid inside the container and supervises the real-time acquisition of all the experimental data coming from the sensors. Temperature signals are also sampled from the generator of the magnetic field. The obtained temperature profiles were mathematically described and analyzed. The parameters characterizing the response to a step input of a first-order dynamic system were obtained and discussed. For example, higher values of the time constant mean a slower signal (in this case, temperature) increase. After a period equal to about five time constants, the sample temperature nearly reaches its asymptotic value. This dynamical analysis allowed us to understand the heating effect under the action of various types of electromagnetic fields. Moreover, the proposed mathematical description can be used to compare the influence of different types of magnetic fields on heat transfer operations.
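
The first-order step response referred to above can be written as T(t) = T_inf + (T0 - T_inf) exp(-t/tau); the short sketch below evaluates it with illustrative values and confirms the five-time-constant remark.

```python
import numpy as np

def first_order_response(t, T0, T_inf, tau):
    """Step response of a first-order system: T(t) = T_inf + (T0 - T_inf) exp(-t/tau),
    where tau is the time constant discussed above."""
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

T0, T_inf, tau = 20.0, 45.0, 300.0     # illustrative values (deg C, seconds), not measured data
t = np.linspace(0, 6 * tau, 7)         # 0, tau, 2*tau, ..., 6*tau
for ti, Ti in zip(t, first_order_response(t, T0, T_inf, tau)):
    frac = (Ti - T0) / (T_inf - T0)
    print(f"t = {ti/tau:.0f} tau  ->  {frac*100:5.1f}% of the asymptotic rise")
# After five time constants the response has covered ~99.3% of the step.
```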

Keywords: heat transfer, magnetically assisted reactor, dynamical analysis, transient function

Procedia PDF Downloads 161
1454 Geometrical Fluid Model for Blood Rheology and Pulsatile Flow in Stenosed Arteries

Authors: Karan Kamboj, Vikramjeet Singh, Vinod Kumar

Abstract:

Considering blood to be a non-Newtonian Carreau fluid, this numerical model investigates pulsatile blood flow in a narrowed artery with multiple mild stenoses under the influence of body acceleration. Asymptotic solutions are obtained for the flow rate, pressure gradient, velocity profile, wall shear stress, and longitudinal impedance to flow by applying a two-fold perturbation approach to the resulting non-linear boundary value problem. It has been observed that the blood velocity increases with an increase in the angle of tapering of the artery, the body acceleration, and the power-law index, whereas the longitudinal impedance to flow and the wall shear stress show the opposite behaviour when each of these parameters increases. It has also been seen that the wall shear stress in the bloodstream increases greatly with an increase in the maximum depth of the stenosis but decreases significantly with an increase in the pulsatile Reynolds number, which is an interesting phenomenon. The estimates of the increase in the longitudinal resistance to flow grow overall with an increase in the maximum depth of the stenosis and the Weissenberg number. Additionally, it is noted that the average blood velocity increases noticeably with an increase in the angle of tapering of the artery and the body acceleration parameter.
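
The Carreau viscosity law underlying the non-Newtonian blood model has the standard form sketched below; the parameter values are commonly quoted blood-like numbers, not those fitted in the paper.

```python
import numpy as np

def carreau_viscosity(shear_rate, mu0, mu_inf, lam, n):
    """Carreau model: mu = mu_inf + (mu0 - mu_inf) * (1 + (lam*gamma_dot)^2)^((n-1)/2).
    Parameter values used below are illustrative blood-like numbers only."""
    return mu_inf + (mu0 - mu_inf) * (1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

gamma_dot = np.logspace(-2, 3, 6)                  # shear rates from 0.01 to 1000 1/s
mu = carreau_viscosity(gamma_dot, mu0=0.056, mu_inf=0.00345, lam=3.313, n=0.3568)
for g, m in zip(gamma_dot, mu):
    print(f"shear rate {g:8.2f} 1/s  ->  viscosity {m:.5f} Pa.s")
# Shear-thinning: viscosity falls from ~mu0 at low shear toward ~mu_inf at high shear.
```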

Keywords: geometry of artery, pulsatile blood flow, numerous stenosis

Procedia PDF Downloads 87
1453 Sensing of Cancer DNA Using Resonance Frequency

Authors: Sungsoo Na, Chanho Park

Abstract:

Lung cancer is one of the most common severe diseases leading to human death. Lung cancer can be divided into two types, small-cell lung cancer (SCLC) and non-SCLC (NSCLC), and about 80% of lung cancers are NSCLC. Several studies have investigated the correlation between the epidermal growth factor receptor (EGFR) and NSCLC. Therefore, EGFR inhibitor drugs such as gefitinib and erlotinib have been used as lung cancer treatments. However, these treatments showed low response rates (10-20%) in clinical trials due to EGFR mutations that cause drug resistance. Patients with resistance to EGFR inhibitor drugs are usually positive for KRAS mutations. Therefore, assessment of EGFR and KRAS mutations is essential for targeted therapies of NSCLC patients. In order to overcome the limitations of conventional therapies, overall EGFR and KRAS mutations have to be monitored. In this work, only the detection of EGFR will be presented. A variety of techniques has been presented for the detection of EGFR mutations. The standard method for detecting EGFR mutations in ctDNA relies on real-time polymerase chain reaction (PCR). The real-time PCR method provides highly sensitive detection performance. However, as the number of amplification steps increases, cost and complexity increase as well. Other types of technology, such as BEAMing, next generation sequencing (NGS), electrochemical sensors and silicon nanowire field-effect transistors, have been presented. However, those technologies have limitations of low sensitivity, high cost and complex data analysis. In this report, we propose a label-free and highly sensitive detection method for lung cancer using a quartz crystal microbalance based platform. The proposed platform is able to sense lung cancer mutant DNA with a limit of detection of 1 nM.

Keywords: cancer DNA, resonance frequency, quartz crystal microbalance, lung cancer

Procedia PDF Downloads 221
1452 The Design and Implementation of an Enhanced 2D Mesh Switch

Authors: Manel Langar, Riad Bourguiba, Jaouhar Mouine

Abstract:

In this paper, we propose the design and implementation of an enhanced wormhole virtual channel on-chip router. It is the heart of a mesh NoC using the XY deterministic routing algorithm. It is characterized by a simple virtual channel allocation strategy which reduces the area and complexity of connections without affecting performance. We implemented our router on a Tezzaron process to validate its performance. This router is a basic element that will be used later to design a 3D mesh NoC.
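
The XY deterministic routing algorithm mentioned above is simple enough to sketch directly: route along the X dimension first, then along Y.

```python
def xy_route(src, dst):
    """XY deterministic routing on a 2D mesh: move along X to the destination column
    first, then along Y. Returns the sequence of router coordinates visited."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                       # X dimension first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                       # then Y dimension
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

# Example on a 4x4 mesh: route from router (0, 3) to router (2, 0).
print(xy_route((0, 3), (2, 0)))
# [(0, 3), (1, 3), (2, 3), (2, 2), (2, 1), (2, 0)]
```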

Keywords: NoC, mesh, router, 3D NoC

Procedia PDF Downloads 554
1451 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul

Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee

Abstract:

This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes by considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As the demand for healthcare services increases, so does the need for new healthcare buildings as well as for redesigning and renovating existing ones. The importance of implementing a standard set of engineering facilities planning and design techniques has already been proven in both the manufacturing and service industries, with many significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques are applied in this study to tackle the problem of complexity and to enhance care process analysis. Process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method has a 42.2% improvement in terms of the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients at a minimum cost of relocation. It has been observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design. A carefully designed layout can significantly decrease patient walking distance and related complications.

Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes

Procedia PDF Downloads 275
1450 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor

Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti

Abstract:

In a product industrialization perspective, the end-product should always be at the peak of technological advancement and developed in the shortest time possible. Thus, the constant growth of complexity and a shorter time-to-market call for important changes on both the technical and business levels. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain and increase the information exchange between departments to ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way so that the different stakeholders may identify their needs and ideas. This article focuses on the usage of Model-Based Systems Engineering (MBSE) in a perspective of system industrialization and reconnects engineering with the sales team. The modeling method used and presented in this paper concentrates on representing the needs of the customer as closely as possible. Firstly, it provides a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities. Secondly, the model simulates a vast number of possibilities across a wide range of components. It becomes a dynamic tool for powerful analysis and optimization. Thus, the model is no longer only a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through the illustration of a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.

Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking

Procedia PDF Downloads 144
1449 A Refined Nonlocal Strain Gradient Theory for Assessing Scaling-Dependent Vibration Behavior of Microbeams

Authors: Xiaobai Li, Li Li, Yujin Hu, Weiming Deng, Zhe Ding

Abstract:

A size-dependent Euler-Bernoulli beam model, which accounts for the nonlocal stress field, the strain gradient field and a higher-order inertia force field, is derived based on the nonlocal strain gradient theory considering the velocity gradient effect. The governing equations and boundary conditions are derived in both dimensional and dimensionless form by employing Hamilton's principle. The analytical solutions based on different continuum theories are compared. The effect of the higher-order inertia terms is extremely significant in the high frequency range. It is found that there exists an asymptotic frequency for the proposed beam model, while for the nonlocal strain gradient theory the solutions diverge. The effect of the strain gradient field in the thickness direction is significant in the low frequency domain, and it cannot be neglected when the material strain length scale parameter is comparable with the beam thickness. The influence of each of the three size effect parameters on the natural frequencies is investigated. The natural frequencies increase with an increasing material strain gradient length scale parameter or a decreasing velocity gradient length scale parameter and nonlocal parameter.

Keywords: Euler-Bernoulli Beams, free vibration, higher order inertia, Nonlocal Strain Gradient Theory, velocity gradient

Procedia PDF Downloads 258
1448 The Process of Crisis: Model of Its Development in the Organization

Authors: M. Mikušová

Abstract:

The main aim of this paper is to present a clear and comprehensive picture of the process of a crisis in an organization, which will help to better understand its possible developments. For a description of the sequence of individual steps and an indication of their causation and possible variants of development, a detailed flow diagram with verbal commentary is applied. For simplicity, the process of the crisis is observed in four basic phases called: symptoms of the crisis, diagnosis, action and prevention. The model highlights the complexity of the phenomenon of the crisis and the fact that its various phases are interwoven.

Keywords: crisis, management, model, organization

Procedia PDF Downloads 281
1447 Demand-Oriented Supplier Integration in Agile New Product Development Projects

Authors: Guenther Schuh, Stephan Schroeder, Marcel Faulhaber

Abstract:

Companies have been facing increasing pressure to innovate faster, cheaper and more radically in recent years, due to shrinking product lifecycles and higher volatility of markets and customer demands. Established companies in particular struggle to meet those demands. Thus, many producing companies are adapting their development processes to address this increasing pressure. One approach taken by many companies is the use of agile, highly iterative development processes to reduce development times and costs as well as to increase the fulfilment of customer requirements and the realized level of innovation. At the same time, a decreasing depth of added value and an increasing focus on core competencies, as well as growing product complexity, result in a high dependency on suppliers and external development partners during product development. Thus, a successful introduction of agile development methods into the development of physical products also requires a successful integration of the necessary external partners and suppliers into the new processes and procedures, and an adaptation of the organizational interfaces to external partners according to the new circumstances and requirements of agile development processes. For effective and efficient product development, the design of customer-supplier relationships should be demand-oriented. The characteristics of the procurement object have a significant influence on the required design. Examples are the complexity of the technical interfaces between the supply object and the final product, or the importance of the supplied component for the major product functionalities. Thus, this paper presents an approach to derive general requirements for the design of supplier integration according to the characteristics of supply objects. First, the most relevant evaluation criteria and characteristics were identified based on a thorough literature review. Subsequently, the resulting requirements for the design of supplier integration were derived depending on the different possible values of these criteria.

Keywords: iterative development processes, agile new product development, procurement, supplier integration

Procedia PDF Downloads 164
1446 Linking Temporal Changes of Climate Factors with Staple Cereal Yields in Southern Burkina Faso

Authors: Pius Borona, Cheikh Mbow, Issa Ouedraogo

Abstract:

In the Sahel, climate variability has been associated with a complex web of direct and indirect impacts. This natural phenomenon has been an impediment to agro-pastoral communities, who experience uncertainty while engaging in farming activities, which are also their key source of livelihood. In this context, the role of climate variability in influencing the performance, quantity and quality of staple cereal yields, vital for food and nutrition security, has been a topic of importance. The response of crops and the resulting yield variability are also subjects of immense debate due to the complexity of crop development at different stages. This complexity is further compounded by the influence of slowly changing non-climatic factors. With these challenges in mind, the present paper initially explores the occurrence of climate variability at inter-annual and inter-decadal levels in southern Burkina Faso. This is evidenced by variation in the total annual rainfall and the number of rainy days, among other climatic descriptors. Further, it is shown how district-scale cereal yields in the study area, including maize, sorghum and millet, are variably associated with the inter-annual variation of selected climate variables. Statistical models show that the three cereals are widely sensitive to the length of the growing period and the total dry days in the growing season. Maize yields, on the other hand, relate strongly to variation in rainfall amount (R²=51.8%), showing high moisture dependence during critical growth stages. Our conclusions emphasize the adoption of efficient water utilization platforms, especially those that have evidently increased yields, and the strengthening of forecast dissemination.

Keywords: climate variability, cereal yields, seasonality, rain fed farming, Burkina Faso, rainfall

Procedia PDF Downloads 191
1445 A Framework for Assessing and Implementing Ecological-Based Adaptation Solutions in Urban Areas of Shanghai

Authors: Xin Li

Abstract:

The uncertainty and complexity of the urban environment, combined with the threat of climate change, are contributing factors to multi-dimensional vulnerability in Chinese megacities, especially in Shanghai. The urban area, occupied by highly valuable technological infrastructure and high-density buildings, is under threat from climate change and can provide insufficient ecological services to maintain the trade-offs required for urban sustainable development. Urban ecological-based adaptation (UEbA) combines practice and theoretical work and integrates ecological services into multiple layers of urban environment planning in order to reduce the impact of this complexity and uncertainty. To understand and respond to the challenges at the urban level, this paper considers Shanghai as the research object. It is necessary that its urban adaptation strategies reflect and contain the concepts and knowledge of EbA. In this paper, we first use software to visualize the patterns and trends of UEbA research over the last 10 years. Specifically, CiteSpace software was used to identify the significant hubs and landmark points of the peer-reviewed literature in the context of ecological service research over the last 10 years. Secondly, 135 pieces of evidence-based EbA literature were reviewed to categorize the methodologies and frameworks of evidence-based EbA following a systematic map protocol. Finally, a conceptual framework combining cultural, economic and social components was developed in order to assess the current adaptation strategies in Shanghai. This research finds that the key to reducing urban vulnerability does not lie only in co-benefit arguments but should also pay more attention to the concept of trade-offs. This research concludes that the designed framework can provide key knowledge and indicates the essential gaps, serving as a valuable tool against climate variability in the process of urban adaptation in Shanghai.

Keywords: urban ecological-based adaptation, climate change, sustainable development, climate variability

Procedia PDF Downloads 146
1444 A Hybrid Classical-Quantum Algorithm for Boundary Integral Equations of Scattering Theory

Authors: Damir Latypov

Abstract:

A hybrid classical-quantum algorithm to solve boundary integral equations (BIE) arising in problems of electromagnetic and acoustic scattering is proposed. The quantum speed-up is due to a Quantum Linear System Algorithm (QLSA). The original QLSA of Harrow et al. provides an exponential speed-up over the best-known classical algorithms, but only in the case of sparse systems. Due to the non-local nature of integral operators, matrices arising from the discretization of BIEs are, however, dense. A QLSA for dense matrices was introduced in 2017. Its runtime as a function of the system's size N is bounded by O(√N polylog(N)). The runtime of the best-known classical algorithm for an arbitrary dense matrix scales as O(N^2.373). Instead of exponential, as in the case of sparse matrices, here we have only a polynomial speed-up. Nevertheless, the sufficiently high power of this polynomial, ~4.7, should make the QLSA an appealing alternative. Unfortunately for the QLSA, the asymptotic separability of the Green's function leads to high compressibility of the BIE matrices. Classical fast algorithms such as the Multilevel Fast Multipole Method (MLFMM) take advantage of this fact and reduce the runtime to O(N log(N)), i.e., the QLSA is only quadratically faster than the MLFMM. To be truly impactful for computational electromagnetics and acoustics engineers, the QLSA must provide a more substantial advantage than that. We propose a computational scheme which combines elements of the classical fast algorithms with the QLSA to achieve the required performance.

Keywords: quantum linear system algorithm, boundary integral equations, dense matrices, electromagnetic scattering theory

Procedia PDF Downloads 139
1443 Black-Hole Dimension: A Distinct Methodology of Understanding Time, Space and Data in Architecture

Authors: Alp Arda

Abstract:

Inspired by Nolan's ‘Interstellar’, this paper delves into speculative architecture, asking, ‘What if an architect could traverse time to study a city?’ It unveils the ‘Black-Hole Dimension,’ a groundbreaking concept that redefines urban identities beyond traditional boundaries. Moving past linear time narratives, this approach draws from the gravitational dynamics of black holes to enrich our understanding of urban and architectural progress. By envisioning cities and structures as influenced by black hole-like forces, it enables an in-depth examination of their evolution through time and space. The Black-Hole Dimension promotes a temporal exploration of architecture, treating spaces as narratives of their current state interwoven with historical layers. It advocates for viewing architectural development as a continuous, interconnected journey molded by cultural, economic, and technological shifts. This approach not only deepens our understanding of urban evolution but also empowers architects and urban planners to create designs that are both adaptable and resilient. Echoing themes from popular culture and science fiction, this methodology integrates the captivating dynamics of time and space into architectural analysis, challenging established design conventions. The Black-Hole Dimension champions a philosophy that welcomes unpredictability and complexity, thereby fostering innovation in design. In essence, the Black-Hole Dimension revolutionizes architectural thought by emphasizing space-time as a fundamental dimension. It reimagines our built environments as vibrant, evolving entities shaped by the relentless forces of time, space, and data. This groundbreaking approach heralds a future in architecture where the complexity of reality is acknowledged and embraced, leading to the creation of spaces that are both responsive to their temporal context and resilient against the unfolding tapestry of time.

Keywords: black-hole, timeline, urbanism, space and time, speculative architecture

Procedia PDF Downloads 53