Search results for: robustness
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 565

85 Maker Education as Means for Early Entrepreneurial Education: Evaluation Results from a European Pilot Action

Authors: Elisabeth Unterfrauner, Christian Voigt

Abstract:

Since the foundation of the first Fab Lab by the Massachusetts Institute of Technology about 17 years ago, the Maker movement has spread globally with the foundation of maker spaces and Fab Labs worldwide. In these workshops, citizens have access to digital fabrication technologies such as 3D printers and laser cutters to develop and test their own ideas and prototypes, which makes them attractive places for start-up companies. Know-how is shared not only in the physical space but also online in diverse communities. According to the Horizon report, the Maker movement will also have an impact on educational settings in the coming years. The European project ‘DOIT - Entrepreneurial skills for young social innovators in an open digital world’ has incorporated key elements of making to develop an early entrepreneurial education program for children between the ages of six and 16. The Maker pedagogy builds on constructive learning approaches, learning-by-doing principles, learning in collaborative and interdisciplinary teams, and learning through trial and error, where mistakes are acknowledged as learning opportunities. The DOIT program consists of seven consecutive elements. It starts with a motivation phase where students get motivated by envisioning the scope of their possibilities. The second step is about co-design: students are asked to collect and select potential ideas for innovations. In the co-creation phase, students gather in teams and develop first prototypes of their ideas. In the iteration phase, the prototype is continuously improved, and in the next step, the reflection phase, feedback on the prototypes is exchanged between the teams. In the last two steps, scaling and reaching out, the robustness of the prototype is tested with a bigger group of users outside of the educational setting, and finally, students share their projects with a wider public. The DOIT program involves 1,000 children in two pilot phases at 11 pilot sites in ten different European countries. The comprehensive evaluation design is based on a mixed-methods approach with a theoretical backbone in Lackeus’ model of entrepreneurship education, which distinguishes between entrepreneurial attitudes, entrepreneurial skills and entrepreneurial knowledge. A pre-post-test with quantitative measures as well as qualitative data from interviews with facilitators, students and workshop protocols will reveal the effectiveness of the program. The evaluation results will be presented at the conference.

Keywords: early entrepreneurial education, Fab Lab, maker education, Maker movement

Procedia PDF Downloads 133
84 Internal Financing Constraints and Corporate Investment: Evidence from Indian Manufacturing Firms

Authors: Gaurav Gupta, Jitendra Mahakud

Abstract:

This study focuses on the significance of internal financing constraints in the determination of corporate fixed investment in the case of Indian manufacturing companies. Financially constrained companies, which have fewer internal funds or retained earnings, face higher transaction and borrowing costs due to imperfections in the capital market. The period of study is 1999-2000 to 2013-2014, and we consider 618 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods like fixed effects and random effects are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test, and Hausman test results confirm the suitability of the fixed effect model for the estimation. The cash flow and liquidity of the company have been used as the proxies for internal financial constraints. In accordance with various theories of corporate investment, we consider other firm-specific variables like firm age, firm size, profitability, sales and leverage as the control variables in the model. From the econometric analysis, we find that internal cash flow and liquidity have a significant and positive impact on corporate investment. Variables like cost of capital, sales growth and growth opportunities are found to significantly determine corporate investment in India, which is consistent with the neoclassical, accelerator and Tobin’s q theories of corporate investment. To check the robustness of the results, we divided the sample on the basis of cash flow and liquidity. Firms having cash flow greater than zero are put under one group, and firms with cash flow less than zero are put under another group. The firms are also divided on the basis of liquidity following the same approach. We find that the results are robust for both types of companies, those having positive and those having negative cash flow and liquidity. The results for the other variables are also in the same line as those for the whole sample. These findings confirm that internal financing constraints play a significant role in the determination of corporate investment in India. The findings of this study have implications for corporate managers to focus on projects having higher expected cash inflows to avoid financing constraints. Apart from that, they should also maintain adequate liquidity to minimize external financing costs.
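For readers who want to see the econometric setup in code, the sketch below illustrates a firm fixed-effects regression of investment on cash flow and liquidity with firm-clustered standard errors; the data and the variable names (investment, cash_flow, liquidity, size, sales_growth) are synthetic stand-ins, not the PROWESS sample used in the study.

```python
# A minimal sketch of the fixed-effects specification described above, using
# synthetic data; variable names are illustrative, not the authors' dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, n_years = 50, 15
df = pd.DataFrame({
    "firm": np.repeat(np.arange(n_firms), n_years),
    "year": np.tile(np.arange(2000, 2000 + n_years), n_firms),
    "cash_flow": rng.normal(size=n_firms * n_years),
    "liquidity": rng.normal(size=n_firms * n_years),
    "size": rng.normal(size=n_firms * n_years),
    "sales_growth": rng.normal(size=n_firms * n_years),
})
df["investment"] = 0.3 * df["cash_flow"] + 0.2 * df["liquidity"] + rng.normal(size=len(df))

# Firm fixed effects via entity dummies; a Hausman-style comparison with a
# random-effects specification would decide between the two models.
fe = smf.ols("investment ~ cash_flow + liquidity + size + sales_growth + C(firm)",
             data=df).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(fe.params[["cash_flow", "liquidity"]])
```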

Keywords: cash flow, corporate investment, financing constraints, panel data method

Procedia PDF Downloads 241
83 Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

Authors: Arindam Chaudhuri

Abstract:

Classification of data has long been one of the most effective and efficient means of conveying knowledge and information to users. The primary focus has always been on techniques for extracting useful knowledge from data such that returns are maximized. With the emergence of huge datasets, the existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments are done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outliers’ effects, imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
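As a point of reference, the following minimal single-node sketch shows a support vector machine with scikit-learn's sigmoid (hyperbolic tangent) kernel, the kernel family named above; the fuzzy rough membership weighting and the Hadoop/MapReduce parallelization of PFRSVM are not reproduced, and the dataset is synthetic.

```python
# A minimal sketch of an SVM with a hyperbolic tangent (sigmoid) kernel;
# the fuzzy-rough weighting and cloud parallelization are omitted.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
# kernel='sigmoid' is scikit-learn's tanh kernel: tanh(gamma * <x, x'> + coef0)
clf = SVC(kernel="sigmoid", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_train), y_train)

print("accuracy:", clf.score(scaler.transform(X_test), y_test))
print("number of support vectors:", clf.n_support_.sum())
```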

Keywords: FRSVM, Hadoop, MapReduce, PFRSVM

Procedia PDF Downloads 490
82 Part Variation Simulations: An Industrial Case Study with an Experimental Validation

Authors: Narendra Akhadkar, Silvestre Cano, Christophe Gourru

Abstract:

Injection-molded parts are widely used in power system protection products. One of the biggest challenges in an injection molding process is shrinkage and warpage of the molded parts. All these geometrical variations may have an adverse effect on the quality of the product, functionality, cost, and time-to-market. The situation becomes more challenging in the case of intricate shapes and in mass production using multi-cavity tools. To control the effects of shrinkage and warpage, it is very important to correctly identify the input parameters that could affect the product performance. With the advances in computer-aided engineering (CAE), different tools are available to simulate the injection molding process. For our case study, we used the Moldflow Insight tool. Our aim is to predict the spread of the functional dimensions and geometrical variations on the part due to variations in input parameters such as material viscosity, packing pressure, mold temperature, melt temperature, and injection speed. The input parameters may vary during batch production or due to variations in the machine process settings. To perform an accurate product assembly variation simulation, the first step is to perform an individual part variation simulation to render realistic tolerance ranges. In this article, we present a method to simulate part variations coming from the variation of input parameters during batch production. The method is based on computer simulations and experimental validation using a full factorial design of experiments (DoE). The robustness of the simulation model is verified through a parameter-wise sensitivity analysis performed using simulations and experiments; all the results show a very good correlation in the material flow direction. There exists a non-linear interaction between the material and the input process variables. It is observed that parameters such as packing pressure, material, and mold temperature play an important role in the spread of functional dimensions and geometrical variations. This method will allow us in the future to develop accurate/realistic virtual prototypes based on trusted simulated process variation and, therefore, increase the product quality and potentially decrease the time to market.

Keywords: correlation, molding process, tolerance, sensitivity analysis, variation simulation

Procedia PDF Downloads 178
81 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizers profoundly affects their performance. The study of optimizers and their adaptations remains a topic of significant importance in machine learning research. While numerous studies have explored and advocated for various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work aims to address the challenges surrounding the effectiveness of optimizers by conducting a comprehensive analysis and evaluation. The primary focus of this investigation lies in examining the performance of different optimizers when employed in conjunction with the popular activation function, Rectified Linear Unit (ReLU). By incorporating ReLU, known for its favorable properties in prior research, the aim is to bolster the effectiveness of the optimizers under scrutiny. Specifically, we evaluate the adjustment of these optimizers with both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments are conducted using a well-established benchmark dataset for image classification tasks, namely the Canadian Institute for Advanced Research dataset (CIFAR-10). The selected optimizers for investigation encompass a range of prominent algorithms, including Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The performance analysis encompasses a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insights into their suitability for image classification tasks. By conducting this in-depth study, we contribute to the existing body of knowledge surrounding optimizers in CNNs, shedding light on their performance characteristics for image classification. The findings gleaned from this research serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, thus advancing the state-of-the-art in the field of image classification with convolutional neural networks.
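The optimizer comparison can be sketched roughly as follows with a small Keras CNN using ReLU activations on CIFAR-10; the architecture, epoch budget, and hyperparameters here are illustrative assumptions rather than the study's exact configuration.

```python
# A condensed, illustrative sketch of comparing optimizers on CIFAR-10 with a
# small ReLU CNN; not the exact architecture or schedule used in the study.
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

def build_cnn():
    return models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])

candidates = {
    "adam": optimizers.Adam(),
    "rmsprop": optimizers.RMSprop(),
    "adadelta": optimizers.Adadelta(),
    "adagrad": optimizers.Adagrad(),
    "sgd": optimizers.SGD(),
}
for name, opt in candidates.items():
    model = build_cnn()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.3f}")
```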

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 125
80 An Examination of Earnings Management by Publicly Listed Targets Ahead of Mergers and Acquisitions

Authors: T. Elrazaz

Abstract:

This paper examines accrual and real earnings management by publicly listed targets around mergers and acquisitions. Prior literature shows that earnings management around mergers and acquisitions can have a significant economic impact because of the associated wealth transfers among stakeholders. More importantly, acting on behalf of their shareholders or pursuing their self-interests, managers of both targets and acquirers may be equally motivated to manipulate earnings prior to an acquisition to generate higher gains for their shareholders or themselves. Building on the grounds of information asymmetry, agency conflicts, stewardship theory, and the revelation principle, this study addresses the question of whether takeover targets employ accrual and real earnings management in the periods prior to the announcement of Mergers and Acquisitions (M&A). Additionally, this study examines whether acquirers are able to detect targets’ earnings management and, in response, adjust the acquisition premium paid in order not to face the risk of overpayment. This study uses an aggregate accruals approach in estimating accrual earnings management as proxied by estimated abnormal accruals. Additionally, real earnings management is proxied for by employing widely used models in the accounting and finance literature. The results of this study indicate that takeover targets manipulate their earnings using accruals in the second year with an earnings release prior to the announcement of the M&A. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals. These results are consistent with the argument that targets of cash-only or mixed-payment deals do not have the same strong motivations to manage their earnings as their stock-financed counterparts do, additionally supporting the findings of prior studies that the method of payment in takeovers is value-relevant. The findings of this study also indicate that takeover targets manipulate earnings upwards through cutting discretionary expenses in the year prior to the acquisition, while they do not do so by manipulating sales or production costs. Moreover, in partitioning the sample of targets according to the method of payment used in the deal, the results are restricted only to targets of stock-financed deals, providing further robustness to the results derived under the accrual-based models. Finally, this study finds evidence suggesting that acquirers are fully aware of the accrual-based techniques employed by takeover targets and can unveil such manipulation practices. These results are robust to alternative accrual and real earnings management proxies, as well as to controlling for the method of payment in the deal.

Keywords: accrual earnings management, acquisition premium, real earnings management, takeover targets

Procedia PDF Downloads 116
79 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problem caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with massive antenna array sensors. To alleviate this difficulty, a generalized sidelobe canceller (GSC) with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems arising in local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and are utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
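A bare-bones narrowband GSC, without the proposed local-scattering correction or subspace projection, can be written as below; the uniform linear array, the presumed look direction, and the random snapshots are assumptions for illustration only.

```python
# A minimal narrowband GSC sketch in NumPy: quiescent weights from the presumed
# steering vector, a blocking matrix from its orthogonal complement, and
# adaptive weights that minimize output power.
import numpy as np

def steering_vector(n_sensors, theta_deg, d=0.5):
    # Uniform linear array, inter-element spacing d in wavelengths.
    k = np.arange(n_sensors)
    return np.exp(1j * 2 * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def gsc_weights(snapshots, theta_presumed, n_sensors=10):
    a = steering_vector(n_sensors, theta_presumed)[:, None]      # (N, 1)
    w_q = a / (a.conj().T @ a)                                   # quiescent weights
    # Blocking matrix: orthonormal basis of the null space of a^H.
    _, _, vh = np.linalg.svd(a.conj().T)
    B = vh[1:, :].conj().T                                       # (N, N-1)
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # sample covariance
    w_a = np.linalg.solve(B.conj().T @ R @ B, B.conj().T @ R @ w_q)
    return w_q - B @ w_a                                         # GSC weight vector

# Usage: snapshots is an (n_sensors, n_snapshots) array of received array data.
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 500)) + 1j * rng.standard_normal((10, 500))
w = gsc_weights(x, theta_presumed=10.0)
y = w.conj().T @ x   # beamformer output
```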

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 112
78 The Trade Flow of Small Association Agreements When Rules of Origin Are Relaxed

Authors: Esmat Kamel

Abstract:

This paper aims to shed light on the extent to which the Agadir Association Agreement has fostered interregional trade between the E.U_26 and the Agadir_4 countries, once we control for the evolution of the Agadir agreement’s exports to the rest of the world. The next question concerns any remarkable variation in the spatial/sectoral structure of exports, and to what extent it has been induced by the Agadir agreement itself, precisely after the adoption of rules of origin and the PANEURO diagonal cumulation scheme. The paper’s empirical dataset, covering the timeframe 2000-2009, was designed to account for sector-specific export and intermediate flows; the bilateral structured gravity model was custom-tailored to capture sector- and regime-specific rules of origin, and the Poisson Pseudo Maximum Likelihood estimator was used to estimate the gravity equation. The methodological approach of this work is threefold. It starts by conducting a hierarchical cluster analysis to classify final export flows showing a certain degree of linkage between each other. The analysis resulted in three main sectoral clusters of exports between Agadir_4 and E.U_26: cluster 1 for petrochemical-related sectors, cluster 2 for durable goods, and cluster 3 for heavy-duty machinery and spare parts sectors. In the second step, export flows from the three clusters are subjected to treatment with diagonal rules of origin through a double-differences approach, versus an equally comparable untreated control group. The third step verifies the results through a robustness check applying propensity score matching to validate that the same sectoral final export and intermediate flows increased when rules of origin were relaxed. Throughout this analysis, the interaction term combining treatment effects and time turned out to be partially significant for 13 out of the 17 covered sectors, further asserting that treatment with diagonal rules of origin contributed to increasing Agadir_4 final and intermediate exports to the E.U._26 by 335% on average and to changing the structure and composition of Agadir_4 exports to the E.U._26 countries.
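The double-differences step on the gravity equation might look roughly like the following PPML sketch; the synthetic exporter/importer variables and the treated/post indicators are illustrative stand-ins for the paper's sector- and regime-specific data.

```python
# A minimal sketch of a PPML gravity estimation with a treatment-time
# interaction (double differences); variable names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "log_gdp_exporter": rng.normal(10, 1, n),
    "log_gdp_importer": rng.normal(10, 1, n),
    "log_distance": rng.normal(7, 0.5, n),
    "treated": rng.integers(0, 2, n),      # sector treated with diagonal RoO
    "post": rng.integers(0, 2, n),         # after cumulation scheme adoption
})
mu = np.exp(0.8 * df.log_gdp_exporter + 0.7 * df.log_gdp_importer
            - 1.0 * df.log_distance + 0.3 * df.treated * df.post - 8)
df["exports"] = rng.poisson(mu)

# PPML: a Poisson GLM keeps zero trade flows and is robust to heteroskedasticity.
ppml = smf.glm("exports ~ log_gdp_exporter + log_gdp_importer + log_distance"
               " + treated * post", data=df,
               family=sm.families.Poisson()).fit(cov_type="HC1")
print(ppml.params["treated:post"])   # double-differences effect on exports
```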

Keywords: agadir association agreement, structured gravity model, hierarchical cluster analysis, double differences estimation, propensity score matching, diagonal and relaxed rules of origin

Procedia PDF Downloads 319
77 The Impact of Monetary Policy on Aggregate Market Liquidity: Evidence from Indian Stock Market

Authors: Byomakesh Debata, Jitendra Mahakud

Abstract:

The recent financial crisis was characterized by massive monetary policy interventions by central banks, and it has amplified the importance of liquidity for the stability of the stock market. This paper empirically elucidates the actual impact of monetary policy interventions on stock market liquidity, covering all National Stock Exchange (NSE) stocks that have been traded continuously from 2002 to 2015. The present study employs a multivariate VAR model along with the VAR-Granger causality test, impulse response functions, block exogeneity test, and variance decomposition to analyze the direction as well as the magnitude of the relationship between monetary policy and market liquidity. Our analysis documents a unidirectional relationship between monetary policy (call money rate, base money growth rate) and aggregate market liquidity (traded value, turnover ratio, Amihud illiquidity ratio, turnover price impact, high-low spread). The impulse response function analysis clearly depicts the influence of monetary policy on stock liquidity for every unit innovation in the monetary policy variables. Our results suggest that an expansionary monetary policy increases aggregate stock market liquidity, and the reverse is documented during the tightening of monetary policy. To ascertain whether our findings are consistent across all periods, we divided the period of study into a pre-crisis period (2002 to 2007) and a post-crisis period (2007 to 2015) and ran the same set of models. Interestingly, all liquidity variables are highly significant in the post-crisis period. However, the pre-crisis period has witnessed only moderate predictability of monetary policy. To check the robustness of our results, we ran the same set of VAR models with different monetary policy variables and found similar results. Unlike previous studies, we found most of the liquidity variables to be significant throughout the sample period. This reveals the predictability of monetary policy for aggregate market liquidity. This study contributes to the existing body of literature by documenting a strong predictability of monetary policy for stock liquidity in an emerging economy with an order-driven market making system like India. Most of the previous studies have been carried out in developing economies with quote-driven or hybrid market making systems, and their results are ambiguous across different periods. In a broader sense, this study may be considered a baseline study to further identify the macroeconomic determinants of the liquidity of stocks at the individual as well as the aggregate level.
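The VAR pipeline described above can be sketched as follows with statsmodels; the three synthetic series (call_rate, amihud_illiq, turnover) are placeholders for the paper's monetary policy and liquidity variables.

```python
# A minimal sketch of the VAR workflow (Granger causality, impulse responses,
# variance decomposition) on synthetic series standing in for the real data.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 200
call_rate = rng.normal(6.0, 0.25, n)
illiquidity = 0.4 * np.roll(call_rate, 1) + rng.normal(0, 0.2, n)
turnover = -0.3 * np.roll(call_rate, 1) + rng.normal(0, 0.2, n)
df = pd.DataFrame({"call_rate": call_rate,
                   "amihud_illiq": illiquidity,
                   "turnover": turnover}).iloc[1:]

res = VAR(df).fit(maxlags=6, ic="aic")
causality = res.test_causality("amihud_illiq", ["call_rate"], kind="wald")
print(causality.summary())
irf = res.irf(12)      # impulse responses to unit innovations
fevd = res.fevd(12)    # forecast error variance decomposition
fevd.summary()         # prints the decomposition table
```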

Keywords: market liquidity, monetary policy, order driven market, VAR, vector autoregressive model

Procedia PDF Downloads 375
76 Stability Indicating RP – HPLC Method Development, Validation and Kinetic Study for Amiloride Hydrochloride and Furosemide in Pharmaceutical Dosage Form

Authors: Jignasha Derasari, Patel Krishna M, Modi Jignasa G.

Abstract:

Chemical stability of pharmaceutical molecules is a matter of great concern as it affects the safety and efficacy of the drug product. Stability testing data provide the basis to understand how the quality of a drug substance and drug product changes with time under the influence of various environmental factors. Besides this, it also helps in selecting the proper formulation and package as well as in defining proper storage conditions and shelf life, which is essential for regulatory documentation. The ICH guideline states that stress testing is intended to identify the likely degradation products, which further helps in determining the intrinsic stability of the molecule, establishing degradation pathways, and validating the stability-indicating procedures. A simple, accurate and precise stability-indicating RP-HPLC method was developed and validated for the simultaneous estimation of Amiloride Hydrochloride and Furosemide in tablet dosage form. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 µm particle size) using a mobile phase consisting of orthophosphoric acid:acetonitrile (50:50 % v/v, pH 3.5 adjusted with 0.1% TEA in water) at a flow rate of 1.0 ml/min in isocratic mode, with an injection volume of 20 µl and a detection wavelength of 283 nm. The retention times for Amiloride Hydrochloride and Furosemide were 1.810 min and 4.269 min, respectively. Linearity of the proposed method was obtained in the ranges of 40-60 µg/ml and 320-480 µg/ml, and the correlation coefficients were 0.999 and 0.998 for Amiloride Hydrochloride and Furosemide, respectively. A forced degradation study was carried out on the combined dosage form under various stress conditions such as hydrolysis (acid and base hydrolysis), oxidative and thermal conditions as per ICH guideline Q2 (R1). The RP-HPLC method has shown adequate separation of Amiloride Hydrochloride and Furosemide from their degradation products. The proposed method was validated as per ICH guidelines for specificity, linearity, accuracy, precision and robustness for the estimation of Amiloride Hydrochloride and Furosemide in a commercially available tablet dosage form, and the results were found to be satisfactory and significant. The developed and validated stability-indicating RP-HPLC method can be used successfully for marketed formulations. Forced degradation studies help in generating degradants in a much shorter span of time, mostly a few weeks, and can be used to develop the stability-indicating method, which can later be applied to the analysis of samples generated from accelerated and long-term stability studies. Further, a kinetic study was also performed for the different forced degradation parameters of the same combination, which helps in determining the order of reaction.

Keywords: amiloride hydrochloride, furosemide, kinetic study, stability indicating RP-HPLC method validation

Procedia PDF Downloads 465
75 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves

Authors: Shengnan Chen, Shuhua Wang

Abstract:

Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority of society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations to enhance oil recovery while concurrently reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, developing data-driven insights for better designs and decisions in various engineering disciplines. However, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover the valuable insights and knowledge related to tight oil reserves development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance the oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance the knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, which leads to better designs, higher oil recovery and economic returns for future wells in unconventional oil reserves.
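The clustering and dimensionality-reduction steps mentioned above can be illustrated with the following sketch; the random well-attribute matrix is a stand-in for the geological, completion, and production database, which is not public.

```python
# A minimal sketch of the PCA + K-means step described above, on synthetic
# well attributes standing in for the real database.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
wells = rng.normal(size=(5000, 12))          # 5000 wells x 12 attributes

X = StandardScaler().fit_transform(wells)
pca = PCA(n_components=3).fit(X)             # emphasize dominant variation
scores = pca.transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
print("wells per cluster:", np.bincount(labels))
```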

Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves

Procedia PDF Downloads 283
74 Pressure-Robust Approximation for the Rotational Fluid Flow Problems

Authors: Medine Demir, Volker John

Abstract:

Fluid equations in a rotating frame of reference have a broad class of important applications in meteorology and oceanography, especially in the large-scale flows considered in the ocean and atmosphere, as well as many physical and industrial applications. The Coriolis and centripetal forces, resulting from the rotation of the earth, play a crucial role in such systems. For such applications it may be required to solve the system in complex three-dimensional geometries. In recent years, the Navier--Stokes equations in a rotating frame have been investigated in a number of papers using the classical inf-sup stable mixed methods, like Taylor--Hood pairs, to contribute to the analysis and to accurate and efficient numerical simulation. Numerical analysis reveals that these classical methods introduce a pressure-dependent contribution in the velocity error bounds that is proportional to some inverse power of the viscosity. Hence, these methods are optimally convergent, but small velocity errors might not be achieved for complicated pressures and small viscosity coefficients. Several approaches have been proposed for improving the pressure-robustness of pairs of finite element spaces. In this contribution, a pressure-robust space discretization of the incompressible Navier--Stokes equations in a rotating frame of reference is considered. The discretization employs divergence-free, $H^1$-conforming mixed finite element methods like Scott--Vogelius pairs. This approach, however, might come with a modification of the meshes, like the use of barycentric-refined grids in the case of Scott--Vogelius pairs. Such a strategy requires the finite element code to have control over the mesh generator, which is not realistic in many engineering applications and might also be in conflict with the solver for the linear system. An error estimate for the velocity is derived that tracks the dependency of the error bound on the coefficients of the problem, in particular on the angular velocity. Numerical examples illustrate the theoretical results. The idea of pressure-robust methods could be applied to other types of flow problems, which will be considered in future studies. As another future research direction, to avoid a modification of the mesh, one may use a very simple parameter-dependent modification of the Scott--Vogelius element, the pressure-wired Stokes element, such that the inf-sup constant is independent of nearly-singular vertices.
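For reference, a standard strong form of the governing equations discussed here is the incompressible Navier--Stokes system in a rotating frame with the Coriolis term; the paper's exact scaling, data, and boundary conditions may differ.

```latex
% Incompressible Navier--Stokes equations in a rotating frame of reference
% (a standard strong form; the paper's precise scaling may differ).
\begin{aligned}
  \partial_t \boldsymbol{u}
  + (\boldsymbol{u}\cdot\nabla)\boldsymbol{u}
  - \nu \Delta \boldsymbol{u}
  + 2\,\boldsymbol{\omega}\times\boldsymbol{u}
  + \nabla p &= \boldsymbol{f}
  && \text{in } \Omega\times(0,T],\\
  \nabla\cdot\boldsymbol{u} &= 0
  && \text{in } \Omega\times(0,T].
\end{aligned}
```

Here $\boldsymbol{u}$ is the velocity, $p$ a modified pressure absorbing the centripetal contribution, $\nu$ the viscosity, and $\boldsymbol{\omega}$ the angular velocity; pressure-robust pairs such as Scott--Vogelius aim to keep the velocity error free of the pressure-dependent, viscosity-scaled term mentioned above.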

Keywords: Navier-Stokes equations in a rotating frame of reference, Coriolis force, pressure-robust error estimate, Scott-Vogelius pairs of finite element spaces

Procedia PDF Downloads 67
73 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
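A condensed sketch of the model-comparison loop is shown below, under the assumption of synthetic task-level features in place of the project dataset; XGBoost is omitted here to keep the example dependency-free.

```python
# A minimal sketch of the cross-validated model comparison described above;
# synthetic features stand in for the WBS, productivity, scope-change and
# cost/duration attributes of the real dataset.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 8))                   # task attributes
cost_impact = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=800)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gboost": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, cost_impact, cv=5, scoring="r2")
    mse = -cross_val_score(model, X, cost_impact, cv=5,
                           scoring="neg_mean_squared_error")
    print(f"{name}: R2 = {r2.mean():.3f}, MSE = {mse.mean():.3f}")
```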

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 43
72 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)

Authors: Eliane G. Tótoli, Hérida Regina N. Salgado

Abstract:

Daptomycin is an important antimicrobial agent used in clinical practice nowadays, since it is very active against some Gram-positive bacteria that are particularly challenging for medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). The importance of environmental preservation has received special attention in recent years. Considering the evident need to protect the natural environment and the introduction of strict quality requirements regarding analytical procedures used in pharmaceutical analysis, industries must seek environmentally friendly alternatives for the analytical methods and other processes that they follow in their routine. In view of these factors, green analytical chemistry is prevalent and encouraged nowadays. In this context, infrared spectroscopy stands out. This is a method that does not use organic solvents and, although it is formally accepted for the identification of individual compounds, it also allows the quantitation of substances. Considering that there are few green analytical methods described in the literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After obtaining the infrared spectrum and with the assistance of the IR Solution software, quantitative analysis was carried out in the spectral region between 1575 and 1700 cm-1, related to a carbonyl band of the daptomycin molecule, and this band had its height analyzed in terms of absorbance. The method was validated according to ICH guidelines regarding linearity, precision (repeatability and intermediate precision), accuracy and robustness. Results and discussion: The method showed to be linear (r = 0.9999), precise (RSD% < 2.0), accurate and robust over a concentration range from 0.2 to 0.6 mg/pellet. In addition, this technique does not use organic solvents, which is one great advantage over the most common analytical methods. This fact contributes to minimizing the generation of organic solvent waste by the industry and thereby reduces the impact of its activities on the environment. Conclusion: The validated method proved to be adequate to quantify daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, which is in line with the global trend.

Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region

Procedia PDF Downloads 381
71 Forecasting Residential Water Consumption in Hamilton, New Zealand

Authors: Farnaz Farhangi

Abstract:

Many people in New Zealand believe that access to water is inexhaustible, and this belief comes from a history of virtually unrestricted access to it. For a region like Hamilton, which is one of New Zealand’s fastest-growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely, and they do not have any idea about how much water they use. Hence, one of the proposed objectives of this research is forecasting water consumption using different methods. Residential water consumption time series exhibit seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with seasonality and has different effects on the forecasting results. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential to obtain better forecasting results, while some other researchers mention that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the results of the combined method (hybrid model) and the individual methods in terms of accuracy and robustness. In order to use ARIMA, the data should be stationary. Also, ANN has successful forecasting applications in terms of forecasting seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the methods. Because water demand is dominated by different seasonal patterns, I combine different methods in order to capture its sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is the reduction of errors by averaging over the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions by the different methods vary. ANN gives more accurate forecasting results than the other methods, and preprocessing is essential when we use seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
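A simplified hybrid sketch is given below; it uses a single weekly seasonality in SARIMAX and an MLP on lagged values, whereas the study uses a double seasonal ARIMA specification, so this is only a rough illustration of the forecast-averaging idea on synthetic daily data.

```python
# A simplified hybrid-forecast sketch: one seasonal ARIMA component plus a
# small neural network on lagged values, with their forecasts averaged.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(1200)
y = 100 + 10 * np.sin(2 * np.pi * t / 7) + 0.01 * t + rng.normal(0, 2, t.size)
train, test = y[:-30], y[-30:]

# Seasonal ARIMA component (period 7 = weekly pattern in daily data).
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 1, 1, 7)).fit(disp=False)
f_arima = sarima.forecast(30)

# ANN component: predict y[t] from the previous 14 days, then roll forward.
lags = 14
X = np.column_stack([train[i:len(train) - lags + i] for i in range(lags)])
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(X, train[lags:])
history = list(train[-lags:])
f_ann = []
for _ in range(30):
    nxt = ann.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    f_ann.append(nxt)
    history.append(nxt)

f_hybrid = (np.asarray(f_arima) + np.asarray(f_ann)) / 2   # simple average
print("MAE hybrid:", np.abs(f_hybrid - test).mean())
```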

Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model

Procedia PDF Downloads 337
70 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables

Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck

Abstract:

The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). The robustness and precision of the method (output) are validated three times. Firstly, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; thus, other definitions are applied to the same data. Secondly, the output is tested on an extension of the database, so it has the same definitions but on other data. Thirdly, the output is checked on an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that the methods remain accurate when underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall materials to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
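The comparison of estimation methods might be sketched as follows; the synthetic volumes, typologies, and wall areas are illustrative assumptions, not the measured Belgian dataset.

```python
# A minimal sketch of the estimation methods compared above (linear regression
# vs. decision tree, scored by 5-fold R^2) on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 4963
volume = rng.uniform(200, 1500, n)                       # m^3 per building
typology = rng.integers(0, 3, n)                         # e.g. detached / semi / terraced
wall_area = 0.45 * volume + 15 * typology + rng.normal(0, 40, n)   # m^2

X_simple = volume.reshape(-1, 1)                         # volume-only model
X_full = np.column_stack([volume, typology])             # volume + typology

for name, X in [("volume only", X_simple), ("volume + typology", X_full)]:
    lin = cross_val_score(LinearRegression(), X, wall_area, cv=5, scoring="r2")
    tree = cross_val_score(DecisionTreeRegressor(max_depth=5, random_state=0),
                           X, wall_area, cv=5, scoring="r2")
    print(f"{name}: linear R2 = {lin.mean():.2f}, tree R2 = {tree.mean():.2f}")
```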

Keywords: buildings as material banks, building stock, estimation method, interior wall area

Procedia PDF Downloads 32
69 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted with the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each single stakeholder’s perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage imply delayed costs associated with them. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals, who take decisions affecting one another. An effective coordination among these decision-makers is critical. Finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of pushing the mission’s concept maturity level. This push is obtained thanks to a guided negotiation space exploration, which involves autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to efficiently and rapidly search for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholders’ needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to changeability. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 136
68 Uses of Fibrinogen Concentrate in the Management of Trauma-Induced Coagulopathy in the Prehospital Environment: A Scoping Review

Authors: Nura Khattab, Fayad Al-Haimus, Teruko Kishibe, Netanel Krugliak, Melissa McGowan, Brodie Nolan

Abstract:

Trauma-induced coagulopathy remains a significant contributor to mortality in severely injured patients. Fibrinogen is essential for early hemostasis and is recognized as the first coagulation factor to fall below critical levels, compromising the coagulation cascade. Early administration of fibrinogen concentrate may be feasible and effective to prevent coagulopathy. We conducted this scoping review to characterize the existing quantity of literature, and to explore the usage of prehospital fibrinogen concentrate products in improving clinical outcomes in trauma patients. Methods: A search strategy was developed in consultation with an information specialist. We searched MEDLINE, Embase, Cochrane, and Scopus from inception to May 6th 2024. English studies evaluating prehospital/military usage of fibrinogen concentrate in trauma patients were included. Studies were assessed by three independent reviewers for meeting inclusion and exclusion criteria. Reference lists of included articles were reviewed to identify additional studies meeting inclusion criteria. Clinical endpoints regarding fibrinogen concentrate were extracted and synthesized. Results: The literature search returned 1301 articles with seven studies meeting the inclusion criteria. Five studies (71%) were conducted in civilian settings and two studies (29%) were conducted in military settings. Of the included studies, three (43%) utilized a randomized control trial. We identified seven outcomes that compared varying concentrations of fibrinogen or fibrinogen concentrate to a placebo group. The outcomes included overall mortality, death from hemorrhage, thromboembolic events, clotting time, maximum clot firmness, clot stability at ER admission, and fibrinogen concentration at ER admission. Apart from thromboembolic events, all other reported outcomes showed statistically significant differences in group comparisons, determined using p values. The four (57%) non-clinical studies underscored the robustness, practicality, and degree of fibrinogen concentrate utilization in military environments and retrieval services. Conclusion: Preliminary research suggests that prehospital fibrinogen concentrate administration in traumatic bleeding patients is both feasible and effective, improving mortality and clotting parameters. While implementing a time-saving and proactive approach with fibrinogen holds potential for enhancing trauma care, the current evidence is limited. Further studies in this novel field are warranted.

Keywords: fibrinogen concentrate, prehospital, military, trauma, trauma-induced coagulopathy

Procedia PDF Downloads 25
67 Corporate Performance and Balance Sheet Indicators: Evidence from Indian Manufacturing Companies

Authors: Hussain Bohra, Pradyuman Sharma

Abstract:

This study highlights the significance of balance sheet indicators for corporate performance in the case of Indian manufacturing companies. Balance sheet indicators show the actual financial health of a company; they help external investors to choose the right company for their investment and also help external financing agencies to extend finance to manufacturing companies more easily. The period of study is 2000 to 2014 for 813 manufacturing companies for which continuous data are available throughout the study period. The data are collected from the PROWESS database maintained by the Centre for Monitoring Indian Economy Pvt. Ltd. Panel data methods like fixed effects and random effects are used for the analysis. The Likelihood Ratio test, Lagrange Multiplier test and Hausman test results prove the suitability of the fixed effect model for the estimation. Return on assets (ROA) is used as the proxy to measure corporate performance. ROA is the best proxy to measure corporate performance, as it has already been used by most of the authors who have worked on corporate performance. ROA shows the return on the long-term investment projects of firms. Different ratios like the current ratio, debt-equity ratio, receivables turnover ratio and solvency ratio have been used as proxies for the balance sheet indicators. Other firm-specific variables like firm size and sales are used as the control variables in the model. From the empirical analysis, it was found that all selected financial ratios have a significant and positive impact on corporate performance. Firm sales and firm size are also found to have a significant and positive impact on corporate performance. To check the robustness of the results, the sample was divided on the basis of the different ratios: firms having a high versus a low debt-equity ratio, firms having a high versus a low current ratio, firms having high versus low receivables turnover, and firms having a high versus a low solvency ratio. We find that the results are robust for all types of companies having different levels of the selected balance sheet indicator ratios. The results for the other variables are also in the same line as those for the whole sample. These findings confirm that balance sheet indicators play a significant role in corporate performance in India. The findings of this study have implications for corporate managers to focus on the different ratios to maintain the minimum expected level of performance. Apart from that, they should also maintain adequate sales and total assets to improve corporate performance.

Keywords: balance sheet, corporate performance, current ratio, panel data method

Procedia PDF Downloads 265
66 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model

Authors: T. Thein, S. Kalyar Myo

Abstract:

Humans use visual information for understanding speech contents in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge with most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding the underlying speech by processing the movement of the lips. Therefore, lip reading systems are one of the supportive technologies for hearing-impaired or elderly people, and this is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research develops a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah) ) and two-syllable consonants ( က(Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe) ၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi) ). In the proposed system, there are three subsystems: the first one is the lip localization system, which localizes the lips in the digital inputs; the next one is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used for finding the class parameters and class number in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, which is useful for visual speech of the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons use it as a language learning application. This system can also be useful for normal-hearing persons in noisy environments or conditions where they can find out what was said by other people without hearing their voice.
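A rough sketch of the planned feature-extraction and classification chain (2D-DCT features, LDA projection, SVM classification) is given below; random images stand in for the ACM-segmented lip regions, and the image size and DCT block size are assumptions for illustration.

```python
# A minimal sketch of the 2D-DCT + LDA + SVM chain described above; random
# images stand in for the segmented lip regions produced by the ACM step.
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_classes, n_per_class = 22, 40                  # 22 consonants in the study
images = rng.random((n_classes * n_per_class, 32, 32))
labels = np.repeat(np.arange(n_classes), n_per_class)

def dct_features(img, keep=8):
    # Keep the low-frequency top-left block of the 2D-DCT as the feature vector.
    return dctn(img, norm="ortho")[:keep, :keep].ravel()

X = np.array([dct_features(im) for im in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = make_pipeline(LinearDiscriminantAnalysis(n_components=n_classes - 1),
                    SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print("recognition accuracy:", clf.score(X_test, y_test))
```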

Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)

Procedia PDF Downloads 286
65 Drug Delivery Cationic Nano-Containers Based on Pseudo-Proteins

Authors: Sophio Kobauri, Temur Kantaria, Nina Kulikova, David Tugushi, Ramaz Katsarava

Abstract:

The development of effective drug delivery vehicles remains topical, since targeted drug delivery is one of the most important challenges of modern nanomedicine. The last decade has witnessed enormous research focused on synthetic cationic polymers (CPs), in particular as non-viral gene delivery systems, owing to their flexible properties, facile synthesis, robustness, non-oncogenicity and proven gene delivery efficiency. However, toxicity remains an obstacle to their application in pharmacotherapy. To overcome this problem, the creation of new cationic compounds, including polymeric nano-sized particles, nano-containers (NCs), loaded with different pharmaceuticals and biologicals, is still relevant. In this regard, a variety of NC-based drug delivery systems have been developed. We have found that amino acid-based biodegradable polymers called pseudo-proteins (PPs), which can be cleared from the body after fulfilling their function, are highly suitable for designing pharmaceutical NCs. Among them, some of the most promising are NCs made of biodegradable cationic PPs (CPPs). For preparing new cationic NCs (CNCs), we used CPPs composed of the positively charged amino acid L-arginine (R). The CNCs were fabricated by two approaches using: (1) R-based homo-CPPs; (2) blends of R-based CPPs with regular (neutral) PPs. In the first approach, NCs were prepared from CPPs 8R3 (composed of R, sebacic acid and 1,3-propanediol) and 8R6 (composed of R, sebacic acid and 1,6-hexanediol). The NCs prepared from these CPPs were 72-101 nm in size with a zeta potential within +30 ÷ +35 mV at a concentration of 6 mg/mL. In the second approach, CPP 8R6 was blended in the organic phase with the neutral PP 8L6 (composed of leucine, sebacic acid and 1,6-hexanediol). The NCs prepared from the blends were 130-140 nm in size with a zeta potential within +20 ÷ +28 mV, depending on the 8R6/8L6 ratio. Stability studies of the fabricated NCs showed no substantial change in particle size and distribution and no formation of large particles after three months of storage. An in vitro biocompatibility study of the obtained NCs with four different stable cell lines, A549 (human), U-937 (human), RAW264.7 (murine) and Hepa 1-6 (murine), showed that both types of cationic NCs are biocompatible. These data allow us to conclude that the obtained CNCs are promising as biodegradable drug delivery vehicles. This work was supported by the joint grant from the Science and Technology Center in Ukraine and Shota Rustaveli National Science Foundation of Georgia #6298 'New biodegradable cationic polymers composed of arginine and spermine-versatile biomaterials for various biomedical applications'.

Keywords: biodegradable polymers, cationic pseudo-proteins, nano-containers, drug delivery vehicles

Procedia PDF Downloads 156
64 Evotrader: Bitcoin Trading Using Evolutionary Algorithms on Technical Analysis and Social Sentiment Data

Authors: Martin Pellon Consunji

Abstract:

Due to the rise in popularity of Bitcoin and other crypto assets as a store of wealth and a speculative investment, there is ever-growing demand for automated trading tools, such as bots, to gain an advantage over the market. Traditionally, trading in the stock market was done by professionals with years of training who understood patterns and exploited market opportunities in order to gain a profit. Nowadays, however, a larger portion of market participants are at minimum aided by market-data processing bots, which can generally generate more stable signals than the average human trader. The rise in trading bot usage can be attributed to the inherent advantages that bots have over humans in processing large amounts of data, their lack of emotions such as fear or greed, and their ability to predict market prices using past data and artificial intelligence; hence, a growing number of approaches have been brought forward to tackle this task. However, these approaches share a general limitation: limited historical data does not always determine the future, and many market participants are still emotion-driven human traders. Moreover, developing markets such as the cryptocurrency space have even less historical data to interpret than most well-established markets. Because of this, some human traders have gone back to the tried-and-tested traditional technical analysis tools for exploiting market patterns and simplifying the broader spectrum of data involved in making market predictions. This paper proposes a method that uses neuroevolution techniques on both sentiment data and the more traditionally human-consumed technical analysis data in order to produce a more accurate forecast of future market behavior and to account for the way both automated bots and human traders affect the market prices of Bitcoin and other cryptocurrencies. The approach uses evolutionary algorithms to automatically develop increasingly improved populations of bots which, by using the latest inflows of market analysis and sentiment data, evolve to efficiently predict future market price movements. The effectiveness of the approach is validated by testing the system in a simulated historical trading scenario and a live Bitcoin market trading scenario, and by testing its robustness in other cryptocurrency and stock market scenarios. Experimental results over a 30-day period show that this method outperformed the buy-and-hold strategy by over 260% in terms of net profits, even when taking standard trading fees into consideration.
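
The toy sketch below illustrates the core neuroevolution loop in its most reduced form: a population of small feedforward "bots" is scored on simulated returns and the fittest are mutated into the next generation. All data, network sizes and hyperparameters are invented for illustration and do not reflect the Evotrader system.

```python
# Minimal, self-contained neuroevolution toy (pure NumPy, synthetic data), not the
# authors' Evotrader implementation: evolve tiny networks whose fitness is simulated
# trading profit on technical-analysis and sentiment features.
import numpy as np

rng = np.random.default_rng(42)
n_features, hidden, pop_size, generations = 6, 8, 40, 30

# Synthetic feature matrix (e.g., indicators + sentiment scores) and price returns.
X = rng.normal(size=(500, n_features))
returns = 0.3 * X[:, 0] - 0.2 * X[:, 3] + rng.normal(scale=1.0, size=500)

def unpack(genome):
    w1 = genome[: n_features * hidden].reshape(n_features, hidden)
    w2 = genome[n_features * hidden:].reshape(hidden, 1)
    return w1, w2

def fitness(genome):
    w1, w2 = unpack(genome)
    signal = np.tanh(np.tanh(X @ w1) @ w2).ravel()  # position in [-1, 1]
    return float(np.sum(signal * returns))          # toy cumulative profit

genome_len = n_features * hidden + hidden
population = rng.normal(scale=0.5, size=(pop_size, genome_len))

for gen in range(generations):
    scores = np.array([fitness(g) for g in population])
    elite = population[np.argsort(scores)[-pop_size // 4:]]           # keep best 25%
    children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
    children = children + rng.normal(scale=0.1, size=children.shape)  # mutation
    population = np.vstack([elite, children])

print("best toy fitness:", max(fitness(g) for g in population))
```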

Keywords: neuro-evolution, Bitcoin, trading bots, artificial neural networks, technical analysis, evolutionary algorithms

Procedia PDF Downloads 123
63 Robust Inference with a Skew T Distribution

Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici

Abstract:

There is a growing body of evidence that non-normal data is more prevalent in nature than the normal one. Examples can be quoted from, but not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study a skew t distribution that can be used to model a data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by solving iteratively the likelihood equations that are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact the modified maximum likelihood estimates are equivalent to maximum likelihood estimates, asymptotically. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical researches have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random error having non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and are explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
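
As a small illustration of why heavy-tailed, skewed errors matter for regression inference, the sketch below runs a Monte Carlo comparing ordinary least squares with a Huber M-estimator on errors drawn from an Azzalini-type skew-t construction. The Huber estimator is only a stand-in for robustness here; it is not the closed-form modified maximum likelihood estimator developed in the study.

```python
# Illustrative stand-in, not the authors' modified maximum likelihood method: compare
# least squares with a robust Huber M-estimator under skewed, fat-tailed regression errors.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(7)
n, reps, true_beta = 100, 500, 1.5

def skew_t_errors(size, skew=5.0, df=4):
    # Azzalini-style construction: skew-normal numerator over a chi-square denominator.
    z = stats.skewnorm.rvs(a=skew, size=size, random_state=rng)
    w = rng.chisquare(df, size=size) / df
    return z / np.sqrt(w)

ols_slopes, huber_slopes = [], []
for _ in range(reps):
    x = rng.normal(size=n)
    y = true_beta * x + skew_t_errors(n)
    X = sm.add_constant(x)
    ols_slopes.append(sm.OLS(y, X).fit().params[1])
    huber_slopes.append(sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params[1])

print("OLS slope:   mean %.3f, sd %.3f" % (np.mean(ols_slopes), np.std(ols_slopes)))
print("Huber slope: mean %.3f, sd %.3f" % (np.mean(huber_slopes), np.std(huber_slopes)))
```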

Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness

Procedia PDF Downloads 397
62 Role of Institutional Quality as a Key Determinant of FDI Flows in Developing Asian Economies

Authors: Bikash Ranjan Mishra, Lopamudra D. Satpathy

Abstract:

In the wake of the phenomenal surge in international business over the last decade or more, both developed and developing economies around the world are competing intensely to attract more FDI flows. While the developed countries have marched ahead in the race, the developing countries, especially the Asian economies, have followed at a rapid pace. While most previous studies have analysed the role of institutional quality in promoting FDI flows to developing countries, very few have taken an integrated approach that examines the combined impact of institutional quality, globalization patterns and domestic financial development on FDI flows. In this context, the paper contributes to the literature in two important ways. Firstly, two composite indices of institutional quality and domestic financial development for the Asian countries are constructed, in contrast to earlier studies that resort to a single variable to indicate institutional quality and domestic financial development. Secondly, the impact of these variables on FDI flows through their interaction with geographical region is investigated. The study uses panel data covering the period 1996 to 2012 for twenty Asian developing countries, selected with an emphasis on the quality of institutions, from the geographical regions of eastern, south-eastern, southern and western Asia. Control of corruption, rule of law, regulatory quality, government effectiveness, political stability, and voice and accountability are used as indicators of institutional quality. Besides these, the study takes into account domestic credit to the public and private sectors and in stock markets as domestic financial indicators. In specifying the model, a factor analysis is first performed to reduce the large set of determinants, which are highly correlated with each other, to a manageable size. Afterwards, a reduced version of the model is estimated with the extracted factors, in the form of indices, as independent variables along with a set of control variables. It is found that the institutional quality index and the index of globalization exert a significant effect on FDI inflows of the host countries; in contrast, the domestic financial index does not seem to play a significant role. Finally, some robustness tests are performed to make sure that the results are not sensitive to temporal and spatial unobserved heterogeneity. From a policy prescription point of view, one general inference can be drawn: the governments of these developing countries should strengthen their domestic institutions, both financial and non-financial. In addition, welfare policies should also target rapid globalization. If the financial and non-financial institutions of these developing countries become sound and more globalized in the economic, social and political domains, they can attract more FDI inflows, which will subsequently result in the advancement of these economies.
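
The sketch below illustrates the index-construction step described above: a single institutional-quality factor is extracted from six correlated governance indicators with factor analysis, and the resulting score could then enter the FDI regression as one composite regressor. The country-year data, loadings and indicator values are simulated assumptions.

```python
# Hypothetical sketch of composite-index construction via factor analysis on simulated
# governance indicators for a 20-country, 1996-2012 panel.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_obs = 20 * 17  # 20 countries x 17 years (1996-2012)

# Simulate six correlated governance-style indicators driven by one latent factor.
latent = rng.normal(size=n_obs)
indicators = pd.DataFrame(
    {name: 0.8 * latent + rng.normal(scale=0.5, size=n_obs)
     for name in ["control_of_corruption", "rule_of_law", "regulatory_quality",
                  "government_effectiveness", "political_stability",
                  "voice_accountability"]}
)

scaled = StandardScaler().fit_transform(indicators)
fa = FactorAnalysis(n_components=1, random_state=0)
institutional_quality_index = fa.fit_transform(scaled).ravel()

print("factor loadings:\n",
      pd.Series(fa.components_[0], index=indicators.columns).round(2))
print("index summary: mean %.2f, sd %.2f" %
      (institutional_quality_index.mean(), institutional_quality_index.std()))
```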

Keywords: Asian developing economies, FDI, institutional quality, panel data

Procedia PDF Downloads 314
61 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (e.g., OzFlux, AmeriFlux, China Flux) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance for validating process modelling analyses, field surveys and remote sensing assessments, there are serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research developed an ensemble model to fill the data gaps in CO₂ flux records, avoiding the limitations of using a single algorithm and therefore providing lower error and reducing the uncertainties associated with the gap-filling process. In this study, data from five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when each of these methods was used individually, with overall RMSEs of 2.64, 2.91 and 3.54 g C m⁻² yr⁻¹, respectively (3.54 for the best FFNN). The most significant improvement was in the estimation of extreme diurnal values (during midday and sunrise), as well as nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity than Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Moreover, the performance difference between the ensemble model and its individual components was more pronounced during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) than the cold season (Apr, May, Jun, Jul, Aug, and Sep), due to the higher amount of plant photosynthesis, which leads to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Using ensemble machine learning models is therefore potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
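
A condensed sketch of the two-layer idea follows: several feedforward networks produce first-layer predictions of the CO₂ flux, and an XGBoost model trained on those (out-of-fold) predictions gives the final gap-filled estimate. The drivers, network sizes and hyperparameters are placeholders rather than the authors' configuration, and the sketch assumes the xgboost package is installed.

```python
# Two-layer stacking sketch: five FFNNs feed an XGBoost combiner, on synthetic drivers
# standing in for the real OzFlux inputs. Out-of-fold first-layer predictions avoid leakage.
import numpy as np
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPRegressor
from xgboost import XGBRegressor  # assumes the xgboost package is available

rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 6))            # e.g., radiation, temperature, VPD, soil moisture
y = X[:, 0] * 2 - np.abs(X[:, 1]) + 0.3 * rng.normal(size=2000)   # toy CO2 flux

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

first_layer = [
    MLPRegressor(hidden_layer_sizes=h, max_iter=500, random_state=0)
    for h in [(32,), (64,), (32, 16), (64, 32), (128,)]
]

# First layer: out-of-fold FFNN predictions become features for the second layer.
train_meta = np.column_stack(
    [cross_val_predict(m, X_train, y_train, cv=3) for m in first_layer]
)
test_meta = np.column_stack(
    [m.fit(X_train, y_train).predict(X_test) for m in first_layer]
)

# Second layer: XGBoost combines the five FFNN estimates into the final prediction.
stacker = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
stacker.fit(train_meta, y_train)
rmse = np.sqrt(np.mean((stacker.predict(test_meta) - y_test) ** 2))
print(f"toy ensemble RMSE: {rmse:.3f}")
```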

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 140
60 From By-product To Brilliance: Transforming Adobe Brick Construction Using Meat Industry Waste-derived Glycoproteins

Authors: Amal Balila, Maria Vahdati

Abstract:

Earth is a green building material with very low embodied energy and almost zero greenhouse gas emissions. However, it lacks strength and durability in its natural state. By responsibly sourcing stabilisers, it's possible to enhance its strength. This research draws inspiration from the robustness of termite mounds, where termites incorporate glycoproteins from their saliva during construction. Biomimicry explores the potential of these termite stabilisers in producing bio-inspired adobe bricks. The meat industry generates significant waste during slaughter, including blood, skin, bones, tendons, gastrointestinal contents, and internal organs. While abundant, many meat by-products raise concerns regarding human consumption, religious orders, cultural and ethical beliefs, and also heavily contribute to environmental pollution. Extracting and utilising proteins from this waste is vital for reducing pollution and increasing profitability. Exploring the untapped potential of meat industry waste, this research investigates how glycoproteins could revolutionize adobe brick construction. Bovine serum albumin (BSA) from cows' blood and mucin from porcine stomachs were the chosen glycoproteins used as stabilisers for adobe brick production. Despite their wide usage across various fields, they have very limited utilisation in food processing. Thus, both were identified as potential stabilisers for adobe brick production in this study. Two soil types were utilised to prepare adobe bricks for testing, comparing controlled unstabilised bricks with glycoprotein-stabilised ones. All bricks underwent testing for unconfined compressive strength and erosion resistance. The primary finding of this study is the efficacy of BSA, a glycoprotein derived from cows' blood and a by-product of the beef industry, as an earth construction stabiliser. Adding 0.5% by weight of BSA resulted in a 17% and 41% increase in the unconfined compressive strength for British and Sudanese adobe bricks, respectively. Further, adding 5% by weight of BSA led to a 202% and 97% increase in the unconfined compressive strength for British and Sudanese adobe bricks, respectively. Moreover, using 0.1%, 0.2%, and 0.5% by weight of BSA resulted in erosion rate reductions of 30%, 48%, and 70% for British adobe bricks, respectively, with a 97% reduction observed for Sudanese adobe bricks at 0.5% by weight of BSA. However, mucin from the porcine stomach did not significantly improve the unconfined compressive strength of adobe bricks. Nevertheless, employing 0.1% and 0.2% by weight of mucin resulted in erosion rate reductions of 28% and 55% for British adobe bricks, respectively. These findings underscore BSA's efficiency as an earth construction stabiliser for wall construction and mucin's efficacy for wall render, showcasing their potential for sustainable and durable building practices.

Keywords: biomimicry, earth construction, industrial waste management, sustainable building materials, termite mounds

Procedia PDF Downloads 51
59 Stimulation of Nerve Tissue Differentiation and Development Using Scaffold-Based Cell Culture in Bioreactors

Authors: Simon Grossemy, Peggy P. Y. Chan, Pauline M. Doran

Abstract:

Nerve tissue engineering is the main field of research aimed at finding an alternative to autografts as a treatment for nerve injuries. Scaffolds are used as a support to enhance nerve regeneration. In order to successfully design novel scaffolds and in vitro cell culture systems, a deep understanding of the factors affecting nerve regeneration processes is needed. Physical and biological parameters associated with the culture environment have been identified as potentially influential in nerve cell differentiation, including electrical stimulation, exposure to extracellular-matrix (ECM) proteins, dynamic medium conditions and co-culture with glial cells. The mechanisms involved in driving the cell to differentiation in the presence of these factors are poorly understood; the complexity of each of them raises the possibility that they may strongly influence each other. Some questions that arise in investigating nerve regeneration include: What are the best protein coatings to promote neural cell attachment? Is the scaffold design suitable for providing all the required factors combined? What is the influence of dynamic stimulation on cell viability and differentiation? In order to study these effects, scaffolds adaptable to bioreactor culture conditions were designed to allow electrical stimulation of cells exposed to ECM proteins, all within a dynamic medium environment. Gold coatings were used to make the surface of viscose rayon microfiber scaffolds (VRMS) conductive, and poly-L-lysine (PLL) and laminin (LN) surface coatings were used to mimic the ECM environment and allow the attachment of rat PC12 neural cells. The robustness of the coatings was analyzed by surface resistivity measurements, scanning electron microscope (SEM) observation and immunocytochemistry. Cell attachment to protein coatings of PLL, LN and PLL+LN was studied using DNA quantification with Hoechst. The double coating of PLL+LN was selected based on high levels of PC12 cell attachment and the reported advantages of laminin for neural differentiation. The underlying gold coatings were shown to be biocompatible using cell proliferation and live/dead staining assays. Coatings exhibiting stable properties over time under dynamic fluid conditions were developed; indeed, cell attachment and the conductive power of the scaffolds were maintained over 2 weeks of bioreactor operation. These scaffolds are promising research tools for understanding complex neural cell behavior. They have been used to investigate major factors in the physical culture environment that affect nerve cell viability and differentiation, including electrical stimulation, bioreactor hydrodynamic conditions, and combinations of these parameters. The cell and tissue differentiation response was evaluated using DNA quantification, immunocytochemistry, RT-qPCR and functional analyses.

Keywords: bioreactor, electrical stimulation, nerve differentiation, PC12 cells, scaffold

Procedia PDF Downloads 245
58 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.
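
The sketch below shows, on simulated data, the kind of clustering step described above: K-means applied to standardized changes in the six biomarkers to separate a "responder" cluster from a "minimal change" cluster. The values and cluster structure are invented for illustration, not drawn from the meta-analytic dataset.

```python
# Hypothetical sketch: K-means on standardized changes in HbA1C, FBS, LDL, HDL, SBP and
# total vitamin D, separating responders from non-responders. Data are simulated.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n = 400  # pooled patients / study arms (placeholder)

responders = rng.integers(0, 2, n).astype(bool)
changes = pd.DataFrame({
    "d_hba1c": np.where(responders, rng.normal(-0.8, 0.3, n), rng.normal(-0.1, 0.3, n)),
    "d_fbs":   np.where(responders, rng.normal(-15, 5, n),    rng.normal(-2, 5, n)),
    "d_ldl":   np.where(responders, rng.normal(-10, 4, n),    rng.normal(-1, 4, n)),
    "d_hdl":   np.where(responders, rng.normal(3, 2, n),      rng.normal(0.5, 2, n)),
    "d_sbp":   np.where(responders, rng.normal(-5, 3, n),     rng.normal(-1, 3, n)),
    "d_vitd":  np.where(responders, rng.normal(20, 6, n),     rng.normal(8, 6, n)),
})

X = StandardScaler().fit_transform(changes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(changes.groupby(labels).mean().round(2))  # cluster-wise mean biomarker change
```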

Keywords: HbA1C, T2DM, SBP, FBS

Procedia PDF Downloads 14
57 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries

Authors: Eva Masson, Andrea Kübler

Abstract:

Anxiety disorders (AD) are the most prevalent psychological disorders; however, only a minority of affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) without any objective biomarker. Approach-avoidance conflict tasks are one common way to simulate such disorders in a laboratory setting, with most paradigms focusing on the relationship between behavior and neurophysiology. These tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: when overt conflict behavior is observed, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS; however, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, the HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational conflict desktop task consisted of three blocks of repeated trials. Each block was designed to elicit a slightly different behavioral pattern, to increase the chances of eliciting conflict; the patterns were nevertheless similar enough to allow comparison of the number of trials categorized as 'overt conflict' between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but always for under 10% of the trials, on average, in each block. However, changing the order of the paradigms successfully introduced a 'reset' of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha-frequency power in the left frontal electrodes at around 200 ms post-cueing compared to the right frontal electrodes (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the discussion of frontal asymmetry and its relationship with the BIS. Furthermore, because the present task focuses on a very particular type of motivational approach-avoidance conflict, it opens the door to further variations of the paradigm that introduce other kinds of conflict involved in AD. Even though its application as a potential biomarker appears difficult, given the individual reliability of both the task and the peak frequency in the alpha range, we hope to open the discussion of task robustness for future neuromodulation and neurofeedback applications.
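
As a minimal illustration of the frontal alpha-asymmetry measure discussed above, the sketch below computes the index ln(right alpha power) - ln(left alpha power) from Welch power spectra of two simulated frontal channels; the channel names, epoch length and signal content are assumptions, not the study's HD-EEG pipeline.

```python
# Illustrative sketch with simulated signals: frontal alpha asymmetry from Welch PSDs.
import numpy as np
from scipy.signal import welch

fs = 500                         # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)    # a 2-s post-cue epoch
rng = np.random.default_rng(21)

# Simulated left (e.g., F3-like) and right (e.g., F4-like) frontal channels with
# different amounts of 10 Hz alpha activity on top of broadband noise.
left_front = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)
right_front = 1.2 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)

def alpha_power(signal, fs, band=(8.0, 13.0)):
    freqs, psd = welch(signal, fs=fs, nperseg=fs)         # 1-s Welch segments
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])        # integrated alpha-band power

asymmetry = np.log(alpha_power(right_front, fs)) - np.log(alpha_power(left_front, fs))
print(f"frontal alpha asymmetry (ln R - ln L): {asymmetry:.3f}")
```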

Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG

Procedia PDF Downloads 40
56 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
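
A much-simplified sketch of the multi-task output stage follows: three per-sample heads (detection, P pick, S pick) trained with a weighted sum of binary cross-entropy losses, as described above. The CNN/BiLSTM/Mamba encoder is replaced by a tiny 1-D convolutional stand-in, and all shapes and weights are illustrative assumptions rather than the EQMamba architecture.

```python
# Simplified multi-task picker in PyTorch: a toy encoder plus three decoder heads,
# trained with a weighted combination of binary cross-entropy losses. Not EQMamba itself.
import torch
import torch.nn as nn

class TinyPicker(nn.Module):
    def __init__(self, in_channels=3, hidden=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
        )
        # One 1x1-conv decoder head per task, each producing a per-sample probability.
        self.detect_head = nn.Conv1d(hidden, 1, kernel_size=1)
        self.p_head = nn.Conv1d(hidden, 1, kernel_size=1)
        self.s_head = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, x):                      # x: (batch, 3 components, time)
        z = self.encoder(x)
        return (torch.sigmoid(self.detect_head(z)),
                torch.sigmoid(self.p_head(z)),
                torch.sigmoid(self.s_head(z)))

model = TinyPicker()
bce = nn.BCELoss()
weights = {"detection": 0.4, "p": 0.3, "s": 0.3}       # illustrative task weights

waveform = torch.randn(8, 3, 3000)                     # batch of 3-component waveforms
targets = [torch.zeros(8, 1, 3000) for _ in range(3)]  # dummy detection/P/S labels

det, p, s = model(waveform)
loss = (weights["detection"] * bce(det, targets[0])
        + weights["p"] * bce(p, targets[1])
        + weights["s"] * bce(s, targets[2]))
loss.backward()
print("weighted multi-task BCE:", float(loss))
```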

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 55