Search results for: reduced order macro models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22904

21314 Image Captioning with Vision-Language Models

Authors: Promise Ekpo Osaine, Daniel Melesse

Abstract:

Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding, especially in settings where a model must understand the content shown in an image and generate semantically and grammatically correct descriptions. In this project, we followed a standard deep learning approach to image captioning, adopting an injection-style encoder-decoder architecture in which the encoder extracts image features and the decoder generates a sequence of words describing the image content. We investigated five image encoders: ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. For caption generation, we used a long short-term memory (LSTM) network. The CLIP-LSTM model demonstrated superior performance compared to the CNN-based encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM.
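
For readers unfamiliar with the metric, sentence-level BLEU can be sketched in a few lines of pure Python (single reference, uniform n-gram weights, brevity penalty; the token lists below are invented examples, not from the paper's test set):

```python
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty (single reference, uniform weights)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(clipped / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

BLEU-1 as reported above corresponds to `max_n=1`, BLEU-4 to `max_n=4`.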

Keywords: multi-modal AI systems, image captioning, encoder, decoder, BLEU score

Procedia PDF Downloads 82
21313 BIM-Based Tool for Sustainability Assessment and Certification Documents Provision

Authors: Taki Eddine Seghier, Mohd Hamdan Ahmad, Yaik-Wah Lim, Samuel Opeyemi Williams

Abstract:

Assessing building sustainability against a specific green benchmark and preparing the documents required to receive a green building certification are both major challenges for the green building design team. However, this labor-intensive and time-consuming process can take advantage of available Building Information Modeling (BIM) features such as material take-off and scheduling. Furthermore, the workflow can be automated to track potentially achievable credit points and provide rating feedback for several design options, by using integrated Visual Programming (VP) to handle the parameters stored within the BIM model. Hence, this study proposes a BIM-based tool that uses the Green Building Index (GBI) rating system requirements as its input case to evaluate building sustainability in the design stage of the building project life cycle. The tool comprises two key models: first, a model for data extraction, calculation, and classification of achievable credit points in a green template; second, a model for generating the documents required for green building certification. The tool was validated on a BIM model of a residential building and serves as proof of concept that building sustainability assessment for GBI certification can be automatically evaluated and documented through BIM.
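
The credit-tracking step can be pictured as a simple threshold tally over quantities extracted from the BIM model; the criterion names, thresholds, and point values below are hypothetical placeholders, not actual GBI requirements:

```python
def tally_credits(extracted, criteria):
    """Count achievable credit points: a criterion's points are credited when
    the value extracted from the BIM model meets its threshold.
    (Hypothetical scoring rule and numbers, not actual GBI requirements.)"""
    achieved = {name: pts for name, (threshold, pts) in criteria.items()
                if extracted.get(name, 0.0) >= threshold}
    return achieved, sum(achieved.values())
```

A VP script would populate `extracted` from material take-off and scheduling data, then feed `achieved` into the green template.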

Keywords: green building rating system, GBRS, building information modeling, BIM, visual programming, VP, sustainability assessment

Procedia PDF Downloads 330
21312 Empirical Analyses of Students’ Self-Concepts and Their Mathematics Achievements

Authors: Adetunji Abiola Olaoye

Abstract:

The study examined students’ self-concepts and mathematics achievement vis-à-vis three existing theoretical models: the Humanist self-concept (M1), the Contemporary self-concept (M2), and the Skills development self-concept (M3). As a qualitative research study, it comprised one research question, which was transformed into a hypothesis vis-à-vis the existing theoretical models. The sample comprised twelve public secondary schools, from which twenty-five mathematics teachers, twelve counselling officers, and one thousand Upper Basic II students were selected on an intact-class basis, as the school administrations and system did not allow randomization. Two instruments, a 10-item Achievement Test in Mathematics (r1 = 0.81) and a 10-item Student Self-Concept Questionnaire (r2 = 0.75), were adapted, validated, and used for the study. Data were analysed through descriptive statistics, one-way ANOVA, t-tests, and correlation statistics at the 5% level of significance. Findings revealed means and standard deviations of pre-achievement test scores of (51.322, 16.10), (54.461, 17.85), and (56.451, 18.22) for the Humanist, Contemporary, and Skill Development self-concepts, respectively. The study also showed a significant difference in the academic performance of students across the existing models (F-cal > F-value, df = (2, 997); p < 0.05). Furthermore, students’ achievement in mathematics and self-concept questionnaire scores had means and standard deviations of (57.4, 11.35) and (81.6, 16.49), respectively. The results confirmed an affirmative relationship with the Contemporary self-concept model (M2), which holds that subject-specific self-concept is the primary determinant of higher academic achievement in that subject, as there is a statistical correlation between students’ self-concept and mathematics achievement (-Z_cal < -Z_val, df = 998; p < 0.05).
The implications of the study are discussed, and recommendations and suggestions for further studies are proffered.
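
The one-way ANOVA used in the analysis reduces to comparing between-group and within-group variance; a minimal sketch of the F statistic (the group values are toy numbers, not the study's data):

```python
def one_way_anova_f(groups):
    """F statistic: between-group mean square over within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2 for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

For the study's three models, `groups` would hold the three score samples, giving df = (2, 997) as reported.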

Keywords: contemporary, humanists, self-concepts, skill development

Procedia PDF Downloads 244
21311 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area

Authors: Michelle Eliane Hernández-García, Angélica Lozano

Abstract:

Traffic accident studies usually consider the relation between factors such as vehicle type, vehicle operation, and road infrastructure. Traffic accidents can be explained by different factors of greater or lesser relevance. Two zones are studied: a mixed industrial zone and its extended zone. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks travel mainly on the roads where industries are located; four sensors provide information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land uses, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks, and 39 traffic and speed sensors are located on its main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is necessary to identify the elements of the traffic mix that are linked to traffic accidents. Models that attempt to explain which factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector for the natural logarithm of the mean number of accidents per period; the estimate is obtained by standard maximum-likelihood procedures. For estimating the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors.
In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1 through C6, standing respectively for cars; microbuses and vans; buses; unitary trucks (2 to 6 axles); articulated trucks (3 to 6 axles); and bi-articulated trucks (5 to 9 axles). In addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important role of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a stronger relation between freight trucks and traffic accidents in the first, industrial zone than in the expanded zone.
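
A Poisson regression with a log link, as described, can be sketched for a single covariate with a plain Newton-Raphson fit (the counts below are invented, not the study's accident data; the real model has six class vectors plus speed):

```python
import math

def fit_poisson(x, y, iters=25):
    """Newton-Raphson MLE of log E[y] = b0 + b1*x under a Poisson likelihood."""
    b0 = math.log(sum(y) / len(y))  # start at the intercept-only MLE
    b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)    # conditional mean
            g0 += yi - mu                  # score w.r.t. b0
            g1 += (yi - mu) * xi           # score w.r.t. b1
            h00 += mu                      # observed-information entries
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step: H^-1 g
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1
```

A positive fitted `b1` is read as in the abstract: more of that vehicle class raises the expected accident count.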

Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks

Procedia PDF Downloads 132
21310 Optimized Text Summarization Model on Mobile Screens for Sight-Interpreters: An Empirical Study

Authors: Jianhua Wang

Abstract:

To obtain key information quickly from long texts on the small screens of mobile devices, sight-interpreters need an optimized summarization model for fast information retrieval. Four summarization models based on previous studies were examined: title + key words (TKW), title + topic sentences (TTS), key words + topic sentences (KWTS), and title + key words + topic sentences (TKWTS). Psychological experiments were conducted on the four models for three different genres of interpreting texts to establish the optimized summarization model for sight-interpreters. This empirical study shows that the optimized summarization model for sight-interpreters to quickly grasp the key information of the texts they interpret is title + key words (TKW) for cultural texts, title + key words + topic sentences (TKWTS) for economic texts, and key words + topic sentences (KWTS) for political texts.
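
A naive frequency-based rendering of the TKW model (title plus top key words) could look as follows; the stopword list and scoring are simplifications assumed here, not the paper's method:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are",
             "for", "on", "that", "this", "with", "it", "as", "be"}

def tkw_summary(title, text, k=5):
    """Title + key words (TKW): the title plus the k most frequent
    non-stopword terms of the body text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return title, [w for w, _ in counts.most_common(k)]
```

The other models (TTS, KWTS, TKWTS) would swap in or add topic sentences selected by a sentence-scoring step.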

Keywords: different genres, mobile screens, optimized summarization models, sight-interpreters

Procedia PDF Downloads 319
21309 A New Converter Topology for Wind Energy Conversion System

Authors: Mahmoud Khamaira, Ahmed Abu-Siada, Yasser Alharbi

Abstract:

Doubly fed induction generators (DFIGs) are currently used extensively in variable-speed wind power plants due to their superior advantages, which include reduced converter rating, low cost, reduced losses, easy implementation of power factor correction schemes, variable-speed operation, and four-quadrant active and reactive power control capability. On the other hand, DFIG sensitivity to grid disturbances, especially voltage sags, represents the main disadvantage of the equipment. In this paper, a coil is proposed to be integrated within the DFIG converters to improve the overall performance of a DFIG-based wind energy conversion system (WECS). The charging and discharging of the coil are controlled through the duty cycle of the switches of the dc-dc chopper. Simulation results reveal the effectiveness of the proposed topology in improving the overall performance of the WECS under study.

Keywords: doubly fed induction generator, coil, wind energy conversion system, converter topology

Procedia PDF Downloads 664
21308 Model Observability – A Monitoring Solution for Machine Learning Models

Authors: Amreth Chandrasehar

Abstract:

Machine learning (ML) models are developed and run in production to solve various use cases that help organizations to be more efficient and help drive the business. But failures come at a massive development cost and in lost business opportunities: according to a Gartner report, 85% of data science projects fail, and one of the contributing factors is not paying attention to model observability. Model observability helps developers and operators pinpoint model performance issues such as data drift and identify the root cause of issues. This paper focuses on providing insights into incorporating model observability into model development and operationalizing it in production.
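
One common drift check an observability stack might run is the population stability index (PSI) between a training-time feature distribution and the one seen in production; a minimal sketch (the decision thresholds are an industry convention, not from this paper):

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index over equal-width bins of the pooled range.
    Conventional reading: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (a rule of thumb, not from this paper)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def fractions(sample):
        counts = [0] * bins
        for v in sample:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # floor avoids log(0)
    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job would compute this per feature on a schedule and alert when the index crosses a threshold.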

Keywords: model observability, monitoring, drift detection, ML observability platform

Procedia PDF Downloads 117
21307 The Role of the Basel Accords in Mitigating Systemic Risk

Authors: Wassamon Kun-Amornpong

Abstract:

When a financial crisis occurs, legal and regulatory reform follows in order to manage the turmoil and prevent a future crisis. One of the most important regulatory efforts to help cope with systemic risk and a financial crisis is the third version of the Basel Accord. Basel III has introduced measures and tools (e.g., the systemic risk buffer, countercyclical buffer, capital conservation buffer, and liquidity requirements) to mitigate systemic risk. Nevertheless, the effectiveness of these measures in adequately addressing the problem of contagious runs that can quickly spread throughout the financial system is questionable. This paper seeks to contribute to the knowledge regarding the role of the Basel Accords in mitigating systemic risk. The research question is: to what extent can the Basel Accords help control systemic risk in the financial markets? The paper tackles this question by analysing the concept of systemic risk. It then examines the weaknesses of the Basel Accords before and after the global financial crisis of 2008. Finally, it suggests some possible solutions to improve the Basel Accords. The rationale of the study is that academic work on systemic risk and financial crises is largely conducted from an economic or financial perspective; there is comparatively little research from the legal and regulatory perspective. The finding of the paper is that there are problems in all three pillars of the Basel Accords. With regard to Pillar I, the risk model is excessively complex while the benefits of its complexity are doubtful. Concerning Pillar II, the effectiveness of risk-based supervision in preventing systemic risk still depends largely upon its design and implementation; factors such as the organizational culture of the regulator and the political context within which risk-based supervision operates might be barriers to the success of Pillar II.
Meanwhile, Pillar III cannot provide adequate market discipline, as market participants do not always act in a rational way. In addition, the too-big-to-fail perception reduces the incentives of market participants to monitor risks. There have been some developments in resolution measures (e.g., TLAC and MREL) which might potentially help strengthen the incentive of market participants to monitor risks; however, those measures have their own weaknesses. The paper argues that if the weaknesses in the three pillars are resolved, the Basel Accords can be expected to contribute to the mitigation of systemic risk in a more significant way in the future.

Keywords: Basel accords, financial regulation, risk-based supervision, systemic risk

Procedia PDF Downloads 132
21306 A Study on Weight-Reduction of Double Deck High-Speed Train Using Size Optimization Method

Authors: Jong-Yeon Kim, Kwang-Bok Shin, Tae-Hwan Ko

Abstract:

The purpose of this paper is to suggest a weight-reduction design method for the aluminum extrusion carbody structure of a double-deck high-speed train using the size optimization method. Size optimization was used to optimize the thicknesses of the skin and ribs of the aluminum extrusions for the carbody structure. The thicknesses of the 1st underframe, 2nd underframe, solebar, and roof frame were selected as design variables. The results of the size optimization analysis showed that the weight of the aluminum extrusion could be reduced by 0.61 tons (5.60%) compared to the weight of the original carbody structure.
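
The logic of size optimization, shrinking a thickness until a constraint becomes active, can be illustrated on a single plate-bending constraint; this one-variable analytic toy is only a sketch of the idea, not the paper's FE-based optimization:

```python
import math

def min_plate_thickness(moment, width, sigma_allow):
    """Lightest plate thickness t satisfying the bending-stress constraint
    sigma = 6*M / (width * t**2) <= sigma_allow; at the optimum the
    constraint is active (sigma equals the allowable stress)."""
    return math.sqrt(6.0 * moment / (width * sigma_allow))
```

In the FE setting each extrusion thickness is a design variable and the constraints (stress, stiffness) are evaluated by analysis rather than a closed-form expression.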

Keywords: double deck high-speed train, size optimization, weight-reduction, aluminum extrusion

Procedia PDF Downloads 294
21305 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing

Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto

Abstract:

In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed; however, it is reported that these treatments are not comprehensive. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out using FE analyses. First, an FE model of the human knee joint in the normal ('intact') state was constructed using magnetic resonance (MR) tomography images and the image-construction code Materialise Mimics. Next, two meniscal injury models, with radial tears of the medial and lateral menisci, were constructed. In the FE analyses, a linear elastic constitutive law was adopted for the femur and tibia bones, a visco-hyperelastic constitutive law for the articular cartilage, and a visco-anisotropic hyperelastic constitutive law for the meniscus. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its location varied between the intact model and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes pathological change, for diagnostic purposes.
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model consisting of the femur, tibia, articular cartilage, and menisci was constructed based on MR images of a human knee joint, using the image-processing code Materialise Mimics and tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model; the material properties of the meniscus and articular cartilage were determined by curve fitting against experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other, and higher values than the intact one; both meniscal tears induce stress localization in the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate meniscal damage to the articular cartilage through mechanical functional assessment.
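
The generalized Kelvin model mentioned in conclusion 2 yields a Prony-series relaxation modulus; a minimal sketch with invented moduli and relaxation times (the paper's parameters come from curve fitting to its own tests):

```python
import math

def relaxation_modulus(t, e_inf, branches):
    """Generalized Kelvin (Prony series) relaxation modulus:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i).
    `branches` is a list of (E_i, tau_i) pairs."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in branches)
```

Curve fitting amounts to choosing `e_inf` and the `(E_i, tau_i)` pairs so this function tracks the measured stress-relaxation data.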

Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration

Procedia PDF Downloads 250
21304 Co-payment Strategies for Chronic Medications: A Qualitative and Comparative Analysis at European Level

Authors: Pedro M. Abreu, Bruno R. Mendes

Abstract:

The management of pharmacotherapy and the process of dispensing medicines are becoming critical in clinical pharmacy due to the increasing incidence and prevalence of chronic diseases, the complexity and customization of therapeutic regimens, the introduction of innovative and more expensive medicines, the unbalanced relation between expenditure and revenue, and the lack of rationalization associated with medication use. For these reasons, co-payments emerged in Europe in the 1970s and have been applied in healthcare over the past few years. Co-payments lead to a rationing and rationalization of users' access to healthcare services and products and, simultaneously, to a qualification and improvement of those services and products for the end-user. This analysis, covering hospital practices in particular and co-payment strategies in general, was carried out across all European regions and identified four reference countries that apply this tool repeatedly and with differing approaches. The structure, content, and adaptation of European co-payments were analyzed through 7 qualitative attributes and 19 performance indicators, and the results were expressed in a scorecard, allowing the conclusion that the German models (total scores of 68.2% and 63.6% for the two elected co-payments) can achieve more compliance and effectiveness, the English models (total score of 50%) can be more accessible, and the French models (total score of 50%) can be more adequate to the socio-economic and legal framework. Other European models did not show the same quality and/or performance, so they were not taken as a standard for the future design of co-payment strategies. In this sense, co-payments can be seen not only as a strategy to moderate the consumption of healthcare products and services, but especially to improve them, as well as a strategy to increase the value that the end-user assigns to these services and products, such as medicines.

Keywords: clinical pharmacy, co-payments, healthcare, medicines

Procedia PDF Downloads 256
21303 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite element models (FEMs) are widely used to study and predict the dynamic properties of structures, and usually the prediction is much more accurate for a single component than for an assembly. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs able to run nonlinear analysis in the time domain treat the whole structure as nonlinear, even if there is one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and allows obtaining the nonlinear frequency response functions (NLFRFs) through an 'updating' process of the linear frequency response functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the nonlinear SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second example considers a full-aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
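
For the linear starting point of such a method, the receptance FRF matrix of a two-DOF system is simply the inverse of the dynamic stiffness matrix; a minimal sketch (the matrices below are illustrative, not the paper's models):

```python
def frf_2dof(M, C, K, w):
    """Receptance matrix H(w) = (K - w^2 M + i*w*C)^(-1) for a 2-DOF system,
    computed with the closed-form 2x2 inverse."""
    D = [[K[i][j] - w * w * M[i][j] + 1j * w * C[i][j] for j in range(2)]
         for i in range(2)]
    det = D[0][0] * D[1][1] - D[0][1] * D[1][0]
    return [[D[1][1] / det, -D[0][1] / det],
            [-D[1][0] / det, D[0][0] / det]]
```

The nonlinear SDMM described above starts from FRFs like these and iteratively updates them with the localized nonlinear modifications.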

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 270
21302 Production of Pig Iron by Smelting of Blended Pre-Reduced Titaniferous Magnetite Ore and Hematite Ore Using Lean Grade Coal

Authors: Bitan Kumar Sarkar, Akashdeep Agarwal, Rajib Dey, Gopes Chandra Das

Abstract:

The rapid depletion of high-grade iron ore (Fe2O3) has drawn attention to the use of other sources of iron ore. Titaniferous magnetite ore (TMO) is a special type of magnetite ore with high titania content (23.23% TiO2 in this case). Due to its high TiO2 content and high density, TMO cannot be treated by conventional smelting reduction. In the present work, the TMO was collected from the high-grade metamorphic terrain of the Precambrian Chotanagpur gneissic complex situated in the eastern part of India (Shaltora area, Bankura district, West Bengal), and the hematite ore was collected from Visakhapatnam Steel Plant (VSP), Visakhapatnam; at VSP, iron ore is received from the Bailadila mines, Chattisgarh, of M/s. National Mineral Development Corporation. The preliminary characterization of the TMO and hematite ore (HMO) was investigated by WDXRF, XRD, and FESEM analyses. Similarly, good-quality coal (mainly coking coal) is also being depleted fast. The basic purpose of this work is to find how lean-grade coal can be utilised along with TMO for smelting to produce pig iron. The lean-grade coal was characterised using TG/DTA, proximate, and ultimate analyses; the boiler-grade coal was found to contain 28.08% fixed carbon and 28.31% volatile matter. TMO fines (below 75 μm) and HMO fines (below 75 μm) were separately agglomerated with lean-grade coal fines (below 75 μm) in the form of briquettes, using binders such as bentonite and molasses. These green briquettes were dried first in an oven at 423 K for 30 min and then reduced isothermally in a tube furnace at 1323 K, 1373 K, and 1423 K for 30 min and 60 min. After reduction, the reduced briquettes were characterized by XRD and FESEM analyses. The best reduced TMO and HMO samples were taken and blended in three different weight ratios of 1:4, 1:8, and 1:12 of TMO:HMO.
The chemical analysis of the three blended samples was carried out, and the degree of metallisation of iron was found to be 89.38%, 92.12%, and 93.12%, respectively. These three blended samples were briquetted using binders such as bentonite and lime. Thereafter, the blended briquettes were separately smelted in a raising hearth furnace at 1773 K for 30 min. The pig iron formed was characterized using XRD and microscopic analyses. It can be concluded that a 90% yield of pig iron can be achieved when the blend ratio of TMO:HMO is 1:4.5; this means that, for a 90% yield, the maximum TMO that could be used in the blend is about 18%.
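
The closing arithmetic can be checked directly: a 1:4.5 TMO:HMO blend puts the TMO mass share at 1/(1 + 4.5) ≈ 18%:

```python
def tmo_share(tmo_parts, hmo_parts):
    """Mass fraction of TMO in a TMO:HMO blend given its parts ratio."""
    return tmo_parts / (tmo_parts + hmo_parts)
```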

Keywords: briquetting reduction, lean grade coal, smelting reduction, TMO

Procedia PDF Downloads 324
21301 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis

Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho

Abstract:

This paper compares fuzzy-machine learning algorithms, namely Support Vector Machine (SVM) and K-Nearest Neighbour (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data is pre-processed using the Interval Type-2 Fuzzy Logic (IT2FL) algorithm, Min-Max Normalization, and Principal Component Analysis (PCA), which are used to predict feature labels, normalize the dataset, and select relevant features, respectively. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used in the training of the aforementioned machine learning models. The K-fold (K = 10) cross-validation method is used to evaluate the performance of the models using the metrics ROC (receiver operating characteristic), specificity, and sensitivity; the models are also tested with 20% of the dataset. The validation results show KNN is the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753.
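
The normalization and KNN stages of such a pipeline can be sketched in pure Python; the temperature/smoke readings below are invented, and the real pipeline also applies IT2FL labeling and PCA before classification:

```python
from collections import Counter

def minmax_fit(rows):
    """Per-feature (min, max) pairs for min-max normalization."""
    cols = list(zip(*rows))
    return [(min(c), max(c)) for c in cols]

def minmax_apply(row, bounds):
    """Scale each feature to [0, 1] using the fitted bounds."""
    return [(v - lo) / (hi - lo) if hi > lo else 0.0
            for v, (lo, hi) in zip(row, bounds)]

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training points (squared Euclidean)."""
    dist = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), y)
                  for row, y in zip(train_X, train_y))
    votes = Counter(y for _, y in dist[:k])
    return votes.most_common(1)[0][0]
```

Cross-validation then repeats fit/predict over K = 10 folds and aggregates ROC, specificity, and sensitivity.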

Keywords: machine learning algorithms, interval type-2 fuzzy logic, fire outbreak, support vector machine, K-nearest neighbour, principal component analysis

Procedia PDF Downloads 190
21300 Reduced Glycaemic Impact by Kiwifruit-Based Carbohydrate Exchanges Depends on Both Available Carbohydrate and Non-Digestible Fruit Residue

Authors: S. Mishra, J. Monro, H. Edwards, J. Podd

Abstract:

When a fruit such as kiwifruit is consumed, its tissues are released from the physical/anatomical constraints existing in the fruit. During digestion they may expand several-fold to achieve a hydrated solids volume far greater than that of the original fruit, and occupy the available space in the gut, where they surround and interact with other food components. Within the cell wall dispersion, in vitro digestion of co-consumed carbohydrate, diffusion of digestion products, and the mixing responsible for mass transfer of nutrients to the gut wall for absorption were all retarded. All of the foregoing processes may be involved in the glycaemic response to carbohydrate foods consumed with kiwifruit, such as breakfast cereal. To examine their combined role in reducing the glycaemic response to wheat cereal consumed with kiwifruit, we formulated diets containing equal amounts of breakfast cereal, with the addition of either kiwifruit or sugars of the same composition and quantity as in kiwifruit; the only difference between the diets was therefore the presence of non-digestible fruit residues. The diet containing the entire dispersed kiwifruit significantly reduced the glycaemic response amplitude and the area under the 0-120 min incremental blood glucose response curve (IAUC), compared with the equicarbohydrate diet containing the added kiwifruit sugars. It also slightly but significantly increased the 120-180 min IAUC by preventing a postprandial overcompensation, indicating improved homeostatic blood glucose control. In a subsequent study, in which kiwifruit was used in a carbohydrate-exchange format with the kiwifruit carbohydrate partially replacing breakfast cereal in equal-carbohydrate meals, blood glucose was further reduced without a loss of satiety, and with a reduction in insulin demand. The results show that kiwifruit may be a valuable component of low glycaemic impact diets.
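
The incremental AUC reported here is conventionally computed by the trapezoidal rule on glucose increments above the fasting baseline, ignoring area below baseline; a sketch (times in minutes, glucose values invented):

```python
def iauc(times, glucose):
    """Incremental area under the curve above the fasting baseline
    (trapezoidal rule; segments below baseline contribute nothing)."""
    base = glucose[0]
    total = 0.0
    for t0, t1, g0, g1 in zip(times, times[1:], glucose, glucose[1:]):
        a, b = g0 - base, g1 - base
        dt = t1 - t0
        if a > 0 and b > 0:
            total += (a + b) / 2.0 * dt          # fully above baseline
        elif a > 0:
            total += a * a / (a - b) / 2.0 * dt  # crosses baseline downward
        elif b > 0:
            total += b * b / (b - a) / 2.0 * dt  # crosses baseline upward
    return total
```

The 0-120 min and 120-180 min IAUCs in the abstract would simply slice the sampling times accordingly.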

Keywords: carbohydrate, digestion, glycaemic response, kiwifruit

Procedia PDF Downloads 499
21299 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications

Authors: H. Hruschka

Abstract:

This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and the topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables; variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves: one half is used for estimation, the other serves as holdout data. Each model is evaluated by its log likelihood on the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables characterized by the product categories to which they are related differ strongly between these three models. To derive managerial implications we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research by appropriate extensions. To include predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
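
The bipartite visible/hidden structure described above can be sketched with scikit-learn's BernoulliRBM on synthetic binary baskets; the data, layer size, and hyperparameters below are illustrative assumptions, not the paper's grocery data set or settings:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
# Synthetic binary "market baskets": 200 baskets over 20 product categories,
# with two latent shopper types that co-purchase different category blocks.
shopper_type = rng.integers(0, 2, size=200)
probs = np.where(shopper_type[:, None] == 0,
                 np.r_[np.full(10, 0.7), np.full(10, 0.1)],
                 np.r_[np.full(10, 0.1), np.full(10, 0.7)])
baskets = (rng.random((200, 20)) < probs).astype(float)

# One visible layer (purchases) and one hidden layer; no connections
# within a layer, matching the bipartite structure described above.
rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(baskets)

hidden = rbm.transform(baskets)  # P(h = 1 | v) for every basket
print(hidden.shape)              # (200, 4)
```

Stacking such machines, with each layer's hidden activations feeding the next, gives the three-layer deep belief net the abstract compares against.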

Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models

Procedia PDF Downloads 206
21298 Elastoplastic and Ductile Damage Model Calibration of Steels for Bolt-Sphere Joints Used in China’s Space Structure Construction

Authors: Huijuan Liu, Fukun Li, Hao Yuan

Abstract:

The bolted spherical node is a common type of joint in space steel structures. The bolt-sphere joint portion almost always controls the bearing capacity of the bolted spherical node. Investigating the bearing performance and progressive failure in service often requires high-fidelity numerical models. This paper focuses on the constitutive models of the bolt steel and sphere steel used in China’s space structure construction. The elastoplastic model is determined from standard tensile tests and calibrated using the Voce saturation hardening rule. Ductile damage is found to be dominant based on fractography analysis. The Rice-Tracey ductile fracture criterion is then selected, and its parameters are calibrated based on tensile tests of notched specimens. These calibrated material models can benefit research and engineering work in similar fields.
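
The Voce saturation hardening rule mentioned above takes the standard form σ = σ_y + Q_∞(1 − e^(−b·ε_p)); a minimal sketch with illustrative parameters (NOT the paper's calibrated values for the bolt or sphere steels):

```python
import numpy as np

def voce_flow_stress(eps_p, sigma_y, Q_inf, b):
    """Voce saturation hardening: the flow stress rises from the yield
    stress sigma_y and saturates at sigma_y + Q_inf with plastic strain."""
    return sigma_y + Q_inf * (1.0 - np.exp(-b * eps_p))

# Illustrative parameters (MPa and dimensionless), assumed for the sketch.
eps_p = np.linspace(0.0, 0.5, 6)
stress = voce_flow_stress(eps_p, sigma_y=400.0, Q_inf=150.0, b=20.0)
print(stress.round(1))
```

Calibration then amounts to fitting σ_y, Q_∞, and b to the post-yield portion of the measured stress-strain curve.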

Keywords: bolt-sphere joint, steel, constitutive model, ductile damage, model calibration

Procedia PDF Downloads 142
21297 Influence of Gestational Diabetes Mellitus on the Activity of Steroid C17-Hydroxylase-C17,20-Lyase in Patients with Intrahepatic Cholestasis of Pregnancy

Authors: Leona Ondrejikova, Martin Hill, Antonin Parizek

Abstract:

The incidence of gestational diabetes mellitus (GDM) is higher in women predisposed to developing intrahepatic cholestasis of pregnancy (ICP). Both diseases are associated with altered steroidogenesis when compared with non-ICP controls. However, the effect of GDM on circulating steroids in ICP patients remains unclear. The question remains whether the levels of circulating steroids differ between ICP patients with and without GDM. In total, 10 ICP patients without GDM (ICP+GDM-), 7 ICP patients with GDM (ICP+GDM+), and 15 controls (ICP-GDM-) were monitored during late gestation, at labor, and during three periods postpartum (day 5, week 3, and week 6 postpartum) (Šimják et al., 2018). The relationships between steroid profiles and patient status were evaluated using an ANOVA model consisting of a subject factor; the between-subject factors group (ICP+GDM+, ICP+GDM-, ICP-GDM-), gestational age at the diagnosis of ICP, and gestational age at labor; and the within-subject factor stage and the ICP × stage interaction. The levels of the C21 and C19 Δ5 steroids and the 5α/β-reduced C19 steroids were highest in ICP+GDM+, while those for the ICP-GDM- and ICP+GDM- groups were lower. For the C21 Δ4 steroids and their 5α/β-reduced metabolites, the levels were highest in the ICP+GDM- group, intermediate in ICP-GDM-, and lowest in ICP+GDM+. The higher concentrations in the ICP+GDM- group may be of importance, as 5α-pregnane-3α,20α-diol disulfate is considered the substance inducing ICP. In general, these data show that comorbidity with GDM substantially changes the steroidome in ICP patients towards higher activity of the CYP17A1 lyase step in the adrenal zona reticularis and reduced activity of the CYP17A1 hydroxylase step in the zona fasciculata. This is consistent with our previously published hypothesis about the critical role of the maternal zona reticularis in the pathophysiology of ICP. Our present data also indicate that comorbidity with GDM might moderate the severity of ICP in this way.

Keywords: CYP17A1, GC-MS, gestational diabetes mellitus, intrahepatic cholestasis of pregnancy

Procedia PDF Downloads 143
21296 Modeling Core Flooding Experiments for Co₂ Geological Storage Applications

Authors: Avinoam Rabinovich

Abstract:

CO₂ geological storage is a proven technology for reducing anthropogenic carbon emissions, which is paramount for achieving the ambitious net zero emissions goal. Core flooding experiments are an important step in any CO₂ storage project, allowing us to gain information on the flow of CO₂ and brine in porous rock extracted from the reservoir. This information is important for understanding basic mechanisms related to CO₂ geological storage as well as for reservoir modeling, which is an integral part of a field project. In this work, a new method for constructing accurate models of CO₂-brine core flooding will be presented. Results for synthetic cases and real experiments will be shown and compared with numerical models to exhibit their predictive capabilities. Furthermore, the various mechanisms that impact the CO₂ distribution and trapping in the rock samples will be discussed, with examples from models and experiments. The new method entails solving an inverse problem to obtain a three-dimensional permeability distribution, which, along with the relative permeability and capillary pressure functions, constitutes a model of the flow experiments. The model is more accurate when data from a number of experiments are combined to solve the inverse problem. This model can then be used to test other injection flow rates and fluid fractions that have not been tested in experiments. The models can also be used to bridge the gap between small-scale (sub-core and core scale) capillary heterogeneity effects and large-scale (reservoir scale) effects, known as the upscaling problem.
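
As a toy one-dimensional analogue of the inverse problem described above (the work solves for a full three-dimensional permeability field; the fluid properties, core dimensions, and rates below are assumed for illustration), one can recover an effective permeability from measured pressure drops via Darcy's law:

```python
import numpy as np
from scipy.optimize import least_squares

# Darcy's law for steady 1D single-phase flow: dP = q * mu * L / (k * A).
mu, L, A = 5.0e-4, 0.1, 1.0e-3             # viscosity (Pa.s), core length (m), area (m^2)
q = np.array([1.0, 2.0, 4.0, 8.0]) * 1e-8  # injection rates (m^3/s)

k_true = 1.0e-13                           # ~100 mD, synthetic "truth"
dp_meas = q * mu * L / (k_true * A)
dp_meas = dp_meas * (1 + 0.02 * np.random.default_rng(1).standard_normal(4))  # 2% noise

def residual(log_k):
    # mismatch between modeled and "measured" pressure drops
    return q * mu * L / (np.exp(log_k) * A) - dp_meas

fit = least_squares(residual, x0=np.log(1e-12))
k_est = float(np.exp(fit.x[0]))
print(f"estimated permeability: {k_est:.2e} m^2")
```

The full problem replaces the scalar unknown with a voxelized permeability field and the analytic Darcy relation with a forward flow simulation, but the fitting structure is the same.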

Keywords: CO₂ geological storage, residual trapping, capillary heterogeneity, core flooding, CO₂-brine flow

Procedia PDF Downloads 77
21295 A Tool for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation of an institutional risk profile for the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support setting up risk factors with only the values most important to a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors along a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and automatically aggregated file format metadata from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
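
The multidimensional risk-vector idea can be sketched minimally as a weighted aggregation with thresholding; the factor names, weights, and cut-offs below are invented for illustration and merely stand in for the survey-derived fuzzy models:

```python
# Hypothetical risk factors for one file format, each scaled to [0, 1]
# (names and values assumed, not taken from the paper's survey).
risk_vector = {
    "software_support": 0.8,    # few tools still render the format
    "format_complexity": 0.4,
    "open_specification": 0.1,  # spec is public -> low risk
    "community_adoption": 0.6,
}
weights = {"software_support": 0.4, "format_complexity": 0.2,
           "open_specification": 0.2, "community_adoption": 0.2}

# Weighted aggregation of the risk vector into a single endangerment score.
score = sum(risk_vector[k] * weights[k] for k in risk_vector)

# Simple crisp thresholds standing in for the fuzzy membership functions.
level = "high" if score >= 0.6 else "medium" if score >= 0.3 else "low"
print(f"endangerment score {score:.2f} -> {level}")
```

An institutional profile then amounts to choosing which factors to include and how to weight them for that organisation.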

Keywords: digital information management, file format, endangerment analysis, fuzzy models

Procedia PDF Downloads 411
21294 Executive Order as an Effective Tool in Combating Insecurities and Human Rights Violations: The Case of the Special Anti-Robbery Squad and Youths in Nigeria

Authors: Cita Ayeni

Abstract:

Following countless human rights violations in Nigeria by the various arms and agencies of government, from the military to the federal police and other law enforcement agencies, the country has been riddled with reports of acts by these agencies against citizens, ranging from illegal arrest and imprisonment, torture, enforced disappearances, and extrajudicial killings, to mention a few. This paper focuses on SARS (Special Anti-Robbery Squad), a division of the Nigeria Police Force, and its reported threats to the people’s security, particularly that of Nigerian youths, through continuous violence, extortion, illegal arrest and imprisonment, terror, and extrajudicial activities resulting in maiming and, in most cases, death, thus infringing on the human rights of the very people it is sworn to protect. This research further analyzes how the activities of SARS have over the years instilled fear in the average Nigerian youth, preventing free participation in daily life, education, work, and individual development, in turn impeding the realization of their full potential for growth and participation in collective national development. This research analyzes the executive order by the then Acting President (Vice-President) of Nigeria directing the overhauling of SARS, and its implementation by the federal police force, to determine whether it is enough to prevent or put a stop to the continuous human rights abuse and threats to the security of individual citizens. It concludes that, although the order by the Acting President was given with the intent to halt the various violations by SARS, and the Inspector General of Police (IGP) subsequently released a statement following the order, the Nigerian bureaucracy, with its history of incompetence and of returning to 'business as usual' once public outcry subsides, makes it likely that adequate follow-up will not be put in place and that these violations will slowly be 'swept under the rug', with SARS officials not held accountable. It is therefore recommended that the federal government, through the NPF and following the reforms made, in collaboration with independent human rights and civil society organizations, periodically produce unbiased and publicly accessible reports on the implementation of these reforms and the progress made. This would go a long way towards assuring the public that the restructuring is actually being carried out, reducing fear among the youths, and restoring some public faith in the government.

Keywords: special anti-robbery squad, youths in Nigeria, overhaul, insecurities, human rights violations

Procedia PDF Downloads 308
21293 Developing A Third Degree Of Freedom For Opinion Dynamics Models Using Scales

Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle

Abstract:

Opinion dynamics models use an agent-based modeling approach to model people’s opinions. A model’s properties are usually explored by varying its two 'degrees of freedom': the interaction rule and the network topology. The latter defines the connections, and thus the possible interactions, among agents. The interaction rule, instead, determines how agents select each other and update their own opinions. Here we show the existence of a third degree of freedom. This can be used to turn one model into another or to change a model’s output by up to 100% of its initial value. Opinion dynamics models represent the evolution of real-world opinions parsimoniously. Thus, it is fundamental to know how a real-world opinion (e.g., support for a candidate) is turned into a number. Specifically, we want to know whether, by choosing a different opinion-to-number transformation, the model’s dynamics would be preserved. This transformation is typically not addressed in the opinion dynamics literature. However, it has already been studied in psychometrics, a branch of psychology. In this field, real-world opinions are converted into numbers using abstract objects called 'scales.' These scales can be converted into one another, in the same way as we convert meters to feet. Thus, in our work, we analyze how such scale transformations may affect opinion dynamics models. We perform our analysis both using mathematical modeling and validating it via agent-based simulations. To distinguish between scale transformation and measurement error, we first analyze the case of perfect scales (i.e., no error or noise). Here we show that a scale transformation may change the model’s dynamics up to a qualitative level, meaning that a researcher may reach a totally different conclusion from the same dataset just by slightly changing the way the data are pre-processed. Indeed, we quantify that this effect may alter the model’s output by up to 100%. Using two models from the standard literature, we show that a scale transformation can turn one model into the other. This transformation is exact, and it holds for every result. Lastly, we also test the case of real-world data (i.e., finite precision). We perform this test using a 7-point Likert scale, showing how even a small scale change may result in different predictions or a different number of opinion clusters. Because of this, we think that scale transformation should be considered a third degree of freedom for opinion dynamics. Indeed, its properties have a strong impact on both theoretical models and their application to real-world data.
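
The effect of a monotone re-scaling on an opinion dynamics model can be sketched with the standard Deffuant bounded-confidence model as a stand-in (the abstract does not name its two models; the cubic re-scaling and all parameters below are illustrative assumptions):

```python
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=0):
    """Deffuant bounded-confidence model: a random pair of agents moves
    toward each other's opinion when the opinions differ by less than eps."""
    x = opinions.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i, j = rng.integers(0, len(x), size=2)
        if abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, tol=0.05):
    # crude cluster count: bin the final opinions at resolution tol
    return len(np.unique(np.round(np.sort(x) / tol)))

raw = np.random.default_rng(1).random(200)  # opinions measured on one scale
rescaled = raw ** 3                         # same opinions on a re-scaled axis

final_raw = deffuant(raw)
final_rescaled = deffuant(rescaled)
print(n_clusters(final_raw), n_clusters(final_rescaled))
```

Because the interaction rule compares opinion *distances*, a nonlinear but monotone re-scaling of the same underlying opinions can change which pairs interact and hence the resulting cluster structure.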

Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics

Procedia PDF Downloads 159
21292 Adsorptive Performance of Surface Modified Montmorillonite in Vanadium Removal from Real Mine Water

Authors: Opeyemi Atiba-Oyewo, Taile Y. Leswfi, Maurice S. Onyango, Christian Wolkersdorfer

Abstract:

This paper describes the preparation of surface-modified montmorillonite using hexadecyltrimethylammonium bromide (HDTMA-Br) for the removal of vanadium from mine water. The adsorbent was characterised before and after adsorption by Fourier transform infrared spectroscopy (FT-IR), X-ray diffraction (XRD) and scanning electron microscopy (SEM), while the amount of vanadium adsorbed was determined by ICP-OES. The batch adsorption method was employed using vanadium concentrations in solution ranging from 50 to 320 mg/L as well as vanadium tailings seepage water from a South African mine. Solution pH, temperature and sorbent mass were also varied. Results show that the adsorption capacity was affected by solution pH, temperature, sorbent mass and the initial concentration. The electrical conductivity of the mine water was measured before and after adsorption to estimate the total dissolved solids. Equilibrium isotherm results revealed that vanadium sorption follows the Freundlich isotherm, indicating that the surface of the sorbent is heterogeneous. The pseudo-second-order kinetic model gave the best fit to the kinetic experimental data, compared to the first-order and Elovich models. The results of this study may be used to predict the uptake efficiency of South African montmorillonite in view of its application for the removal of vanadium from mine water. However, the choice of this adsorbent for the uptake of vanadium or other contaminants will depend on the composition of the effluent to be treated.
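
The pseudo-second-order fit mentioned above is commonly done on the linearized form t/qₜ = 1/(k·qₑ²) + t/qₑ, where the slope gives 1/qₑ and the intercept gives 1/(k·qₑ²); a sketch on synthetic uptake data (parameter values are illustrative, not the study's measurements):

```python
import numpy as np

# Synthetic uptake data following the pseudo-second-order (PSO) model
# q(t) = qe^2 * k * t / (1 + qe * k * t); parameters are illustrative.
qe_true, k_true = 25.0, 0.01  # mg/g and g/(mg*min)
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)  # minutes
q = qe_true**2 * k_true * t / (1 + qe_true * k_true * t)

# Linearized PSO: t/q = 1/(k*qe^2) + t/qe  ->  slope = 1/qe.
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(f"qe = {qe_fit:.1f} mg/g, k = {k_fit:.4f} g/(mg*min)")
# qe = 25.0 mg/g, k = 0.0100 g/(mg*min)
```

With real data the quality of this linear fit (e.g., its R²) is what is compared against the first-order and Elovich alternatives.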

Keywords: adsorption, vanadium, modified montmorillonite, equilibrium, kinetics, mine water

Procedia PDF Downloads 436
21291 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir

Authors: David Lall, Vikram Vishal, P. G. Ranjith

Abstract:

Numerical modeling of gas production from hydrate-bearing reservoirs requires the solution of various thermal, hydrological, chemical, and mechanical phenomena in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters, since it determines both heat transfer and mass transfer. Permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs, since it depends on the saturation of gas hydrates and hence is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently, different morphologies of hydrate deposits, on the estimates of gas production using depressurization at the reservoir scale. We observe significant differences in produced water volumes and cumulative mass of produced gas between the models, thereby highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.
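
A widely used simplification of such permeability-reduction models is a power law in hydrate saturation, k/k₀ = (1 − S_h)^n, where the exponent encodes the assumed morphology; the exponents below are illustrative only (specific grain-coating and pore-filling formulations differ in detail):

```python
import numpy as np

def k_reduction(sh, n):
    """Power-law permeability reduction k/k0 = (1 - Sh)^n; the exponent n
    stands in for the hydrate morphology (larger n = stronger reduction)."""
    return (1.0 - sh) ** n

sh = np.linspace(0.0, 0.8, 5)  # hydrate saturation
for n in (3, 10):              # illustrative exponents, not calibrated values
    print(n, k_reduction(sh, n).round(4))
```

The spread between the two curves at moderate saturations is exactly the kind of model-dependent uncertainty that propagates into the production estimates discussed above.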

Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media

Procedia PDF Downloads 225
21290 Implementation and Validation of a Damage-Friction Constitutive Model for Concrete

Authors: L. Madouni, M. Ould Ouali, N. E. Hannachi

Abstract:

Two constitutive models for concrete are available in ABAQUS/Explicit, the Brittle Cracking model and the Concrete Damaged Plasticity model, and their suitability and limitations are well known. The aim of the present paper is to implement a damage-friction concrete constitutive model and to evaluate its performance by comparing the predicted response with experimental data. The constitutive formulation of the material model is reviewed. To obtain consistent results, parameter identification and calibration for the model have been performed. Several numerical simulations are presented in this paper, whose results validate the capability of the proposed model to reproduce the typical nonlinear behavior of concrete structures under different monotonic and cyclic load conditions. The results of the evaluation will be used for recommendations concerning the application and further improvement of the investigated model.

Keywords: Abaqus, concrete, constitutive model, numerical simulation

Procedia PDF Downloads 370
21289 Biodegradable Cellulose-Based Materials for the Use in Food Packaging

Authors: Azza A. Al-Ghamdi, Abir S. Abdel-Naby

Abstract:

Cellulose acetate (CA) is a natural biodegradable polymer. It forms transparent films by the casting technique. CA suffers from a high degree of water permeability as well as low thermal stability at high temperatures. To adapt CA films to the manufacture of food packaging, their thermal and mechanical properties should be improved. Modifying CA by grafting it with N-aminophenyl maleimide (N-APhM) led to the construction of hydrophobic branches throughout the polymeric matrix, which reduced its wettability compared to the parent CA. The branches built onto the polymeric chains were characterized by UV/Vis, 13C-NMR and ESEM. The improvement of the thermal properties was investigated and compared to the parent CA using thermal gravimetric analysis (TGA), differential scanning calorimetry (DSC), differential thermal analysis (DTA), contact angle and mechanical testing measurements. The results revealed that the water uptake was reduced by increasing the graft percentage. The thermal and mechanical properties were also improved.

Keywords: cellulose acetate, food packaging, graft copolymerization, thermal properties

Procedia PDF Downloads 227
21288 Implicit Off-Grid Block Method for Solving Fourth and Fifth Order Ordinary Differential Equations Directly

Authors: Olusola Ezekiel Abolarin, Gift E. Noah

Abstract:

This research work presents an innovative procedure to numerically approximate higher-order initial value problems (IVPs) of ordinary differential equations (ODEs) using the Legendre polynomial as the basis function. The proposed method is a half-step, self-starting block integrator employed to approximate fourth- and fifth-order IVPs without reduction to lower order. The method was developed through a collocation and interpolation approach. The basic properties of the method, such as convergence, consistency and stability, were investigated. Several test problems were considered, and the results compared favorably with both exact solutions and other existing methods.
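
For context, the conventional alternative that the block method avoids is reduction of the higher-order IVP to a first-order system; a sketch on an assumed test problem (y'''' = y with unit initial conditions, exact solution eˣ; not one of the paper's test problems):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Conventional baseline: rewrite the fourth-order IVP as four first-order ODEs.
# Assumed test problem: y'''' = y, y(0) = y'(0) = y''(0) = y'''(0) = 1,
# whose exact solution is y = exp(x).
def rhs(x, u):  # u = [y, y', y'', y''']
    return [u[1], u[2], u[3], u[0]]

sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 1.0, 1.0, 1.0], rtol=1e-10, atol=1e-12)
print(abs(sol.y[0, -1] - np.e))  # absolute error at x = 1
```

A direct block method instead collocates the fourth-order equation itself, avoiding the extra variables this reduction introduces.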

Keywords: initial value problem, ordinary differential equation, implicit off-grid block method, collocation, interpolation

Procedia PDF Downloads 89
21287 First Order Reversal Curve Method for Characterization of Magnetic Nanostructures

Authors: Bashara Want

Abstract:

One of the key factors limiting the performance of magnetic memory is that the coercivity has a distribution of finite width, and reversal starts at the weakest link in that distribution. One must therefore first know the distribution of coercivities in order to learn how to narrow it and increase the coercive field. The first-order reversal curve (FORC) method characterizes a system with hysteresis via the distribution of local coercivities and, in addition, the local interaction fields. The method is more versatile than conventional major hysteresis loops, which give only the statistical behaviour of the magnetic system. The FORC method will be presented and discussed at the conference.
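
The FORC distribution is conventionally defined as ρ(Hₐ, H_b) = −½ ∂²M/∂Hₐ∂H_b, a mixed second derivative of the magnetization over the grid of reversal fields Hₐ and applied fields H_b; a numerical sketch on a smooth toy surface (not measured data):

```python
import numpy as np

# Synthetic magnetization surface M(Ha, Hb) on a grid of reversal fields Ha
# and applied fields Hb; a toy single-feature surface for illustration.
ha = np.linspace(-1.0, 0.0, 50)   # reversal fields
hb = np.linspace(0.0, 1.0, 50)    # applied fields
HA, HB = np.meshgrid(ha, hb, indexing="ij")
M = np.tanh(3 * (HB - HA) - 1.5)

# FORC distribution: rho = -(1/2) d2M / (dHa dHb), via nested gradients.
dM_dHb = np.gradient(M, hb, axis=1)
rho = -0.5 * np.gradient(dM_dHb, ha, axis=0)

print(rho.shape)  # (50, 50)
```

In practice the derivative is taken with local polynomial smoothing rather than raw finite differences, and ρ plotted in (coercivity, interaction-field) coordinates Hc = (H_b − Hₐ)/2, Hu = (H_b + Hₐ)/2.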

Keywords: magnetic materials, hysteresis, first-order reversal curve method, nanostructures

Procedia PDF Downloads 86
21286 Aggregate Production Planning Framework in a Multi-Product Factory: A Case Study

Authors: Ignatio Madanhire, Charles Mbohwa

Abstract:

This study looks at the best model of the aggregate planning activity in an industrial entity and uses the trial-and-error method on spreadsheets to solve aggregate production planning problems. A linear programming model is also introduced to optimize the aggregate production planning problem. Application of the models in a furniture production firm is evaluated to demonstrate that practical and beneficial solutions can be obtained from the models. Finally, benchmarking against other furniture manufacturers was undertaken to assess the relevance and level of use of such models in other furniture firms.
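
A minimal linear-programming formulation of the kind described can be sketched with scipy; all demand, capacity, and cost figures below are invented for illustration, not the case study's data:

```python
from scipy.optimize import linprog

# Toy 3-period aggregate plan: choose production P_t and end-of-period
# inventory I_t to meet demand at minimum production-plus-holding cost,
# subject to a per-period capacity limit.
demand = [100, 150, 120]
cap = 140        # units producible per period
c_prod = 10.0    # cost per unit produced
c_hold = 2.0     # cost per unit held per period

# Variables: [P1, P2, P3, I1, I2, I3]; balance I_t = I_{t-1} + P_t - d_t, I_0 = 0.
c = [c_prod] * 3 + [c_hold] * 3
A_eq = [
    [1, 0, 0, -1,  0,  0],  # P1 - I1 = d1
    [0, 1, 0,  1, -1,  0],  # I1 + P2 - I2 = d2
    [0, 0, 1,  0,  1, -1],  # I2 + P3 - I3 = d3
]
res = linprog(c, A_eq=A_eq, b_eq=demand,
              bounds=[(0, cap)] * 3 + [(0, None)] * 3)
print(res.fun, res.x.round(1))
```

Here period 2's demand exceeds capacity, so the optimizer pre-builds 10 units in period 1; the same structure scales to workforce, overtime, and backorder variables.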

Keywords: aggregate production planning, trial and error, linear programming, furniture industry

Procedia PDF Downloads 562
21285 Machine Learning Techniques for Estimating Ground Motion Parameters

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in the subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, such as artificial neural networks, random forests, and support vector machines. The algorithms are adjusted to quantify the event-to-event and site-to-site variability of the ground motions by implementing these terms as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was chosen because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, and usability of the models are discussed in the evaluation process. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, and in particular, the random forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
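
A random-forest ground-motion predictor of the kind compared above can be sketched on synthetic data (the functional form generating ln(PGA) below is a toy stand-in, not a published ground-motion model, and the feature set is a simplified assumption):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(3.0, 5.8, n)       # moment magnitude
dist = rng.uniform(4.0, 500.0, n)    # hypocentral distance (km)
vs30 = rng.uniform(200.0, 800.0, n)  # site-condition proxy (m/s)

# Synthetic ln(PGA): simple magnitude scaling, geometric attenuation, and a
# site term (toy coefficients), plus noise standing in for aleatory variability.
ln_pga = (1.2 * mag - 1.6 * np.log(dist)
          - 0.3 * np.log(vs30 / 760.0) + rng.normal(0.0, 0.5, n))

X = np.column_stack([mag, dist, vs30])
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:1500], ln_pga[:1500])

pred = rf.predict(X[1500:])
rmse = float(np.sqrt(np.mean((pred - ln_pga[1500:]) ** 2)))
print(f"holdout RMSE in ln(PGA) units: {rmse:.2f}")
```

Unlike a regression with a fixed functional form, the forest learns the magnitude scaling and distance attenuation directly from the data, which is the flexibility the study exploits.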

Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine

Procedia PDF Downloads 125