Search results for: prediction
371 The Extent of Land Use Externalities in the Fringe of Jakarta Metropolitan: An Application of Spatial Panel Dynamic Land Value Model
Authors: Rahma Fitriani, Eni Sumarminingsih, Suci Astutik
Abstract:
In a fast growing region, conversion of agricultural lands which are surrounded by new development sites will occur sooner than expected. This phenomenon has been experienced by many regions in Indonesia, especially the fringe of Jakarta (BoDeTaBek). Because the area adjoins Indonesia’s capital city, rapid conversion of land there is an unavoidable process. The land conversion expands spatially into the fringe regions, which were initially dominated by agricultural land or conservation sites. Without proper control or growth management, this activity will invite greater costs than benefits. The current land use is the use which maximizes its value. In order to maintain land for agricultural activity or conservation, some efforts are needed to keep the land value of this activity as high as possible. In this case, knowledge of the functional relationship between land value and its driving forces is necessary. In a fast growing region, development externalities are assumed to be the dominant driving force. Land value is the product of the past decision on its use leading to its value. It is also affected by the local characteristics and the observed surrounding land use (externalities) from the previous period. The effect of each factor on land value has dynamic and spatial dimensions; an empirical spatial dynamic land value model is therefore better suited to capture them. The model will be used to test and to estimate the extent of land use externalities on land value in the short run as well as in the long run. It serves as a basis to formulate an effective urban growth management policy. This study applies the model to the case of land value in the fringe of Jakarta Metropolitan. The model is used further to predict the effect of externalities on land value in the form of a prediction map. For the case of Jakarta’s fringe, there is evidence of the significance of neighborhood urban activity (negative externalities), the previous land value and local accessibility on land value. The effects accumulate dynamically over the years, but they only fully affect the land value after six years. Keywords: growth management, land use externalities, land value, spatial panel dynamic
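A generic form of the spatial dynamic panel specification referred to above is sketched below for orientation; this is a standard textbook formulation, not the authors' exact equation, and all symbols are assumptions.

```latex
% Assumed generic dynamic spatial panel land value model (illustrative only):
% V_{it}: land value of parcel i at time t, w_{ij}: spatial weights of neighboring parcels,
% X_{it}: local characteristics (e.g., accessibility), \mu_i: parcel-specific effect.
V_{it} = \tau V_{i,t-1} + \rho \sum_{j} w_{ij} V_{jt}
       + \eta \sum_{j} w_{ij} V_{j,t-1} + X_{it}\beta + \mu_i + \varepsilon_{it}
```

In such a specification the spatial lag terms carry the externality effect of the surrounding land use, while the time lags govern how quickly that effect accumulates, which is the sense in which the abstract speaks of short-run and long-run effects.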
Procedia PDF Downloads 256
370 Evaluation of Compatibility between Produced and Injected Waters and Identification of the Causes of Well Plugging in a Southern Tunisian Oilfield
Authors: Sonia Barbouchi, Meriem Samcha
Abstract:
Scale deposition during water injection into aquifer of oil reservoirs is a serious problem experienced in the oil production industry. One of the primary causes of scale formation and injection well plugging is mixing two waters which are incompatible. Considered individually, the waters may be quite stable at system conditions and present no scale problems. However, once they are mixed, reactions between ions dissolved in the individual waters may form insoluble products. The purpose of this study is to identify the causes of well plugging in a southern Tunisian oilfield, where fresh water has been injected into the producing wells to counteract the salinity of the formation waters and inhibit the deposition of halite. X-ray diffraction (XRD) mineralogical analysis has been carried out on scale samples collected from the blocked well. Two samples collected from both formation water and injected water were analysed using inductively coupled plasma atomic emission spectroscopy, ion chromatography and other standard laboratory techniques. The results of complete waters analysis were the typical input parameters, to determine scaling tendency. Saturation indices values related to CaCO3, CaSO4, BaSO4 and SrSO4 scales were calculated for the water mixtures at different share, under various conditions of temperature, using a computerized scale prediction model. The compatibility study results showed that mixing the two waters tends to increase the probability of barite deposition. XRD analysis confirmed the compatibility study results, since it proved that the analysed deposits consisted predominantly of barite with minor galena. At the studied temperatures conditions, the tendency for barite scale is significantly increasing with the increase of fresh water share in the mixture. The future scale inhibition and removal strategies to be implemented in the concerned oilfield are being derived in a large part from the results of the present study.Keywords: compatibility study, produced water, scaling, water injection
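As a minimal illustration of the scale-prediction step described above, the sketch below computes a saturation index SI = log10(IAP/Ksp) for barite across mixing ratios of injection and formation water; the ion concentrations and the solubility product are placeholder values, not the study's data.

```python
import numpy as np

# Placeholder ion concentrations (mol/L); real work would use measured activities
# from ICP-AES / ion chromatography and a temperature-dependent Ksp.
formation = {"Ba2+": 4e-4, "SO4_2-": 1e-5}
injection = {"Ba2+": 1e-7, "SO4_2-": 8e-4}
log_ksp_barite = -9.97  # assumed value near 25 degC

for share in np.linspace(0.0, 1.0, 6):  # fraction of injection water in the mixture
    ba = share * injection["Ba2+"] + (1 - share) * formation["Ba2+"]
    so4 = share * injection["SO4_2-"] + (1 - share) * formation["SO4_2-"]
    iap = ba * so4                       # ion activity product (concentrations used as proxy)
    si = np.log10(iap) - log_ksp_barite  # SI > 0 indicates a scaling tendency
    print(f"injection share {share:.1f}: SI(barite) = {si:+.2f}")
```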
Procedia PDF Downloads 166
369 Improvement in Blast Furnace Performance Using Softening - Melting Zone Profile Prediction Model at G Blast Furnace, Tata Steel Jamshedpur
Authors: Shoumodip Roy, Ankit Singhania, K. R. K. Rao, Ravi Shankar, M. K. Agarwal, R. V. Ramna, Uttam Singh
Abstract:
The productivity of a blast furnace and the quality of the hot metal produced are significantly dependent on the smoothness and stability of furnace operation. The permeability of the furnace bed, as well as the gas flow pattern, influences the steady control of process parameters. The softening – melting zone that is formed inside the furnace contributes largely in distribution of the gas flow and the bed permeability. A better shape of softening-melting zone enhances the performance of blast furnace, thereby reducing the fuel rates and improving furnace life. Therefore, predictive model of the softening- melting zone profile can be utilized to control and improve the furnace operation. The shape of softening-melting zone depends upon the physical and chemical properties of the agglomerates and iron ore charged in the furnace. The variations in the agglomerate proportion in the burden at G Blast furnace disturbed the furnace stability. During such circumstances, it was analyzed that a w-shape softening-melting zone profile was formed inside the furnace. The formation of w-shape zone resulted in poor bed permeability and non-uniform gas flow. There was a significant increase in the heat loss at the lower zone of the furnace. The fuel demand increased, and the huge production loss was incurred. Therefore, visibility of softening-melting zone profile was necessary in order to pro-actively optimize the process parameters and thereby to operate the furnace smoothly. Using stave temperatures, a model was developed that predicted the shape of the softening-melting zone inside the furnace. It was observed that furnace operated smoothly during inverse V-shape of the zone and vice-versa during w-shape. This model helped to control the heat loss, optimize the burden distribution and lower the fuel rate at G Blast Furnace, TSL Jamshedpur. As a result of furnace stabilization productivity increased by 10% and fuel rate reduced by 80 kg/thm. Details of the process have been discussed in this paper.Keywords: agglomerate, blast furnace, permeability, softening-melting
Procedia PDF Downloads 252
368 Role of P53, Ki67 and Cyclin A Immunohistochemical Assay in Predicting Wilms’ Tumor Mortality
Authors: Ahmed Atwa, Ashraf Hafez, Mohamed Abdelhameed, Adel Nabeeh, Mohamed Dawaba, Tamer Helmy
Abstract:
Introduction and Objective: Tumour staging and grading do not usually reflect the future behavior of Wilms' tumor (WT) regarding mortality. Therefore, in this study, P53, Ki67 and cyclin A immunohistochemistry were used in a trial to predict WT cancer-specific survival (CSS). Methods: In this nonconcurrent cohort study, patients' archived data, including age at presentation, gender, history, clinical examination and radiological investigations, were retrieved then the patients were reviewed at the outpatient clinic of a tertiary care center by history-taking, clinical examination and radiological investigations to detect the oncological outcome. Cases that received preoperative chemotherapy or died due to causes other than WT were excluded. Formalin-fixed, paraffin-embedded specimens obtained from the previously preserved blocks at the pathology laboratory were taken on positively charged slides for IHC with p53, Ki67 and cyclin A. All specimens were examined by an experienced histopathologist devoted to the urological practice and blinded to the patient's clinical findings. P53 and cyclin A staining were scored as 0 (no nuclear staining),1 (<10% nuclear staining), 2 (10-50% nuclear staining) and 3 (>50% nuclear staining). Ki67 proliferation index (PI) was graded as low, borderline and high. Results: Of the 75 cases, 40 (53.3%) were males and 35 (46.7%) were females, and the median age was 36 months (2-216). With a mean follow-up of 78.6±31 months, cancer-specific mortality (CSM) occurred in 15 (20%) and 11 (14.7%) patients, respectively. Kaplan-Meier curve was used for survival analysis, and groups were compared using the Log-rank test. Multivariate logistic regression and Cox regression were not used because only one variable (cyclin A) had shown statistical significance (P=.02), whereas the other significant factor (residual tumor) had few cases. Conclusions: Cyclin A IHC should be considered as a marker for the prediction of WT CSS. Prospective studies with a larger sample size are needed.Keywords: wilms’ tumour, nephroblastoma, urology, survival
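A minimal sketch of the survival comparison described above (Kaplan-Meier estimation compared with the log-rank test), using the lifelines library and synthetic follow-up data rather than the study's patient records:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Synthetic follow-up times (months) and event flags for two cyclin A staining groups.
t_low, e_low = rng.exponential(120, 40), rng.integers(0, 2, 40)
t_high, e_high = rng.exponential(60, 35), rng.integers(0, 2, 35)

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="cyclin A low")
print(kmf.median_survival_time_)

# Log-rank comparison between the two staining groups
result = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {result.p_value:.3f}")
```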
Procedia PDF Downloads 67
367 Fuzzy Logic Classification Approach for Exponential Data Set in Health Care System for Prediction of Future Data
Authors: Manish Pandey, Gurinderjit Kaur, Meenu Talwar, Sachin Chauhan, Jagbir Gill
Abstract:
Health-care management systems are of great interest because they provide simple and fast management of all aspects relating to a patient, not only medical ones. Moreover, there are more and more pathologies in which diagnosis and treatment can only be carried out using medical imaging techniques. With an ever-increasing prevalence, medical images are directly acquired in, or converted into, digital form for storage as well as subsequent retrieval and processing. Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from the fields of statistics, machine learning and database management systems. Forecasting is a prediction of what will occur in the future, and it is an uncertain process. Owing to this uncertainty, the accuracy of a forecast is as important as the outcome predicted by forecasting the independent variables. Forecast control should be used to establish whether the accuracy of the forecast is within satisfactory limits. Fuzzy regression methods have commonly been used to develop consumer preference models that correlate engineering characteristics with consumer preferences for a new product; such preference models provide a platform on which product developers can decide the engineering characteristics needed to satisfy consumer preferences before developing the product. Recent analysis shows that these fuzzy regression methods are commonly used to model customer preferences. We propose testing the strength of an exponential regression model against a linear regression model. Keywords: health-care management systems, fuzzy regression, data mining, forecasting, fuzzy membership function
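A minimal sketch of the comparison the abstract proposes, fitting an exponential model and a linear model to the same data and comparing goodness of fit; the data here are synthetic, not the health-care data set used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2.0 * np.exp(0.25 * x) + rng.normal(0, 0.5, x.size)  # synthetic, roughly exponential data

# Exponential model y = a * exp(b * x)
(a, b), _ = curve_fit(lambda x, a, b: a * np.exp(b * x), x, y, p0=(1.0, 0.1))
ss_res_exp = np.sum((y - a * np.exp(b * x)) ** 2)

# Linear model y = m * x + c
lin = linregress(x, y)
ss_res_lin = np.sum((y - (lin.slope * x + lin.intercept)) ** 2)

ss_tot = np.sum((y - y.mean()) ** 2)
print(f"R^2 exponential: {1 - ss_res_exp / ss_tot:.3f}")
print(f"R^2 linear:      {1 - ss_res_lin / ss_tot:.3f}")
```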
Procedia PDF Downloads 279
366 Achieving Process Stability through Automation and Process Optimization at H Blast Furnace Tata Steel, Jamshedpur
Authors: Krishnendu Mukhopadhyay, Subhashis Kundu, Mayank Tiwari, Sameeran Pani, Padmapal, Uttam Singh
Abstract:
Blast Furnace is a counter current process where burden descends from top and hot gases ascend from bottom and chemically reduce iron oxides into liquid hot metal. One of the major problems of blast furnace operation is the erratic burden descent inside furnace. Sometimes this problem is so acute that burden descent stops resulting in Hanging and instability of the furnace. This problem is very frequent in blast furnaces worldwide and results in huge production losses. This situation becomes more adverse when blast furnaces are operated at low coke rate and high coal injection rate with adverse raw materials like high alumina ore and high coke ash. For last three years, H-Blast Furnace Tata Steel was able to reduce coke rate from 450 kg/thm to 350 kg/thm with an increase in coal injection to 200 kg/thm which are close to world benchmarks and expand profitability. To sustain this regime, elimination of irregularities of blast furnace like hanging, channeling, and scaffolding is very essential. In this paper, sustaining of zero hanging spell for consecutive three years with low coke rate operation by improvement in burden characteristics, burden distribution, changes in slag regime, casting practices and adequate automation of the furnace operation has been illustrated. Models have been created to comprehend and upgrade the blast furnace process understanding. A model has been developed to predict the process of maintaining slag viscosity in desired range to attain proper burden permeability. A channeling prediction model has also been developed to understand channeling symptoms so that early actions can be initiated. The models have helped to a great extent in standardizing the control decisions of operators at H-Blast Furnace of Tata Steel, Jamshedpur and thus achieving process stability for last three years.Keywords: hanging, channelling, blast furnace, coke
Procedia PDF Downloads 195
365 Modelling the Impact of Installation of Heat Cost Allocators in District Heating Systems Using Machine Learning
Authors: Danica Maljkovic, Igor Balen, Bojana Dalbelo Basic
Abstract:
Following the EU Directive on Energy Efficiency, specifically Article 9, individual metering in district heating systems has to be introduced by the end of 2016. These provisions have been implemented in the member states’ legal frameworks; Croatia is one of these states. The directive allows installation of both heat metering devices and heat cost allocators. Mainly due to poor communication and public relations, a false image was created among the general public that heat cost allocators are devices that save energy. Although this notion is wrong, the aim of this work is to develop a model that would precisely express the influence of installing heat cost allocators on potential energy savings in each unit within multifamily buildings. At the same time, in recent years, machine learning has gained wider application in various fields, as it has proven to give good results in cases where large amounts of data have to be processed with the aim of recognizing patterns and correlations among the relevant parameters, as well as in cases where the problem is too complex for human intelligence to solve. One machine learning method, the decision tree, has achieved an accuracy of over 92% in predicting overall building consumption. In this paper, machine learning algorithms will be used to isolate the sole impact of the installation of heat cost allocators on a single building in multifamily houses connected to district heating systems. Special emphasis will be given to regression analysis, logistic regression, support vector machines, decision trees and the random forest method. Keywords: district heating, heat cost allocator, energy efficiency, machine learning, decision tree model, regression analysis, logistic regression, support vector machines, random forest method
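As an illustration of the decision-tree approach mentioned above, the sketch below trains a regression tree on synthetic consumption data; the feature names and values are invented placeholders, not the Croatian billing data used in the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Placeholder features: heated area (m2), outdoor temperature (degC),
# allocator installed (0/1), building age (years).
X = np.column_stack([
    rng.uniform(40, 120, n),
    rng.uniform(-10, 15, n),
    rng.integers(0, 2, n),
    rng.uniform(5, 60, n),
])
# Synthetic annual heat consumption (kWh) with a modest allocator effect.
y = 80 * X[:, 0] - 150 * X[:, 1] - 600 * X[:, 2] + 20 * X[:, 3] + rng.normal(0, 500, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {tree.score(X_test, y_test):.2f}")
```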
Procedia PDF Downloads 249
364 Examining Predictive Coding in the Hierarchy of Visual Perception in the Autism Spectrum Using Fast Periodic Visual Stimulation
Authors: Min L. Stewart, Patrick Johnston
Abstract:
Predictive coding has been proposed as a general explanatory framework for understanding the neural mechanisms of perception. As such, an underweighting of perceptual priors has been hypothesised to underpin a range of differences in inferential and sensory processing in autism spectrum disorders. However, empirical evidence to support this has not been well established. The present study uses an electroencephalography paradigm involving changes of facial identity and person category (actors etc.) to explore how levels of autistic traits (AT) affect predictive coding at multiple stages in the visual processing hierarchy. The study uses a rapid serial presentation of faces, with hierarchically structured sequences involving both periodic and aperiodic repetitions of different stimulus attributes (i.e., person identity and person category) in order to induce contextual expectations relating to these attributes. It investigates two main predictions: (1) significantly larger and late neural responses to change of expected visual sequences in high-relative to low-AT, and (2) significantly reduced neural responses to violations of contextually induced expectation in high- relative to low-AT. Preliminary frequency analysis data comparing high and low-AT show greater and later event-related-potentials (ERPs) in occipitotemporal areas and prefrontal areas in high-AT than in low-AT for periodic changes of facial identity and person category but smaller ERPs over the same areas in response to aperiodic changes of identity and category. The research advances our understanding of how abnormalities in predictive coding might underpin aberrant perceptual experience in autism spectrum. This is the first stage of a research project that will inform clinical practitioners in developing better diagnostic tests and interventions for people with autism.Keywords: hierarchical visual processing, face processing, perceptual hierarchy, prediction error, predictive coding
Procedia PDF Downloads 111
363 Numerical Modeling and Prediction of Nanoscale Transport Phenomena in Vertically Aligned Carbon Nanotube Catalyst Layers by the Lattice Boltzmann Simulation
Authors: Seungho Shin, Keunwoo Choi, Ali Akbar, Sukkee Um
Abstract:
In this study, the nanoscale transport properties and catalyst utilization of vertically aligned carbon nanotube (VACNT) catalyst layers are computationally predicted by the three-dimensional lattice Boltzmann simulation based on the quasi-random nanostructural model in pursuance of fuel cell catalyst performance improvement. A series of catalyst layers are randomly generated with statistical significance at the 95% confidence level to reflect the heterogeneity of the catalyst layer nanostructures. The nanoscale gas transport phenomena inside the catalyst layers are simulated by the D3Q19 (i.e., three-dimensional, 19 velocities) lattice Boltzmann method, and the corresponding mass transport characteristics are mathematically modeled in terms of structural properties. Considering the nanoscale reactant transport phenomena, a transport-based effective catalyst utilization factor is defined and statistically analyzed to determine the structure-transport influence on catalyst utilization. The tortuosity of the reactant mass transport path of VACNT catalyst layers is directly calculated from the streaklines. Subsequently, the corresponding effective mass diffusion coefficient is statistically predicted by applying the pre-estimated tortuosity factors to the Knudsen diffusion coefficient in the VACNT catalyst layers. The statistical estimation results clearly indicate that the morphological structures of VACNT catalyst layers reduce the tortuosity of reactant mass transport path when compared to conventional catalyst layer and significantly improve consequential effective mass diffusion coefficient of VACNT catalyst layer. Furthermore, catalyst utilization of the VACNT catalyst layer is substantially improved by enhanced mass diffusion and electric current paths despite the relatively poor interconnections of the ion transport paths.Keywords: Lattice Boltzmann method, nano transport phenomena, polymer electrolyte fuel cells, vertically aligned carbon nanotube
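The effective-diffusivity step mentioned above (applying pre-estimated tortuosity factors to the Knudsen diffusion coefficient) is commonly written with the standard relations below; these are given for orientation and are not necessarily the exact expressions used by the authors.

```latex
% Standard Knudsen and effective diffusion relations (assumed form, for illustration):
D_{Kn} = \frac{d_p}{3}\sqrt{\frac{8RT}{\pi M}}, \qquad
D_{eff} = \frac{\varepsilon}{\tau}\,D_{Kn}
% d_p: characteristic pore diameter, M: molar mass of the reactant gas,
% \varepsilon: porosity, \tau: tortuosity factor of the mass transport path.
```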
Procedia PDF Downloads 201
362 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then subsequently applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict glass dissolution model parameters behavior, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
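A minimal sketch of the ensemble-regression step described above, predicting viscosity from composition and temperature with a random forest; the feature names and data are synthetic placeholders rather than the industrial glass data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 400
# Placeholder inputs: oxide fractions (SiO2, B2O3, Na2O) and temperature (K).
X = np.column_stack([
    rng.uniform(0.40, 0.60, n),
    rng.uniform(0.10, 0.25, n),
    rng.uniform(0.05, 0.20, n),
    rng.uniform(1100, 1500, n),
])
# Synthetic log10(viscosity) decreasing with temperature and alkali content.
y = 12 - 0.006 * (X[:, 3] - 1100) - 15 * X[:, 2] + 5 * X[:, 0] + rng.normal(0, 0.2, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```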
Procedia PDF Downloads 115
361 Prediction of Seismic Damage Using Scalar Intensity Measures Based on Integration of Spectral Values
Authors: Konstantinos G. Kostinakis, Asimina M. Athanatopoulou
Abstract:
A key issue in seismic risk analysis within the context of Performance-Based Earthquake Engineering is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. The assessment of the seismic performance strongly depends on the choice of the seismic Intensity Measure (IM), which quantifies the characteristics of a ground motion that are important to the nonlinear structural response. Several conventional IMs of ground motion have been used to estimate their damage potential to structures. Yet, none of them has been proved to be able to predict adequately the seismic damage. Therefore, alternative, scalar intensity measures, which take into account not only ground motion characteristics but also structural information have been proposed. Some of these IMs are based on integration of spectral values over a range of periods, in an attempt to account for the information that the shape of the acceleration, velocity or displacement spectrum provides. The adequacy of a number of these IMs in predicting the structural damage of 3D R/C buildings is investigated in the present paper. The investigated IMs, some of which are structure specific and some are nonstructure-specific, are defined via integration of spectral values. To achieve this purpose three symmetric in plan R/C buildings are studied. The buildings are subjected to 59 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along the structural axes. The response is determined by nonlinear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of the aforementioned seismic damage measures are correlated with seven scalar ground motion IMs. The comparative assessment of the results revealed that the structure-specific IMs present higher correlation with the seismic damage of the three buildings. However, the adequacy of the IMs for estimation of the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.Keywords: damage measures, bidirectional excitation, spectral based IMs, R/C buildings
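The "integration of spectral values" idea can be illustrated with a short sketch that averages pseudo-spectral acceleration over a period band around the fundamental period; this generic averaged-Sa measure is given only as an example and is not necessarily one of the seven IMs examined in the paper.

```python
import numpy as np

def averaged_sa(periods, sa, t1, lower=0.2, upper=1.5):
    """Geometric mean of spectral acceleration over the band [lower*T1, upper*T1]."""
    band = (periods >= lower * t1) & (periods <= upper * t1)
    return np.exp(np.mean(np.log(sa[band])))

# Placeholder response spectrum (period in s, Sa in g) for one ground motion.
periods = np.linspace(0.05, 4.0, 200)
sa = 0.8 * np.exp(-1.2 * periods) + 0.05
t1 = 0.9  # assumed fundamental period of the building (s)

print(f"Sa(T1)        = {np.interp(t1, periods, sa):.3f} g")
print(f"averaged Sa   = {averaged_sa(periods, sa, t1):.3f} g")
```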
Procedia PDF Downloads 328
360 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis
Authors: Alexander A. Tokmakov
Abstract:
Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the systems of protein synthesis that are currently used. The factors determining expression success are poorly understood. At present, the vast volume of data is accumulated in cell-free expression databases. It makes possible comprehensive bioinformatics analysis and identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at identification of multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation and highlight major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity, presence of signal sequences, etc., are mostly related to protein solubility, whereas the others, such as protein length, number of disulfide bonds, content of secondary structure, etc., affect mainly the expression propensity. We also demonstrated that amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modifications. The correlations revealed in this study provide a plethora of important insights into protein folding and rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins
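A minimal sketch of the property-calculation and correlation step described above, using Biopython's ProtParam to compute sequence features and a point-biserial correlation against a binary expression outcome; the sequences and outcomes are invented examples, not the cell-free expression database.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from scipy.stats import pointbiserialr

# Invented example sequences and binary expression outcomes (1 = successfully expressed).
data = [
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 1),
    ("MWWFFLLLCCCPPPGGGWWWFFLLLAAAIII", 0),
    ("MSTNPKPQRKTKRNTNRRPQDVKFPGG", 1),
    ("MGWWLLLLVVVFFFIIIAAACCCPPPWW", 0),
]

gravy, expressed = [], []
for seq, ok in data:
    pa = ProteinAnalysis(seq)
    gravy.append(pa.gravy())              # hydrophobicity (GRAVY score)
    expressed.append(ok)
    print(f"pI={pa.isoelectric_point():.2f}  GRAVY={pa.gravy():+.2f}  expressed={ok}")

r, p = pointbiserialr(expressed, gravy)   # correlation of a feature with the binary outcome
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```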
Procedia PDF Downloads 419
359 Revolutionizing Project Management: A Comprehensive Review of Artificial Intelligence and Machine Learning Applications for Smarter Project Execution
Authors: Wenzheng Fu, Yue Fu, Zhijiang Dong, Yujian Fu
Abstract:
The integration of artificial intelligence (AI) and machine learning (ML) into project management is transforming how engineering projects are executed, monitored, and controlled. This paper provides a comprehensive survey of AI and ML applications in project management, systematically categorizing their use in key areas such as project data analytics, monitoring, tracking, scheduling, and reporting. As project management becomes increasingly data-driven, AI and ML offer powerful tools for improving decision-making, optimizing resource allocation, and predicting risks, leading to enhanced project outcomes. The review highlights recent research that demonstrates the ability of AI and ML to automate routine tasks, provide predictive insights, and support dynamic decision-making, which in turn increases project efficiency and reduces the likelihood of costly delays. This paper also examines the emerging trends and future opportunities in AI-driven project management, such as the growing emphasis on transparency, ethical governance, and data privacy concerns. The research suggests that AI and ML will continue to shape the future of project management by driving further automation and offering intelligent solutions for real-time project control. Additionally, the review underscores the need for ongoing innovation and the development of governance frameworks to ensure responsible AI deployment in project management. The significance of this review lies in its comprehensive analysis of AI and ML’s current contributions to project management, providing valuable insights for both researchers and practitioners. By offering a structured overview of AI applications across various project phases, this paper serves as a guide for the adoption of intelligent systems, helping organizations achieve greater efficiency, adaptability, and resilience in an increasingly complex project management landscape.Keywords: artificial intelligence, decision support systems, machine learning, project management, resource optimization, risk prediction
Procedia PDF Downloads 21
358 Chemometric Regression Analysis of Radical Scavenging Ability of Kombucha Fermented Kefir-Like Products
Authors: Strahinja Kovacevic, Milica Karadzic Banjac, Jasmina Vitas, Stefan Vukmanovic, Radomir Malbasa, Lidija Jevric, Sanja Podunavac-Kuzmanovic
Abstract:
The present study deals with chemometric regression analysis of quality parameters and the radical scavenging ability of kombucha fermented kefir-like products obtained with winter savory (WS), peppermint (P), stinging nettle (SN) and wild thyme tea (WT) kombucha inoculums. Each analyzed sample was described by milk fat content (MF, %), total unsaturated fatty acids content (TUFA, %), monounsaturated fatty acids content (MUFA, %), polyunsaturated fatty acids content (PUFA, %), the free radical scavenging ability (RSA_DPPH, % and RSA_•OH, %) and pH values measured after each hour from the start until the end of fermentation. The aim of the conducted regression analysis was to establish chemometric models which can predict the radical scavenging ability (RSA_DPPH, % and RSA_•OH, %) of the samples by correlating it with the MF, TUFA, MUFA, PUFA and the pH value at the beginning, in the middle and at the end of the fermentation process, which lasted between 11 and 17 hours, until a pH value of 4.5 was reached. The analysis was carried out applying univariate linear regression (ULR) and multiple linear regression (MLR) methods on the raw data and on the data standardized by the min-max normalization method. The obtained models were characterized by very limited prediction power (poor cross-validation parameters) and weak statistical characteristics. Based on the conducted analysis, it can be concluded that the resulting radical scavenging ability cannot be precisely predicted only on the basis of MF, TUFA, MUFA, PUFA content and pH values; other quality parameters should be considered and included in further modeling. This study is based upon work from the project: Kombucha beverages production using alternative substrates from the territory of the Autonomous Province of Vojvodina, 142-451-2400/2019-03, supported by the Provincial Secretariat for Higher Education and Scientific Research of AP Vojvodina. Keywords: chemometrics, regression analysis, kombucha, quality control
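A minimal sketch of the MLR step described above, with min-max normalization and leave-one-out cross-validation; the numbers are placeholders, not the measured kombucha-kefir data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
n = 24
# Placeholder predictors: MF, TUFA, MUFA, PUFA (%), and pH at the end of fermentation.
X = np.column_stack([
    rng.uniform(2.8, 3.6, n),
    rng.uniform(25, 35, n),
    rng.uniform(20, 28, n),
    rng.uniform(3, 8, n),
    rng.uniform(4.4, 4.6, n),
])
y = rng.uniform(20, 60, n)  # placeholder RSA_DPPH (%) values

model = make_pipeline(MinMaxScaler(), LinearRegression())
scores = cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="neg_mean_squared_error")
print(f"leave-one-out cross-validated MSE: {-scores.mean():.2f}")
```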
Procedia PDF Downloads 142
357 Establishment of Decision Support Center for Managing Natural Hazard Consequence in Kuwait
Authors: Abdullah Alenezi, Mane Alsudrawi, Rafat Misak
Abstract:
Kuwait is faced with a potentially wide and harmful range of both natural and anthropogenic hazardous events such as dust storms, floods, fires, nuclear accidents, earthquakes, oil spills, tsunamis and other disasters. Because Kuwait can be highly vulnerable to these complex environmental risks, an up-to-date and in-depth understanding of their typology, genesis, and impact on Kuwaiti society is needed. Adequate anticipation and management of environmental crises further require a comprehensive decision support system that benefits decision makers and bridges the gap between (technical) risk understanding and public action. For that purpose, the Kuwait Institute for Scientific Research (KISR) intends to establish a decision support center for the management of environmental crises in Kuwait. The center will support policy makers, stakeholders and national committees with technical information that helps them efficiently and effectively assess, monitor and manage environmental disasters using decision support tools. These tools will build on state-of-the-art quantification and visualization techniques, such as remote sensing information, Geographical Information Systems (GIS), simulation and prediction models, early warning systems, etc. The center is conceived as a central facility which will be designed, operated and managed by KISR in coordination with the national authorities and decision makers of the country. Our vision is that by 2035 the center will be recognized as a leading national source of scientific advice on national risk management in Kuwait and will build unity of effort among Kuwait’s institutions, government agencies, and public and private organizations through the provision and sharing of information. The project team now focuses on capacity building through upgrading KISR facilities, developing manpower and building strong collaboration with international alliances. Keywords: decision support, environment, hazard, Kuwait
Procedia PDF Downloads 313
356 Effect of Steam Explosion of Crop Residues on Chemical Compositions and Efficient Energy Values
Authors: Xin Wu, Yongfeng Zhao, Qingxiang Meng
Abstract:
In China, only a low proportion of crop residues is used as feedstuff because of their poor palatability and low digestibility. Steam explosion is a physical and chemical feed processing technology which has great potential to improve the palatability and digestibility of crop residues. To investigate the effect of steam explosion on chemical compositions and efficient energy values, crop residues (rice straw, wheat straw and maize stover) were processed by steam explosion (steam temperature 120-230°C, steam pressure 2-26 kg/cm², 40 min). Steam-exploded crop residues were regarded as treatment groups and untreated ones as control groups; nutritive compositions were analyzed and effective energy values were calculated with the prediction models in INRA (1988, 2010) for both groups. Results indicated that the interaction between treatment and variety has a significant effect on the chemical compositions of crop residues. Steam explosion treatment of crop residues decreased neutral detergent fiber (NDF) significantly (P < 0.01), and compared with the untreated material, the NDF content of rice straw, wheat straw, and maize stover was lowered by 21.46%, 32.11% and 28.34%, respectively. Acid detergent lignin (ADL) of crop residues increased significantly after steam explosion (P < 0.05). The content of crude protein (CP), ether extract (EE) and ash increased significantly after steam explosion (P < 0.05). Moreover, the predicted effective energy values of each steam-exploded residue were higher than those of the untreated ones. The digestible energy (DE), metabolizable energy (ME), net energy for maintenance (NEm) and net energy for gain (NEg) of steam-exploded rice straw were 3.06, 2.48, 1.48 and 0.29 MJ/kg, respectively, and increased by 46.21%, 46.25%, 49.56% and 110.92% compared with the untreated ones (P < 0.05). Correspondingly, the energy values of steam-exploded wheat straw were 2.18, 1.76, 1.03 and 0.15 MJ/kg, which were 261.78%, 261.29%, 274.59% and 1014.69% greater than those of untreated wheat straw (P < 0.05). The above predicted energy values of steam-exploded maize stover were 5.28, 4.30, 2.67 and 0.82 MJ/kg and increased by 109.58%, 107.71%, 122.57% and 332.64% compared with the raw material (P < 0.05). In conclusion, steam explosion treatment could significantly decrease the NDF content and increase the ADL, CP, EE, ash content and effective energy values of crop residues. The effect of steam explosion was much more obvious for wheat straw than for the other two kinds of residues under the same conditions. Keywords: chemical compositions, crop residues, efficient energy values, steam explosion
Procedia PDF Downloads 250
355 Molecular Characterization of Ovine Herpesvirus 2 Strains Based on Selected Glycoprotein and Tegument Genes
Authors: Fulufhelo Amanda Doboro, Kgomotso Sebeko, Stephen Njiro, Moritz Van Vuuren
Abstract:
Ovine herpesvirus 2 (OvHV-2) genome obtained from the lymphopblastoid cell line of a BJ1035 cow was recently sequenced in the United States of America (USA). Information on the sequences of OvHV-2 genes obtained from South African strains from bovine or other African countries and molecular characterization of OvHV-2 is not documented. Present investigation provides information on the nucleotide and derived amino acid sequences and genetic diversity of Ov 7, Ov 8 ex2, ORF 27 and ORF 73 genes, of these genes from OvHV-2 strains circulating in South Africa. Gene-specific primers were designed and used for PCR of DNA extracted from 42 bovine blood samples that previously tested positive for OvHV-2. The expected PCR products of 495 bp, 253 bp, 890 bp and 1632 bp respectively for Ov 7, Ov 8 ex2, ORF 27 and ORF 73 genes were sequenced and multiple sequence analysis done on the selected regions of the sequenced PCR products. Two genotypes for ORF 27 and ORF 73 gene sequences, and three genotypes for Ov 7 and Ov 8 ex2 gene sequences were identified, and similar groupings for the derived amino acid sequences were obtained for each gene. Nucleotide and amino acid sequence variations that led to the identification of the different genotypes included SNPs, deletions and insertions. Sequence analysis of Ov 7 and ORF 27 genes revealed variations that distinguished between sequences from SA and reference OvHV-2 strains. The implication of geographic origin among SA sequences was difficult to evaluate because of random distribution of genotypes in the different provinces, for each gene. However, socio-economic factors such as migration of people with animals, or transportation of animals for agricultural or business use from one province to another are most likely to be responsible for this observation. The sequence variations observed in this study have no impact on the antibody binding activities of glycoproteins encoded by Ov 7, Ov 8 ex2 and ORF 27 genes, as determined by prediction of the presence of B cell epitopes using BepiPred 1.0. The findings of this study will be used for selection of gene candidates for the development of diagnostic assays and vaccine development as well.Keywords: amino acid, genetic diversity, genes, nucleotide
Procedia PDF Downloads 489
354 Enamel Structure Defect, the Rare Dental Anomaly: Isolated or Syndromic
Authors: Nehal F. Hassib, Rasha M. El Hossini, Inas M. Sayed, Maha R. Abouzeid, Nermeen A. Bayoumi, Aida M. Mosaad, Lamia K. Gadallah, Moataz Bellah A. T. Abdelbari, Heba A. El-Sayed, Hasnaa Elbendary, Ghada Abdel-Salam, Maha Zaki, Mostafa I. Mostafa, Mohamed S. Abdel-Hamid
Abstract:
Enamel, the outermost layer of the tooth crown, is the hardest dental tissue and serves as a protective barrier. Amelogenesis, the process of enamel formation, is regulated by multiple genes to ensure normal, defect-free enamel. Defective enamel manifests as hypoplasia or as amelogenesis imperfecta (AI), which may occur in isolation or as part of a syndrome. This study presents 29 patients from 18 unrelated families (16 females and 13 males) who exhibited distinctive enamel abnormalities. We conducted thorough clinical examinations and requested laboratory and radiological investigations. Blood samples were collected for molecular analysis, utilizing a targeted panel for known AI variants and whole exome sequencing for unknown variants. Eleven variants linked to enamel anomalies were identified: four genes associated with isolated AI (WDR72, ACP4, SLC24A4, and FAM83H) and seven associated with syndromic forms, including enamel renal syndrome (FAM20A), tricho-dento-osseous syndrome (DLX3), Jalili syndrome (CNNM4), and others linked to neurological and mitochondrial disorders, skeletal dysplasia, and peroxisome disorders. Abnormal oral and dental phenotypes in individuals may indicate serious inherited disorders. Enamel defects have significant implications for aesthetics, function, and patients' psychological well-being. Dental examination, alongside clinical and molecular investigations, is crucial for the accurate diagnosis and prediction of inherited conditions.Keywords: amelogenesis imperfecta, enamel defect, Enamel renal syndrome, DLX3, Jalili syndrome, WDR72, FAM83H, whole exome sequencing
Procedia PDF Downloads 24
353 Geomorphometric Analysis of the Hydrologic and Topographic Parameters of the Katsina-Ala Drainage Basin, Benue State, Nigeria
Authors: Oyatayo Kehinde Taofik, Ndabula Christopher
Abstract:
Drainage basins are a central theme in the green economy. The rising challenges in flooding, erosion or sediment transport and sedimentation threaten the green economy. This has led to increasing emphasis on quantitative analysis of drainage basin parameters for better understanding, estimation and prediction of fluvial responses and, thus associated hazards or disasters. This can be achieved through direct measurement, characterization, parameterization, or modeling. This study applied the Remote Sensing and Geographic Information System approach of parameterization and characterization of the morphometric variables of Katsina – Ala basin using a 30 m resolution Shuttle Radar Topographic Mission (SRTM) Digital Elevation Model (DEM). This was complemented with topographic and hydrological maps of Katsina-Ala on a scale of 1:50,000. Linear, areal and relief parameters were characterized. The result of the study shows that Ala and Udene sub-watersheds are 4th and 5th order basins, respectively. The stream network shows a dendritic pattern, indicating homogeneity in texture and a lack of structural control in the study area. Ala and Udene sub-watersheds have the following values for elongation ratio, circularity ratio, form factor and relief ratio: 0.48 / 0.39 / 0.35/ 9.97 and 0.40 / 0.35 / 0.32 / 6.0. They also have the following values for drainage texture and ruggedness index of 0.86 / 0.011 and 1.57 / 0.016. The study concludes that the two sub-watersheds are elongated, suggesting that they are susceptible to erosion and, thus higher sediment load in the river channels, which will dispose the watersheds to higher flood peaks. The study also concludes that the sub-watersheds have a very coarse texture, with good permeability of subsurface materials and infiltration capacity, which significantly recharge the groundwater. The study recommends that efforts should be put in place by the Local and State Governments to reduce the size of paved surfaces in these sub-watersheds by implementing a robust agroforestry program at the grass root level.Keywords: erosion, flood, mitigation, morphometry, watershed
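For reference, the shape indices reported above are commonly computed with the standard Horton/Miller/Schumm formulas sketched below; the basin area, perimeter, basin length and relief used here are placeholder inputs, not the Katsina-Ala values.

```python
import math

def morphometry(area_km2, perimeter_km, basin_length_km, relief_km):
    """Standard basin shape indices (Horton 1932, Miller 1953, Schumm 1956)."""
    form_factor = area_km2 / basin_length_km ** 2
    circularity = 4 * math.pi * area_km2 / perimeter_km ** 2
    elongation = (2 / basin_length_km) * math.sqrt(area_km2 / math.pi)
    relief_ratio = relief_km / basin_length_km
    return form_factor, circularity, elongation, relief_ratio

# Placeholder basin measurements, for illustration only.
ff, rc, re, rr = morphometry(area_km2=1250.0, perimeter_km=210.0,
                             basin_length_km=60.0, relief_km=0.45)
print(f"form factor={ff:.2f}, circularity={rc:.2f}, elongation={re:.2f}, relief ratio={rr:.4f}")
```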
Procedia PDF Downloads 86
352 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control
Authors: Sung-Jun Yoo, Kazuhide Ito
Abstract:
In this study, a comprehensive computer simulated person (CSP) that integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway) was applied for the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which combines a physiologically-based pharmacokinetic (PBPK) model and an unsteady thermoregulation model, for high-accuracy analysis targeting the micro-climate around the human body and the respiratory area. This comprehensive method can estimate not only the contaminant inhalation but also the constant interaction in the contaminant transfer between the indoor space, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone for health risk assessment. This study focused on the usage of the CSP as an air/thermal quality sensor indoors, which means the application of the comprehensive model for the assessment of IAQ and thermal environmental quality. Demonstrative analysis was performed in order to examine the applicability of the comprehensive model to the heating, ventilation and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room which has dimensions of 3 m × 3 m × 3 m. Formaldehyde, which is generated from the floor material, was assumed as the target contaminant, and flow field, sensible/latent heat and contaminant transfer analyses in the indoor space were conducted by using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system. Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality
Procedia PDF Downloads 361
351 Prediction of the Dark Matter Distribution and Fraction in Individual Galaxies Based Solely on Their Rotation Curves
Authors: Ramzi Suleiman
Abstract:
Recently, the author proposed an observationally-based relativity theory termed information relativity theory (IRT). The theory is simple and is based only on basic principles, with no prior axioms and no free parameters. For the case of a body of mass in uniform rectilinear motion relative to an observer, the theory transformations uncovered a matter-dark matter duality, which prescribes that the sum of the densities of the body's baryonic matter and dark matter, as measured by the observer, is equal to the body's matter density at rest. It was shown that the theory transformations were successful in predicting several important phenomena in small particle physics, quantum physics, and cosmology. This paper extends the theory transformations to the cases of rotating disks and spheres. The resulting transformations for a rotating disk are utilized to derive predictions of the radial distributions of matter and dark matter densities in rotationally supported galaxies based solely on their observed rotation curves. It is also shown that for galaxies with flattening curves, good approximations of the radial distributions of matter and dark matter and of the dark matter fraction could be obtained from one measurable scale radius. Test of the model on five galaxies, chosen randomly from the SPARC database, yielded impressive predictions. The rotation curves of all the investigated galaxies emerged as accurate traces of the predicted radial density distributions of their dark matter. This striking result raises an intriguing physical explanation of gravity in galaxies, according to which it is the proximal drag of the stars and gas in the galaxy by its rotating dark matter web. We conclude by alluding briefly to the application of the proposed model to stellar systems and black holes. This study also hints at the potential of the discovered matter-dark matter duality in fixing the standard model of elementary particles in a natural manner without the need for hypothesizing about supersymmetric particles.Keywords: dark matter, galaxies rotation curves, SPARC, rotating disk
Procedia PDF Downloads 78
350 Electroencephalography Correlates of Memorability While Viewing Advertising Content
Authors: Victor N. Anisimov, Igor E. Serov, Ksenia M. Kolkova, Natalia V. Galkina
Abstract:
The problem of memorability of the advertising content is closely connected with the key issues of neuromarketing. The memorability of the advertising content contributes to the marketing effectiveness of the promoted product. Significant directions of studying the phenomenon of memorability are the memorability of the brand (detected through the memorability of the logo) and the memorability of the product offer (detected through the memorization of dynamic audiovisual advertising content - commercial). The aim of this work is to reveal the predictors of memorization of static and dynamic audiovisual stimuli (logos and commercials). An important direction of the research was revealing differences in psychophysiological correlates of memorability between static and dynamic audiovisual stimuli. We assumed that static and dynamic images are perceived in different ways and may have a difference in the memorization process. Objective methods of recording psychophysiological parameters while watching static and dynamic audiovisual materials are well suited to achieve the aim. The electroencephalography (EEG) method was performed with the aim of identifying correlates of the memorability of various stimuli in the electrical activity of the cerebral cortex. All stimuli (in the groups of statics and dynamics separately) were divided into 2 groups – remembered and not remembered based on the results of the questioning method. The questionnaires were filled out by survey participants after viewing the stimuli not immediately, but after a time interval (for detecting stimuli recorded through long-term memorization). Using statistical method, we developed the classifier (statistical model) that predicts which group (remembered or not remembered) stimuli gets, based on psychophysiological perception. The result of the statistical model was compared with the results of the questionnaire. Conclusions: Predictors of the memorability of static and dynamic stimuli have been identified, which allows prediction of which stimuli will have a higher probability of remembering. Further developments of this study will be the creation of stimulus memory model with the possibility of recognizing the stimulus as previously seen or new. Thus, in the process of remembering the stimulus, it is planned to take into account the stimulus recognition factor, which is one of the most important tasks for neuromarketing.Keywords: memory, commercials, neuromarketing, EEG, branding
Procedia PDF Downloads 251
349 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation
Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov
Abstract:
The problem of ensuring the integrity of VVER-type reactor equipment is now highly topical in connection with the justification of the safety of NPP units and the extension of their service life to 60 years and more. First of all, it concerns old units with VVER-440 and VVER-1000. The justification of VVER equipment integrity depends on the reliability of the estimation of the degree of equipment damage. One of the mandatory requirements ensuring the reliability of such estimation, as well as of the evaluation of VVER equipment lifetime, is the monitoring of the equipment radiation loading parameters. In this connection, there is the problem of justifying the normative parameters used for the estimation of pressure vessel metal embrittlement, such as the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacement per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, neutron fluence F(E > 0.5 MeV) is the radiation exposure parameter used in steel embrittlement prediction under neutron irradiation. However, the DPA parameter is a more physically legitimate measure of neutron damage in Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this case should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on all VVER equipment should be under control, and to give reasonable estimates of such parameters over the volume of all equipment. The second task was to give a conservative estimate of each parameter, including its uncertainty. Results of recent investigations allow testing the conservatism of calculational predictions, and, as shown in the paper, the combination of ex-vessel measured data with calculated data allows the assessment of unpredicted uncertainties which result from the specific unique features of individual VVER reactor equipment. Some results of these calculational-experimental investigations are presented in this paper. Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations
Procedia PDF Downloads 157
348 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications
Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert
Abstract:
Due to the high complexity of the magnetization in electrical machines and influence of the manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses has remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on the material supplier’s data from standard magnetic measurements. This type of data does not include the additional loss from non-sinusoidal multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes, such as punching, housing shrink fitting and winding. Moreover, in production, considerable attention is given to the measurements of mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of various factors, such as elevated temperature, applied and residual stress, and arbitrary magnetization on the magnetic properties of different grades of non-oriented steel. Measured power loss for tested samples and stator cores varied significantly, by more than 100%, comparing to standard measurement conditions. Quantitative effects of each of the applied measurement were analyzed. This research and applied Brockhaus measurement methodologies emphasized the requirement for advanced characterization of magnetic materials used in electric drives and automotive applications.Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores
Procedia PDF Downloads 140
347 An Analysis of the Regression Hypothesis from a Shona Broca’s Aphasic Perspective
Authors: Esther Mafunda, Simbarashe Muparangi
Abstract:
The present paper tests the applicability of the Regression Hypothesis to the pathological language dissolution of a Shona male adult with Broca’s aphasia. It particularly assesses the prediction of the Regression Hypothesis, which states that the process according to which language is forgotten will be the reversal of the process according to which it was acquired. The main aim of the paper is to find out whether mirror symmetries exist between L1 acquisition and L1 dissolution of tense in Shona and, if so, what might cause these regression patterns. The paper also sought to highlight the practical contributions that linguistic theory can make to solving language-related problems. Data was collected from a 46-year-old male adult with Broca’s aphasia who was receiving speech therapy at St Giles Rehabilitation Centre in Harare, Zimbabwe. The primary data elicitation method was experimental, using the probe technique. The Shona version of the TART (Test for Assessing Reference Time), in the form of sequencing pictures, was used to assess tense in the Broca’s aphasic and a 3.5-year-old child. Using SPSS (Statistical Package for the Social Sciences) and Excel analysis, it was established that the use of the future tense was impaired in the Shona Broca’s aphasic whilst the present and past tenses were intact. However, though the past tense was intact in the male adult with Broca’s aphasia, reference was made to the remote past. The use of the future tense was also found to be difficult for the 3.5-year-old child. No difficulties were encountered in using the present and past tenses. This means that mirror symmetries were found between L1 acquisition and L1 dissolution of tense in Shona. On the basis of the results of this research, it can be concluded that the use of tense in a Shona adult with Broca’s aphasia supports the Regression Hypothesis. The findings of this study are important in terms of speech therapy in the context of Zimbabwe. The study also contributes to Bantu linguistics in general and to Shona linguistics in particular. Further studies could also be done focusing on the rest of the Bantu language varieties in terms of aphasia. Keywords: Broca’s Aphasia, regression hypothesis, Shona, language dissolution
Procedia PDF Downloads 96346 Machine Learning Techniques to Predict Cyberbullying and Improve Social Work Interventions
Authors: Oscar E. Cariceo, Claudia V. Casal
Abstract:
Machine learning offers a set of techniques to promote social work interventions and can support practitioners’ decisions by predicting new behaviors based on data produced by organizations, service agencies, users, clients or individuals. Machine learning techniques include a set of generalizable algorithms that are data-driven, which means that rules and solutions are derived by examining data, based on the patterns that are present within any data set. In other words, the goal of machine learning is to teach computers through 'examples', using training data to test specific hypotheses and predict what a certain outcome would be in a given scenario, and to improve on that experience. Machine learning can be classified into two general categories depending on the nature of the problem that the technique needs to tackle. First, supervised learning involves a dataset whose output is already known. Supervised learning problems are categorized into regression problems, which involve prediction of quantitative variables using a continuous function, and classification problems, which seek to predict results for discrete qualitative variables. For social work research, machine learning generates predictions as a key element for improving social interventions on complex social issues by providing better inference from data and establishing more precise estimated effects, for example in services that seek to improve their outcomes. This paper presents the results of a classification algorithm to predict cyberbullying among adolescents. Data were retrieved from the National Polyvictimization Survey conducted by the government of Chile in 2017. A logistic regression model was created to predict whether an adolescent would experience cyberbullying based on gender, age, grade, type of school, and self-esteem sentiments and their interactions. The model can predict with an accuracy of 59.8% whether an adolescent will suffer cyberbullying. These results can help to promote programs to prevent cyberbullying at schools and improve evidence-based practice.Keywords: cyberbullying, evidence based practice, machine learning, social work research
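As a rough illustration of the kind of classifier described, and not the authors’ actual model or the Chilean survey data, the following sketch fits a scikit-learn logistic regression on synthetic stand-in features and reports test-set accuracy; all variable codings and effect sizes are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic stand-in for survey records: gender, age, grade, school type, self-esteem
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),       # 0/1 coding assumed
    "age": rng.integers(12, 18, n),
    "grade": rng.integers(7, 13, n),
    "school_type": rng.integers(0, 3, n),  # three school categories assumed
    "self_esteem": rng.normal(0, 1, n),
})
# Hypothetical outcome: lower self-esteem slightly raises the odds of cyberbullying
logits = -0.5 - 0.8 * df["self_esteem"] + 0.1 * df["gender"]
df["cyberbullied"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="cyberbullied"), df["cyberbullied"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

An accuracy around 60%, as reported in the abstract, is only modestly better than chance, which is why such models are positioned as decision support for prevention programs rather than as diagnostic tools.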
Procedia PDF Downloads 168345 An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine
Authors: Shrawan Baghel, Helen Cathcart, Biall J. O'Reilly
Abstract:
Amorphous drug formulations have great potential to enhance the solubility and thus the bioavailability of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive the system towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) have been selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on extrapolation of the configurational entropy to zero (m_D,CE) and on the heating rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs has been established, and the relevance of these parameters to the crystallization of amorphous drugs is also assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions has been studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant ‘n’, which provides an insight into the mechanism of crystallization. To probe further into the crystallization mechanism, the non-isothermal crystallization kinetics of the model systems was also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of a model-free kinetic approach has been established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation has been predicted. The calculated fragility, glass forming ability (GFA) and crystallization kinetics are found to be in good correlation with the stability prediction of amorphous solid dispersions. Thus, this research work involves a multidisciplinary approach to establish fragility, GFA and crystallization kinetics as stability predictors for amorphous drug formulations.Keywords: amorphous, fragility, glass forming ability, molecular mobility, mean relaxation time, crystallization kinetics, stability
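The abstract gives neither the fitted VTF parameters nor the JMA constants for DPM and CNZ; the sketch below simply shows, with illustrative placeholder values, how a mean relaxation time and an isothermal crystallized fraction of the form x(t) = 1 − exp(−(kt)^n) would be computed under those standard equations.

```python
import numpy as np

def vtf_relaxation_time(T, tau0=1e-14, D=10.0, T0=250.0):
    """Mean relaxation time from the Vogel-Tammann-Fulcher equation:
    tau = tau0 * exp(D*T0 / (T - T0)). Parameters are illustrative, not fitted values."""
    return tau0 * np.exp(D * T0 / (T - T0))

def jma_fraction_crystallized(t, k, n):
    """Johnson-Mehl-Avrami isothermal transformation: x(t) = 1 - exp(-(k*t)**n).
    The Avrami exponent n hints at the nucleation and growth mechanism."""
    return 1.0 - np.exp(-(k * t) ** n)

T = np.array([300.0, 310.0, 320.0])      # K, above the assumed T0
print(vtf_relaxation_time(T))             # relaxation time drops sharply with temperature

t = np.linspace(0.0, 3600.0, 5)           # s
print(jma_fraction_crystallized(t, k=1e-3, n=3.0))
```

The JMA equation is sometimes written with the rate constant outside the exponent, 1 − exp(−k·t^n); either form supports extracting the Avrami exponent from isothermal data as described.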
Procedia PDF Downloads 354344 Aerodynamic Optimization of Oblique Biplane by Using Supercritical Airfoil
Authors: Asma Abdullah, Awais Khan, Reem Al-Ghumlasi, Pritam Kumari, Yasir Nawaz
Abstract:
Introduction: This study investigated the potential applications of two oblique wing configurations that were initiated by German aerodynamicists during WWII. Because of the end of the war, that project was not completed, and this research targets the revival of the German oblique biplane configuration. The research draws upon the use of two oblique wings mounted on the top and bottom of the fuselage through a single pivot. The wings are capable of sweeping at different angles, ranging from 0° at takeoff to 60° at cruising altitude. The right half of the top wing behaves like a forward-swept wing and the left half like a backward-swept wing; the reverse applies to the lower wing. This opposite deflection of the top and lower wings cancels out the rotary moment created by each wing, and the aircraft remains stable. Problem to better understand or solve: The purpose of this research is to investigate the potential for achieving improved aerodynamic performance and efficiency of flight over a wide range of sweep angles. This will help identify the sweep angle at which the aircraft possesses both stability and better aerodynamics. Explaining the methods used: The aircraft configuration is designed using SolidWorks, after which a series of aerodynamic predictions are conducted in both the subsonic and the supersonic flow regimes. Computations are carried out in Ansys Fluent. The results are then compared with theoretical and flight data for aircraft of the same category (AD-1) and with the wind tunnel testing model at subsonic speed. Results: At zero sweep angle, the aircraft has an excellent lift coefficient value, almost double that found for fighter jets. To reach supersonic speed, the sweep angle is increased to a maximum of 60 degrees, depending on the mission profile. General findings: The oblique biplane can be a future fighter jet aircraft because of its high performance in terms of aerodynamics, cost, structural design and weight.Keywords: biplane, oblique wing, sweep angle, supercritical airfoil
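The abstract does not state why increasing sweep toward 60° helps at high speed; a textbook first-order explanation is simple sweep theory, sketched below with assumed numbers, in which only the freestream component normal to the leading edge drives compressibility effects.

```python
import numpy as np

def effective_normal_mach(mach_freestream, sweep_deg):
    """Simple sweep theory: the Mach number governing compressibility on a swept
    wing is approximately the component normal to the leading edge, M * cos(sweep)."""
    return mach_freestream * np.cos(np.radians(sweep_deg))

for sweep in (0, 30, 60):  # degrees, spanning the abstract's stated sweep range
    print(f"sweep {sweep:2d} deg -> effective normal Mach "
          f"{effective_normal_mach(1.2, sweep):.3f}")
```

This is a deliberately simplified relation; the full CFD study in Ansys Fluent described above is needed to capture interference between the two oblique wings and the supercritical airfoil behaviour.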
Procedia PDF Downloads 278343 A Systematic Review on Development of a Cost Estimation Framework: A Case Study of Nigeria
Authors: Babatunde Dosumu, Obuks Ejohwomu, Akilu Yunusa-Kaltungo
Abstract:
Cost estimation in construction is often difficult, particularly when dealing with risks and uncertainties, which are inevitable and peculiar to developing countries like Nigeria. Direct consequences of these are major deviations in cost, duration, and quality. The fundamental aim of this study is to develop a framework for assessing the impacts of risk on cost estimation, which in turn cause variability between the contract sum and the final account. This is very important, as initial estimates given to clients should reflect a certain magnitude of consistency and accuracy, which the client builds other planning-related activities upon, and should also enhance the capabilities of construction industry professionals by enabling better prediction of the final account from the contract sum. In achieving this, a systematic literature review was conducted with cost variability and construction projects as search strings within three databases: Scopus, Web of Science, and EBSCO (Business Source Premium); the retrieved studies were further analyzed and gaps in knowledge or research were identified. From the extensive review, it was found that the number of factors causing deviation between final accounts and contract sums ranged between 1 and 45. Besides, it was discovered that a cost estimation framework similar to the Building Cost Information Service (BCIS) is unavailable in Nigeria, which is a major reason why initial estimates are very often inconsistent, leading to project delay, abandonment, or determination at the expense of the huge sums of money invested. It was concluded that the development of a cost estimation framework, adjudged an important tool in risk shedding rather than risk-sharing in project risk management, would be a panacea to the cost estimation problems leading to cost variability in the Nigerian construction industry by the time this ongoing Ph.D. research is completed. It was recommended that practitioners in the construction industry should always take risk into account in order to facilitate the rapid development of the construction industry in Nigeria, which should give stakeholders a more in-depth understanding of the estimation effectiveness and efficiency to be adopted in both the private and public sectors.Keywords: cost variability, construction projects, future studies, Nigeria
Procedia PDF Downloads 209342 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which items are selected by maximum information (or by sampling from the posterior) and ability is estimated with maximum-likelihood (ML) or maximum a posteriori (MAP) estimators, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and then comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model would have to re-calculate the ability every time it gets a request, whereas the prediction from a neural network could be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets other than the normal IRT feature set and use a neural network’s capacity for learning unknown functions to give rise to better CAT models. Categorical features, such as test type, could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, allowing functions to be learnt and expressed as models which may not be trivial to express via equations. This kind of framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher quality testing.Keywords: computer adaptive tests, item response theory, machine learning, neural networks
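As a hedged sketch of the pipeline described (a 2PL IRT model is assumed here, since the abstract does not name the specific model, and the item parameters and response data are synthetic), the code below estimates ability by maximum likelihood per request and then trains a scikit-learn MLPRegressor once, so that serving-time prediction becomes a single forward pass.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.neural_network import MLPRegressor

def p_correct(theta, a, b):
    """2PL IRT: probability of a correct response given ability theta,
    item discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_ability(responses, a, b):
    """Maximum-likelihood ability estimate for one examinee (illustrative 2PL setup)."""
    def neg_log_lik(theta):
        p = p_correct(theta, a, b)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4, 4), method="bounded").x

rng = np.random.default_rng(0)
n_items, n_examinees = 30, 500
a = rng.uniform(0.5, 2.0, n_items)                 # assumed item discriminations
b = rng.normal(0.0, 1.0, n_items)                  # assumed item difficulties
theta_true = rng.normal(0.0, 1.0, n_examinees)
responses = (rng.random((n_examinees, n_items))
             < p_correct(theta_true[:, None], a, b)).astype(float)

# IRT ability is recomputed per request; the neural network is trained once on
# response patterns and then predicts ability in one step.
theta_irt = np.array([ml_ability(r, a, b) for r in responses])
nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
nn.fit(responses, theta_irt)
rmse = np.sqrt(np.mean((nn.predict(responses) - theta_irt) ** 2))
print("NN vs IRT ability RMSE:", round(rmse, 3))
```

In this toy setting the network simply mimics the ML estimator; the framework proposed above would extend the input beyond raw response patterns to categorical features such as test type.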
Procedia PDF Downloads 175