Search results for: shadow volumes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 465

285 Applications of Nonlinear Models to Measure and Predict Thermophysical Properties of Binary Liquid Mixtures: 1,4-Dioxane with Bromobenzene at Various Temperatures

Authors: R. Ramesh, M. Y. M. Yunus, K. Ramesh

Abstract:

In this study, viscosities, η, and densities, ρ, of 1,4-dioxane with bromobenzene were measured at different mole fractions and various temperatures at atmospheric pressure. From these experiments, excess volumes, VE, and deviations in viscosities, Δη, of the mixtures at infinite dilution were obtained. The measured systems exhibited positive values of VE and negative values of Δη. The binary mixture 1,4-dioxane + bromobenzene shows positive VE and negative Δη with increasing temperature. The results clearly indicate that weak interactions are present in the mixture, mainly because of the number and position of the methyl groups in these aromatic hydrocarbons. The measured data were fitted to nonlinear models to derive the binary coefficients, and the standard deviations between the fitted and calculated values help to describe the mixing behavior of the binary mixtures. In our case, the measured data correlate very well with the values predicted by the corresponding models. The molecular interactions between the components and a comparison of the liquid mixtures are also discussed.
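
As an illustration of the kind of nonlinear correlation described above, the sketch below fits excess molar volume data to a Redlich-Kister-type polynomial and reports the standard deviation of the fit. The Redlich-Kister form, the sample data, and the function names are assumptions for illustration only; the abstract does not specify which nonlinear model the authors used.

```python
# Hypothetical sketch: fitting excess molar volume V^E (cm^3/mol) to a
# Redlich-Kister polynomial V^E = x1*x2 * sum_k A_k * (x1 - x2)^k.
# The model choice and the sample data below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def redlich_kister(x1, a0, a1, a2):
    """Three-parameter Redlich-Kister expansion in mole fraction x1."""
    x2 = 1.0 - x1
    d = x1 - x2
    return x1 * x2 * (a0 + a1 * d + a2 * d ** 2)

# Illustrative mole fractions and excess volumes (not measured values).
x1 = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
ve = np.array([0.05, 0.09, 0.12, 0.14, 0.15, 0.14, 0.11, 0.08, 0.04])

coeffs, _ = curve_fit(redlich_kister, x1, ve)
residuals = ve - redlich_kister(x1, *coeffs)
sigma = np.sqrt(np.sum(residuals ** 2) / (len(ve) - len(coeffs)))

print("A_k coefficients:", coeffs)
print("standard deviation of fit:", sigma)
```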

Keywords: 1,4-dioxane, bromobenzene, density, excess molar volume

Procedia PDF Downloads 381
284 Age-Dependent Anatomical Abnormalities of the Amygdala in Autism Spectrum Disorder and their Implications for Altered Socio-Emotional Development

Authors: Gabriele Barrocas, Habon Issa

Abstract:

The amygdala is one of various brain regions that tend to be pathological in individuals with autism spectrum disorder (ASD). ASD is a prevalent and heterogeneous developmental disorder affecting all ethnic and socioeconomic groups and consists of a broad range of severity, etiology, and behavioral symptoms. Common features of ASD include but are not limited to repetitive behaviors, obsessive interests, and anxiety. Neuroscientists view the amygdala as the core of the neural system that regulates behavioral responses to anxiogenic and threatening stimuli. Despite this consensus, many previous studies and literature reviews on the amygdala’s alterations in individuals with ASD have reported inconsistent findings. In this review, we will address these conflicts by highlighting recent studies which reveal that anatomical and related socio-emotional differences detected between individuals with and without ASD are highly age-dependent. We will specifically discuss studies using functional magnetic resonance imaging (fMRI), structural MRI, and diffusion tensor imaging (DTI) to provide insights into the neuroanatomical substrates of ASD across development, with a focus on amygdala volumes, cell densities, and connectivity.

Keywords: autism, amygdala, development, abnormalities

Procedia PDF Downloads 83
283 Employing Visual Culture to Enhance Initial Adult Maltese Language Acquisition

Authors: Jacqueline Żammit

Abstract:

Recent research indicates that the utilization of right-brain strategies holds significant implications for the acquisition of language skills. Nevertheless, the utilization of visual culture as a means to stimulate these strategies and amplify language retention among adults engaging in second language (L2) learning remains a relatively unexplored area. This investigation delves into the impact of visual culture on activating right-brain processes during the initial stages of language acquisition, particularly in the context of teaching Maltese as a second language (ML2) to adult learners. By employing a qualitative research approach, this study convenes a focus group comprising twenty-seven educators to delve into a range of visual culture techniques integrated within language instruction. The collected data is subjected to thematic analysis using NVivo software. The findings underscore a variety of impactful visual culture techniques, encompassing activities such as drawing, sketching, interactive matching games, orthographic mapping, memory palace strategies, wordless picture books, picture-centered learning methodologies, infographics, Face Memory Game, Spot the Difference, Word Search Puzzles, the Hidden Object Game, educational videos, the Shadow Matching technique, Find the Differences exercises, and color-coded methodologies. These identified techniques hold potential for application within ML2 classes for adult learners. Consequently, this study not only provides insights into optimizing language learning through specific visual culture strategies but also furnishes practical recommendations for enhancing language competencies and skills.

Keywords: visual culture, right-brain strategies, second language acquisition, maltese as a second language, visual aids, language-based activities

Procedia PDF Downloads 28
282 Optimum Design of Support and Care Home for the Elderly

Authors: P. Shahabi

Abstract:

The increase in average human life expectancy has led to a growing elderly population. This demographic shift has brought forth various challenges related to the mental and physical well-being of the elderly, often resulting in a lack of dignity and respect for this valuable segment of society. These emerging social issues have cast a shadow on the lives of families, prompting the need for innovative solutions to enhance the lives of the elderly. In this study, within the context of architecture, we aim to create a pleasant and nurturing environment that combines traditional Iranian and modern architectural elements to cater to the unique needs of the elderly. Our primary research objectives encompass the following: Recognizing the societal demand for nursing homes due to the increasing elderly population, addressing the need for a conducive environment that promotes physical and mental well-being among the elderly, developing spatial designs that are specifically tailored to the elderly population, ensuring their comfort and convenience. To achieve these objectives, we have undertaken a comprehensive exploration of the challenges and issues faced by the elderly. We have also laid the groundwork for the architectural design of nursing homes, culminating in the presentation of an architectural plan aimed at minimizing the difficulties faced by the elderly and enhancing their quality of life. It is noteworthy that many of the existing nursing homes in Iran lack the necessary welfare and safety conditions required for the elderly. Hence, our research aims to establish comprehensive and suitable criteria for the optimal design of nursing homes. We believe that through optimal design, we can create spaces that are not only diverse, attractive, and dynamic but also significantly improve the quality of life for the elderly. We hold the hope that these homes will serve as beacons of hope and tranquility for all individuals in their later years.

Keywords: care home, elderly, optimum design, support

Procedia PDF Downloads 38
281 A Model of Foam Density Prediction for Expanded Perlite Composites

Authors: M. Arifuzzaman, H. S. Kim

Abstract:

Multiple sets of variables associated with expanded perlite particle consolidation in foam manufacturing were analyzed to develop a model for predicting perlite foam density. The consolidation of perlite particles based on the flotation method and compaction involves numerous variables leading to the final perlite foam density. The variables include binder content, compaction ratio, perlite particle size, various perlite particle densities and porosities, and various volumes of perlite at different stages of the process. The developed model was found to be useful not only for the prediction of foam density but also for the optimization of compaction ratio and binder content to achieve a desired density. Experimental verification was conducted using a range of foam densities (0.15–0.5 g/cm³) produced with a range of compaction ratios (1.5–3.5), a range of sodium silicate contents (0.05–0.35 g/ml) in dilution, a range of expanded perlite particle sizes (1–4 mm), and various perlite densities (such as skeletal, material, bulk, and envelope densities). A close agreement between predictions and experimental results was found.

Keywords: expanded perlite, flotation method, foam density, model, prediction, sodium silicate

Procedia PDF Downloads 374
280 Characteristics of Different Volumes of Waste Cellular Concrete Powder-Cement Paste for Sustainable Construction

Authors: Mohammed Abed, Rita Nemes

Abstract:

Cellular concrete powder (CCP) is not widely used as a supplementary cementitious material, but in the literature, its efficiency has been proved when it is used as a replacement for cement in concrete mixtures. In this study, different amounts of raw CCP (CCP as a waste material without any industrial modification) will be used to investigate the characteristics of cement pastes and the effects of CCP on their properties. It is an attempt to produce a green binder paste, which is useful for sustainable construction applications. The fresh and hardened properties of a number of CCP-blended cement pastes will be tested at different ages, and the optimized CCP volume will be reported together with more detailed investigations of durability properties. Different low and high mass replacement percentages of the cement will be examined (0%, 10%, 15%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90%). Consistency, flexural strength, and compressive strength will be the base indicators for the further property investigations. CCP replacements of up to 50% have been tested up to 7 days, and the initial results showed a linear relationship between strength and the replacement percentage, which is an optimistic indicator for further replacement percentages of waste CCP.

Keywords: cellular concrete powder, supplementary cementitious material, sustainable construction, green concrete

Procedia PDF Downloads 289
279 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank

Authors: Rachana Tank, Donald M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh

Abstract:

Alzheimer's Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer's disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised approaches such as genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small sample sizes, and with poor controlling for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are larger than in previous studies, to our knowledge. Genetic interactions between the loci showing the largest effects in GWAS have not been studied extensively, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include WM tracts and structural volumes. Cognitive function measures include reaction time, pairs matching, trail making, digit symbol substitution, and prospective memory. Interaction of the APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1 or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression, and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P < 0.01) and reaction time (standardised beta = 2.04, P < 0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and they report fewer health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
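
For readers unfamiliar with polygenic risk scoring, the sketch below shows the standard weighted-allele-count construction of a PGR score from per-SNP effect sizes. The SNP identifiers, effect sizes and genotype dosages are invented placeholders, not the 21 SNPs or weights from the Kunkle et al. meta-analysis.

```python
# Hypothetical sketch of a polygenic risk (PGR) score: for each individual,
# sum the dosage of the risk allele carried at each SNP (0, 1 or 2), weighted
# by the per-SNP effect size (beta) from a discovery GWAS.
# SNP identifiers, weights, and genotype dosages below are placeholders.
import numpy as np

gwas_weights = {          # SNP -> effect size (beta) from discovery GWAS
    "rs0000001": 0.12,
    "rs0000002": -0.08,
    "rs0000003": 0.05,
}

# Rows: individuals; columns: dosage of the effect allele at each SNP,
# ordered to match the sorted SNP identifiers above.
dosages = np.array([
    [0, 1, 2],
    [2, 0, 1],
    [1, 1, 0],
])

betas = np.array([gwas_weights[s] for s in sorted(gwas_weights)])
prs = dosages @ betas                       # weighted allele count per person
prs_z = (prs - prs.mean()) / prs.std()      # standardise before regression

print("raw PGR scores:", prs)
print("standardised PGR scores:", prs_z)
```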

Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI

Procedia PDF Downloads 86
278 Mechanical Characterization and Impact Study on the Environment of Raw Sediments and Sediments Dehydrated by Addition of Polymer

Authors: A. Kasmi, N. E. Abriak, M. Benzerzour, I. Shahrour

Abstract:

Large volumes of river sediments are dredged each year in Europe in order to maintain harbour activities and prevent floods. The management of this sediment has become increasingly complex. Several European projects have been implemented to find environmentally sound solutions for these materials. The main objective of this study is to show the ability of river sediment to be used in road construction. Since the sediments contain a high amount of water, a dehydrating treatment by addition of a flocculation aid has been used. First, a number of physical characteristics are measured and discussed for a better identification of the raw sediment and of the sediment dehydrated by addition of the flocculation aid. The identified parameters are, for example, the initial water content, the density, the organic matter content, the grain size distribution, the liquid and plastic limits, and geotechnical parameters. The environmental impacts of the used material were also evaluated. The results obtained show that there is a slight change in the physico-chemical and geotechnical characteristics of the sediment after dehydration by the addition of polymer. However, these sediments cannot be used in road construction.

Keywords: river sediment, dehydration, flocculation aid or polymer, characteristics, treatments, valorisation, road construction

Procedia PDF Downloads 349
277 Comparative Assessment of a Distributed Model and a Lumped Model for Estimating Sediment Yield in Small Urban Areas

Authors: J. Zambrano Nájera, M. Gómez Valentín

Abstract:

Increases in urbanization during the twentieth century have brought increased sediment production as one major problem. Hydraulic erosion is one of the major causes of the increase of sediments in small urban catchments. Such increases in sediment yield in headwater urban catchments can cause obstruction of drainage systems, making it impossible to capture urban runoff, increasing runoff volumes and thus exacerbating problems of urban flooding. For these reasons, it is increasingly important to study sediment production in urban watersheds in order to properly analyze and solve problems associated with sediments. The study of sediment production has improved with the use of mathematical modeling. For that reason, a new physically based model is proposed, applicable to small headwater urban watersheds, that retains the advantages of distributed physically based models but with more realistic data requirements. Additionally, in this paper the proposed model is compared with a lumped model, reviewing the results, advantages and disadvantages of both of them.

Keywords: erosion, hydrologic modeling, urban runoff, sediment modeling, sediment yielding, urban planning

Procedia PDF Downloads 318
276 Development of a Thermodynamic Model for Ladle Metallurgy Steel Making Processes Using Factsage and Its Macro Facility

Authors: Prasenjit Singha, Ajay Kumar Shukla

Abstract:

To produce high-quality steel in larger volumes, dynamic control of composition and temperature throughout the process is essential. In this paper, we developed a mass transfer model based on thermodynamics to simulate the ladle metallurgy steel-making process using FactSage and its macro facility. The overall heat and mass transfer processes consist of one equilibrium chamber, two non-equilibrium chambers, and one adiabatic reactor. The flow of material, as well as heat transfer, occurs across four interconnected unit chambers and a reactor. We used the macro programming facility of FactSage™ software to understand the thermochemical model of the secondary steel making process. In our model, we varied the oxygen content during the process and studied their effect on the composition of the final hot metal and slag. The model has been validated with respect to the plant data for the steel composition, which is similar to the ladle metallurgy steel-making process in the industry. The resulting composition profile serves as a guiding tool to optimize the process of ladle metallurgy in steel-making industries.

Keywords: desulphurization, degassing, factsage, reactor

Procedia PDF Downloads 177
275 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm

Authors: Ameur Abdelkader, Abed Bouarfa Hafida

Abstract:

Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict and to derive the unknown outcome. With the advent of big data, many studies have suggested the use of predictive analytics in order to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when applied to large amounts of data. In fact, because of its volume, its nature (semi-structured or unstructured) and its variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of calculation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and then a version of the extended algorithm is defined in order to make it applicable to a huge quantity of data.
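
To illustrate the parallelisation that the authors identify as missing from classical CART, the sketch below distributes the evaluation of candidate split thresholds across worker processes. It is a simplified single-node, single-feature example on synthetic data, not the authors' extended algorithm.

```python
# Hypothetical sketch: evaluating CART split candidates in parallel.
# This only parallelises the search for the best split of one feature at
# one node; it illustrates the idea rather than the extended algorithm.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def gini(labels):
    """Gini impurity of a label array."""
    if len(labels) == 0:
        return 0.0
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_score(args):
    """Weighted Gini impurity of splitting x at a threshold."""
    x, y, threshold = args
    left, right = y[x <= threshold], y[x > threshold]
    n = len(y)
    return threshold, (len(left) / n) * gini(left) + (len(right) / n) * gini(right)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=10_000)
    y = (x + rng.normal(scale=0.5, size=x.size) > 0).astype(int)
    thresholds = np.quantile(x, np.linspace(0.05, 0.95, 50))

    with ProcessPoolExecutor() as pool:   # distribute candidate thresholds
        scores = list(pool.map(split_score, [(x, y, t) for t in thresholds]))

    best_t, best_impurity = min(scores, key=lambda s: s[1])
    print("best threshold:", best_t, "weighted Gini:", best_impurity)
```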

Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm

Procedia PDF Downloads 110
274 Understanding the Role of Gas Hydrate Morphology on the Producibility of a Hydrate-Bearing Reservoir

Authors: David Lall, Vikram Vishal, P. G. Ranjith

Abstract:

Numerical modeling of gas production from hydrate-bearing reservoirs requires the solution of various thermal, hydrological, chemical, and mechanical phenomena in a coupled manner. Among the various reservoir properties that influence gas production estimates, the distribution of permeability across the domain is one of the most crucial parameters since it determines both heat transfer and mass transfer. The aspect of permeability in hydrate-bearing reservoirs is particularly complex compared to conventional reservoirs since it depends on the saturation of gas hydrates and hence, is dynamic during production. The dependence of permeability on hydrate saturation is mathematically represented using permeability-reduction models, which are specific to the expected morphology of hydrate accumulations (such as grain-coating or pore-filling hydrates). In this study, we demonstrate the impact of various permeability-reduction models, and consequently, different morphologies of hydrate deposits on the estimates of gas production using depressurization at the reservoir scale. We observe significant differences in produced water volumes and cumulative mass of produced gas between the models, thereby highlighting the uncertainty in production behavior arising from the ambiguity in the prevalent gas hydrate morphology.
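
As a concrete illustration of how morphology-specific permeability-reduction models diverge, the sketch below compares two relations of the simple power-law type often used in hydrate simulation. The functional form and the exponents chosen (a low exponent standing in for grain-coating behaviour, a high exponent for pore-filling behaviour) are assumptions for illustration, not the models evaluated in this study.

```python
# Hypothetical sketch: how two permeability-reduction relations diverge with
# hydrate saturation Sh. Both use a simple power law k/k0 = (1 - Sh)^N; the
# exponents below are illustrative assumptions, not the study's models.
import numpy as np

def permeability_ratio(sh, exponent):
    """Normalised permeability k/k0 for hydrate saturation sh in [0, 1)."""
    return (1.0 - sh) ** exponent

saturations = np.linspace(0.0, 0.8, 9)
for sh in saturations:
    grain_coating = permeability_ratio(sh, exponent=2.0)   # assumed N = 2
    pore_filling = permeability_ratio(sh, exponent=10.0)   # assumed N = 10
    print(f"Sh={sh:4.1f}  k/k0 (grain-coating)={grain_coating:6.3f}  "
          f"k/k0 (pore-filling)={pore_filling:6.3f}")
```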

Keywords: gas hydrate morphology, multi-scale modeling, THMC, fluid flow in porous media

Procedia PDF Downloads 191
273 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals together with 6 motion parameters were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with the number of components set to 21. Fifteen of the components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
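
The classification-with-feature-selection workflow described above can be summarised by the sketch below, which trains a Random Forest and an RBF-SVM on a 75/25 split and retrains on the single most important feature. It uses scikit-learn in Python on synthetic data as a stand-in for the authors' R pipeline and the 15 rsFMRI network signals of the 37 subjects.

```python
# Hypothetical sketch of the workflow described above: 75/25 split, Random
# Forest and RBF-SVM classifiers, and a Gini-importance ranking used to
# retrain on the most predictive variable. Synthetic data stands in for the
# study's 15 network signals; this is not the authors' R pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=37, n_features=15, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("RF accuracy (all features):", rf.score(X_te, y_te))
print("SVM accuracy (all features):", svm.score(X_te, y_te))

# Rank features by Gini-based importance and retrain on the single best one.
best = int(np.argmax(rf.feature_importances_))
rf_best = RandomForestClassifier(n_estimators=500, random_state=0)
rf_best.fit(X_tr[:, [best]], y_tr)
print("top feature index:", best,
      "RF accuracy (top feature only):", rf_best.score(X_te[:, [best]], y_te))
```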

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 214
272 Graph Based Traffic Analysis and Delay Prediction Using a Custom Built Dataset

Authors: Gabriele Borg, Alexei Debono, Charlie Abela

Abstract:

There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta is experiencing ongoing population growth and an increase in mobility demand. This research takes advantage of data which is continuously being sourced and converts it into useful information related to the traffic problem on the Maltese roads. The scope of this paper is to provide a methodology to create a custom dataset (MalTra - Malta Traffic) compiled from multiple participants at various locations across the island, in order to identify the most common routes taken and expose the main areas of activity. Such use of big data underpins what are referred to as Intelligent Transportation Systems (ITS), and it has been concluded that there is significant potential in utilising such sources of data on a nationwide scale. Furthermore, a series of graph neural network traffic prediction models are evaluated to compare MalTra with large-scale traffic datasets.

Keywords: graph neural networks, traffic management, big data, mobile data patterns

Procedia PDF Downloads 90
271 Quantitative Analysis of Presence, Consciousness, Subconsciousness, and Unconsciousness

Authors: Hooshmand Kalayeh

Abstract:

The human brain consists of the reptilian, mammalian, and thinking brain, and the mind consists of conscious, subconscious, and unconscious parallel neural-net programs. The primary objective of this paper is to propose a methodology for quantitative analysis of the neural nets associated with these mental activities in the neocortex. The secondary objective is to suggest a methodology for quantitative analysis of presence; the proposed methodologies can be used as a first step to measure, monitor, and understand consciousness and presence. The methodology is based on neural networks (NNs), the number of neurons in each NN associated with consciousness, subconsciousness, and unconsciousness, and the number of neurons in the neocortex. It is assumed that the number of neurons in each NN is correlated with the associated area and volume. Therefore, online and offline visualization techniques can be used to identify these neural networks, and online and offline measurement methods can be used to measure the areas and volumes associated with these NNs. So, instead of the number of neurons in each NN, the associated area or volume can also be used in the proposed methodology. This quantitative analysis and the associated online and offline measurements and visualizations of different neural networks enable us to rewire the connections in our brain for a more balanced living.

Keywords: brain, mind, consciousness, presence, sub-consciousness, unconsciousness, skills, concentrations, attention

Procedia PDF Downloads 282
270 Cerebrovascular Modeling: A Vessel Network Approach for Fluid Distribution

Authors: Karla E. Sanchez-Cazares, Kim H. Parker, Jennifer H. Tweedy

Abstract:

The purpose of this work is to develop a simple compartmental model of cerebral fluid balance including blood and cerebrospinal-fluid (CSF). At the first level the cerebral arteries and veins are modelled as bifurcating trees with constant scaling factors between generations which are connected through a homogeneous microcirculation. The arteries and veins are assumed to be non-rigid and the cross-sectional area, resistance and mean pressure in each generation are determined as a function of blood volume flow rate. From the mean pressure and further assumptions about the variation of wall permeability, the transmural fluid flux can be calculated. The results suggest the next level of modelling where the cerebral vasculature is divided into three compartments; the large arteries, the small arteries, the capillaries and the veins with effective compliances and permeabilities derived from the detailed vascular model. These vascular compartments are then linked to other compartments describing the different CSF spaces, the cerebral ventricles and the subarachnoid space. This compartmental model is used to calculate the distribution of fluid in the cranium. Known volumes and flows for normal conditions are used to determine reasonable parameters for the model, which can then be used to help understand pathological behaviour and suggest clinical interventions.
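
A minimal sketch of the first modelling level described above: a symmetric arterial tree with constant radius and length scaling between generations, with Poiseuille resistances combined in parallel within a generation and in series across generations. The scaling factors, vessel dimensions, viscosity and flow rate below are illustrative assumptions, not the parameters of the authors' model.

```python
# Hypothetical sketch of a symmetric bifurcating vessel tree: each generation
# doubles the vessel count, and radius/length shrink by constant factors.
# Segment resistance follows Poiseuille's law R = 8*mu*L / (pi * r^4);
# vessels within a generation act in parallel, generations act in series.
# All parameter values below are illustrative assumptions.
import math

MU = 3.5e-3             # blood viscosity (Pa.s), assumed
R0, L0 = 1.5e-3, 5e-2   # root vessel radius and length (m), assumed
ALPHA, BETA = 0.8, 0.7  # radius and length scaling per generation, assumed

def tree_resistance(n_generations):
    total = 0.0
    for g in range(n_generations):
        r = R0 * ALPHA ** g
        length = L0 * BETA ** g
        segment = 8.0 * MU * length / (math.pi * r ** 4)  # Poiseuille
        total += segment / (2 ** g)      # 2**g identical vessels in parallel
    return total

resistance = tree_resistance(10)
pressure_drop = 5e-6 * resistance        # assumed flow of 5 ml/s = 5e-6 m^3/s
print(f"total resistance: {resistance:.3e} Pa.s/m^3")
print(f"pressure drop at 5 ml/s: {pressure_drop / 133.322:.1f} mmHg")
```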

Keywords: cerebrovascular, compartmental model, CSF model, vascular network

Procedia PDF Downloads 249
269 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty

Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus

Abstract:

Electric vehicles are a popular transportation medium renowned for potential environmental benefits. However, large and uncontrolled charging volumes can impact distribution networks negatively. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is developed due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, with the decision-makers needing to make a collaborative decision.
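
The weighted-sum idea mentioned above can be illustrated with the minimal sketch below, which scalarises an energy-cost objective and a peak-loading objective for a few candidate charging schedules. The prices, schedules and weights are invented for illustration, and the study's actual formulation is a mixed integer linear program rather than this enumeration.

```python
# Hypothetical sketch of weighted-sum scalarisation for EV charge scheduling:
# combine normalised energy cost and peak loading into one score and pick the
# best candidate schedule. Prices, schedules and weights are illustrative.
import numpy as np

prices = np.array([0.30, 0.25, 0.10, 0.08, 0.12, 0.28])   # price per kWh/slot
base_load = np.array([40, 35, 20, 18, 25, 45])             # kW network load

candidate_schedules = np.array([                            # EV charging kW
    [0, 0, 11, 11, 0, 0],      # charge in the cheap off-peak slots
    [11, 11, 0, 0, 0, 0],      # charge early
    [0, 0, 0, 0, 11, 11],      # charge late, during the evening peak
])

def objectives(schedule):
    cost = float(np.sum(schedule * prices))
    peak = float(np.max(base_load + schedule))
    return cost, peak

costs, peaks = zip(*(objectives(s) for s in candidate_schedules))
costs, peaks = np.array(costs), np.array(peaks)

# Normalise each objective to [0, 1] before weighting so units are comparable.
norm = lambda v: (v - v.min()) / (v.max() - v.min())
w_cost, w_peak = 0.6, 0.4                                   # assumed weights
scores = w_cost * norm(costs) + w_peak * norm(peaks)

best = int(np.argmin(scores))
print("costs:", costs, "peaks:", peaks)
print("chosen schedule:", candidate_schedules[best])
```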

Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming

Procedia PDF Downloads 140
268 Analysing the Renewable Energy Integration Paradigm in the Post-COVID-19 Era: An Examination of the Upcoming Energy Law of China

Authors: Lan Wu

Abstract:

The declared transformation towards a ‘new electricity system dominated by renewable energy’ by China requires a cleaner electricity consumption mix with high shares of renewable energy sourced electricity (RES-E). Unfortunately, the integration of RES-E into Chinese electricity markets remains a problem pending more robust legal support, evidenced by the curtailment of wind and solar power as a consequence of integration constraints. The upcoming energy law of the PRC (energy law) is expected to provide such long-awaited support and to coordinate the existing diverse sector-specific laws to deal with the weak implementation that dampens the delivery of their desired regulatory effects. However, in the shadow of the COVID-19 crisis, it remains uncertain how this new energy law will bring synergies to RES-E integration, mindful of the significant impacts of the pandemic. Through the theoretical lens of the interplay between China’s electricity reform and legislative development, the present paper investigates whether there is a paradigm shift in the energy law regarding renewable energy integration compared with the existing sector-specific energy laws. It examines the 2020 draft for comments on the energy law and analyses its relationship with the sector-specific energy laws, focusing on RES-E integration. The comparison is drawn upon key aspects of the RES-E integration issue, including the status of renewables, marketisation, incentive schemes, consumption mechanisms, access to power grids, and dispatching. The analysis shows that it is reasonable to expect a more open and well-organized electricity market enabling the absorption of high shares of RES-E. The present paper concludes that a period of prosperous development of RES-E in the post-COVID-19 era can be anticipated with the legal support of the upcoming energy law. It contributes to understanding the signals China is sending regarding the transition towards a cleaner energy future.

Keywords: energy law, energy transition, electricity market reform, renewable energy integration

Procedia PDF Downloads 165
267 Statistical Manufacturing Cell/Process Qualification Sample Size Optimization

Authors: Angad Arora

Abstract:

In production operations/manufacturing, a cell or line is typically a group of similar machines (computer numerical control (CNC), advanced cutting, 3D printing, or special-purpose machines). For qualifying a typical manufacturing line/cell/new process, ideally, we need a sample of parts that can be run through the process, after which we make a judgment on the health of the line/cell. However, with huge volumes and mass-production scope, such as in the mobile phone industry, for example, the actual cells or lines can number in the thousands, and qualifying each one of them with statistical confidence means utilizing samples that are very large, which adds to product/manufacturing cost and creates huge waste if the parts are not intended to be shipped to customers. To solve this, we propose a two-step statistical approach: we start with a small sample size and then objectively evaluate whether the process needs additional samples or not. For example, if a process is producing bad parts and we see those samples early, then there is a high chance that the process will not meet the desired yield, and there is no point in adding more samples. We used this reasoning to develop a two-step binomial testing approach. Further, we also show through results that we can achieve an 18-25% reduction in samples while keeping the same statistical confidence.
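
A minimal sketch of a two-stage binomial acceptance scheme of the kind described: draw a small first sample, stop early if the observed defect rate already makes the target yield statistically implausible, and only then invest in a second, larger sample. The sample sizes, the 98% target yield and the 5% significance level below are invented for illustration, not the authors' qualification plan.

```python
# Hypothetical sketch of a two-stage binomial qualification test: stop after a
# small first sample if the data already reject the target yield, otherwise
# pool a second sample and decide. Sample sizes, the 98% target yield and the
# 5% significance level are illustrative assumptions.
from scipy.stats import binomtest

TARGET_YIELD = 0.98     # process must produce at least 98% good parts
ALPHA = 0.05

def stage_reject(defects, n):
    """One-sided test: is the true defect rate above 1 - TARGET_YIELD?"""
    result = binomtest(defects, n, p=1 - TARGET_YIELD, alternative="greater")
    return result.pvalue < ALPHA

def qualify(stage1_defects, stage1_n, stage2_defects, stage2_n):
    if stage_reject(stage1_defects, stage1_n):
        return "fail after stage 1 (no further samples needed)"
    pooled_defects = stage1_defects + stage2_defects
    pooled_n = stage1_n + stage2_n
    if stage_reject(pooled_defects, pooled_n):
        return "fail after stage 2"
    return "qualified"

# A cell that produces bad parts early is rejected with only 30 samples,
# saving the larger second-stage sample of 120 parts.
print(qualify(stage1_defects=4, stage1_n=30, stage2_defects=0, stage2_n=120))
print(qualify(stage1_defects=0, stage1_n=30, stage2_defects=1, stage2_n=120))
```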

Keywords: statistics, data science, manufacturing process qualification, production planning

Procedia PDF Downloads 62
266 Pharmaceutical Scale up for Solid Dosage Forms

Authors: A. Shashank Tiwari, S. P. Mahapatra

Abstract:

Scale-up is defined as the process of increasing batch size. Scale-up of a process can also be viewed as a procedure for applying the same process to different output volumes. There is a subtle difference between these two definitions: batch size enlargement does not always translate into a size increase of the processing volume. In mixing applications, scale-up is indeed concerned with increasing the linear dimensions from the laboratory to the plant size. On the other hand, processes exist (e.g., tableting) where the term ‘scale-up’ simply means enlarging the output by increasing the speed. To complete the picture, one should point out special procedures where an increase of the scale is counterproductive and ‘scale-down’ is required to improve the quality of the product. In moving from Research and Development (R&D) to production scale, it is sometimes essential to have an intermediate batch scale. This is achieved at the so-called pilot scale, which is defined as the manufacture of drug product by a procedure fully representative of and simulating that used for full manufacturing scale. This scale also makes it possible to produce enough product for clinical testing and to manufacture samples for marketing. However, inserting an intermediate step between R&D and production scales does not, in itself, guarantee a smooth transition. A well-defined process may generate a perfect product both in the laboratory and the pilot plant and then fail quality assurance tests in production.

Keywords: scale up, research, size, batch

Procedia PDF Downloads 373
265 Training Burnout and Leisure Participation of Athletes in College

Authors: An-Hsu Chen

Abstract:

The study intends to explore how athletic training (12 hours per day, four days per week) impacts athlete burnout and leisure participation. The connection between athlete burnout and leisure participation of collegiate athletes is also discussed. Athlete burnout and leisure participation questionnaires were administered, and 186 valid responses were collected. The data were analyzed with descriptive statistics, t-test, one-way ANOVA, and the Pearson product-moment correlation coefficient. Results suggest that athlete burnout among collegiate athletes with different specialties is significantly distinct. Participants who train more days per week are more likely to participate in entertainment activities, while those who have higher training hours per day tend to avoid knowledge-based activities. The research also finds that there is a significant positive correlation between athlete burnout and leisure participation of collegiate athletes, while sport devaluation is negatively correlated with sport activities in leisure participation. Hence, adjusting and well-arranging training quality and quantity may help to avoid over-training. Away training, unloading of training volumes, and group leisure activities should be arranged properly to allow athletes to cope with the burnout and stress caused by long-term training and periodic competitions.

Keywords: emotional and physical exhaustion, leisure activities, sport devaluation, training hours

Procedia PDF Downloads 300
264 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading

Authors: Peter Shi

Abstract:

Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and the statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of the conditional expectation. The theory consequently leads to a mathematical justification of the old market adage: buy-low and sell-high. The thresholds for “low” and “high” are precisely derived using a max-min operation on Bayes’s error. This explicit connection harmonizes the Efficient Market Hypothesis and Statistical Arbitrage, demonstrating their compatibility in explaining market dynamics. The amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the “buy-low” and “sell-high” algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.
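
The buy-low/sell-high rule can be illustrated with the toy backtest below, which proxies the rational price with a moving average and uses empirical quantiles of the deviation as the "low" and "high" thresholds. This is a hedged illustration of the adage only; the paper derives its thresholds from a max-min operation on Bayes' error, which is not reproduced here.

```python
# Hypothetical toy backtest of buy-low / sell-high on a simulated price path.
# The rational price is proxied by a moving average and the "low"/"high"
# thresholds are empirical quantiles of the deviation; the paper's thresholds
# come from a max-min operation on Bayes' error, which is not done here.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
returns = rng.normal(0.0002, 0.01, n)
price = 100 * np.exp(np.cumsum(returns))            # simulated market price

window = 50
rational = np.convolve(price, np.ones(window) / window, mode="same")
deviation = price - rational
low, high = np.quantile(deviation, [0.2, 0.8])      # assumed thresholds

cash, shares = 1_000.0, 0.0
for p, d in zip(price[window:-window], deviation[window:-window]):
    if d < low and cash > 0:        # price well below rational price: buy
        shares, cash = cash / p, 0.0
    elif d > high and shares > 0:   # price well above rational price: sell
        cash, shares = shares * p, 0.0

final = cash + shares * price[-1]
buy_and_hold = 1_000.0 * price[-1] / price[window]
print(f"buy-low/sell-high: {final:.2f}  buy-and-hold: {buy_and_hold:.2f}")
```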

Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market

Procedia PDF Downloads 39
263 Achieving Shear Wave Elastography by a Three-element Probe for Wearable Human-machine Interface

Authors: Jipeng Yan, Xingchen Yang, Xiaowei Zhou, Mengxing Tang, Honghai Liu

Abstract:

Shear elastic modulus of skeletal muscles can be obtained by shear wave elastography (SWE) and has been shown to be linearly related to muscle force. However, SWE is currently implemented using array probes. The price and bulk of these probes and their driving equipment prevent SWE from being used in wearable human-machine interfaces (HMIs). Moreover, the beamforming processing required for array probes reduces real-time performance. To achieve SWE with wearable HMIs, a customized three-element probe is adopted in this work, with one element for acoustic radiation force generation and the others for shear wave tracking. In-phase quadrature demodulation and 2D autocorrelation are adopted to estimate the velocities of tissues along the sound beams of the latter two elements. Shear wave speeds are calculated from the phase shift between the tissue velocities. Three agar phantoms with different elasticities were made by changing the weight of agar. The shear elastic moduli of the phantoms were measured as 8.98, 23.06 and 36.74 kPa at a depth of 7.5 mm, respectively. This work verifies the feasibility of measuring shear elastic modulus with wearable devices.
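
The core calculation behind the three-element approach can be written in a few lines: the shear wave speed follows from the arrival-time (phase) shift of the wave between the two tracking beams, and the shear elastic modulus follows from mu = rho * c^2. The beam spacing, time shift and density values below are illustrative assumptions, not the phantom measurements.

```python
# Hypothetical sketch of the core SWE calculation: shear wave speed from the
# time (phase) shift of the wave between two tracking beams, then shear
# elastic modulus mu = rho * c^2. The beam spacing, measured delay and tissue
# density below are illustrative values, not the phantom measurements.
beam_spacing_m = 5.0e-3      # distance between the two tracking beams (assumed)
time_shift_s = 1.5e-3        # arrival-time shift between the beams (assumed)
density_kg_m3 = 1000.0       # soft-tissue / agar density (assumed)

shear_wave_speed = beam_spacing_m / time_shift_s             # m/s
shear_modulus_pa = density_kg_m3 * shear_wave_speed ** 2     # mu = rho * c^2

print(f"shear wave speed: {shear_wave_speed:.2f} m/s")
print(f"shear elastic modulus: {shear_modulus_pa / 1e3:.2f} kPa")
```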

Keywords: shear elastic modulus, skeletal muscle, ultrasound, wearable human-machine interface

Procedia PDF Downloads 114
262 Integrated Flavor Sensor Using Microbead Array

Authors: Ziba Omidi, Min-Ki Kim

Abstract:

This research presents the design, fabrication and application of a flavor sensor for an integrated electronic tongue and electronic nose that allows rapid characterization of multi-component mixtures in solution. The odor gas and liquid are separated using a hydrophobic porous membrane in a microfluidic channel. The sensor uses an array composed of microbeads in micromachined cavities localized on a silicon wafer. Sensing occurs via colorimetric and fluorescence changes to receptors and indicator molecules that are attached to termination sites on the polymeric microbeads. As a result, the sensor array system enables simultaneous and near-real-time analyses using small samples and reagent volumes, with the capacity to incorporate significant redundancies. One of the key parts of the system is a passive pump driven only by capillary force. The hydrophilic surface of the fluidic structure draws the sample into the sensor array without any moving mechanical parts. Since there is no moving mechanical component in the structure, the fluidic structure can be compact, and fabrication becomes simple when compared to devices that include active microfluidic components. These factors should make the proposed system inexpensive to mass-produce, portable and compatible with biomedical applications.

Keywords: optical sensor, semiconductor manufacturing, smell sensor, taste sensor

Procedia PDF Downloads 407
261 Economic Evaluation of Bowland Shale Gas Wells Development in the UK

Authors: Elijah Acquah-Andoh

Abstract:

The UK has had its fair share of the shale gas revolutionary wave currently blowing across the global oil and gas industry. Although its exploitation is widely agreed to have been delayed, shale gas was looked upon favorably by the UK Parliament when it recognized it as a genuine energy source and granted licenses to industry to search for and extract the resource. Although this is significant progress by industry, there remains another test the UK fracking resource must pass in order to render shale gas extraction feasible: it must be economically extractible, and sustainably so. Developing unconventional resources is much more expensive and risky, and for shale gas wells, producing in commercial volumes is conditional upon drilling horizontal wells and hydraulic fracturing, techniques which increase CAPEX. Meanwhile, investment in shale gas development projects is sensitive to gas price and to technical and geological risks. Using a Two-Factor Model, the economics of the Bowland shale wells were analyzed and the operational conditions under which fracking is profitable in the UK were characterized. We find that there is a great degree of flexibility in OPEX spending; hence OPEX does not pose much threat to the fracking industry in the UK. However, we find that Bowland shale gas wells fail to add value at a gas price of $8/MMBtu. A minimum gas price of $12/MMBtu, an OPEX of no more than $2/Mcf, and a CAPEX of no more than $14.95M are required to create value within the present petroleum tax regime in the UK fracking industry.
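
The kind of break-even reasoning summarised above can be sketched with a simple discounted-cash-flow calculation: given an assumed production profile, a single well adds value only when the gas price covers CAPEX, OPEX and the fiscal take. All figures below (production rate, decline, discount rate, tax rate, Mcf-to-MMBtu conversion) are illustrative assumptions, not the paper's Two-Factor Model inputs.

```python
# Hypothetical NPV sketch for a single shale gas well: revenue minus OPEX and
# tax, discounted over the well life, minus up-front CAPEX. The production
# profile, discount rate and tax rate are illustrative assumptions and do not
# reproduce the paper's Two-Factor Model.
def well_npv(gas_price_per_mmbtu, capex_million, opex_per_mcf,
             initial_rate_mcf_per_year=1.2e6, decline=0.35,
             years=15, discount_rate=0.10, tax_rate=0.40):
    npv = -capex_million * 1e6
    rate = initial_rate_mcf_per_year
    for year in range(1, years + 1):
        revenue = rate * gas_price_per_mmbtu      # ~1 MMBtu per Mcf assumed
        cash_flow = (revenue - rate * opex_per_mcf) * (1 - tax_rate)
        npv += cash_flow / (1 + discount_rate) ** year
        rate *= (1 - decline)                     # simple annual decline proxy
    return npv / 1e6                              # report in $ million

for price in (8, 10, 12):
    print(f"gas price ${price}/MMBtu -> NPV "
          f"{well_npv(price, capex_million=14.95, opex_per_mcf=2.0):.1f} $M")
```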

Keywords: capex, economical, investment, profitability, shale gas development, sustainable

Procedia PDF Downloads 548
260 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the offshore subsoil of the Kaliningrad region. To carry out a comprehensive and accurate assessment of the volumes and degree of extraction of hydrocarbons from discovered deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, its viscosity, density, and fractional composition as accurately as possible. Within the scope of the work considered, gas chromatography is one of the most productive methods, allowing rapid generation of a significant amount of initial data. The article considers aspects of the application of the gas chromatography method for determining the chemical characteristics of the hydrocarbons of the Kaliningrad shelf fields, as well as a correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets have been applied, which makes it possible to evaluate the similarity of the deposits, to refine the amount of reserves, and to make a number of assumptions about the genesis of the hydrocarbons under analysis.

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 125
259 Revolutionizing RNA Extraction: A Unified, Sustainable, and Rapid Protocol for High-Quality Isolation from Diverse Tissues

Authors: Ying Qi Chan, Chunyu Li, Xu Rou Yoyo Ma, Yaya Li, Saber Khederzadeh

Abstract:

In the ever-evolving landscape of genome extraction protocols, existing methodologies grapple with issues ranging from sub-optimal yields and compromised quality to time-intensive procedures and reliance on hazardous reagents, often necessitating substantial tissue quantities. This predicament is particularly challenging for scientists in developing countries, where resources are limited. Our investigation presents a protocol for the efficient extraction of high-yield RNA from various tissues, such as muscle, insect, and plant samples. Our protocol stands out as the safest, swiftest (completed in just 38 minutes), most cost-effective (coming in at a mere US$0.017), and highly efficient method in comparison to existing protocols. Notably, our method avoids the use of hazardous or toxic chemicals such as chloroform and phenol, and of enzymatic agents like RNase and Proteinase K. Our RNA extraction protocol has demonstrated clear advantages over other methods, including commercial kits, in terms of yield. This nucleic acid extraction protocol is more environmentally and research-friendly, and is suitable for a range of tissues, even in tiny quantities, hence facilitating genetic diagnostics and research across the globe.

Keywords: RNA extraction, rapid protocol, universal method, diverse tissues

Procedia PDF Downloads 38
258 Investigation of the Addition of Macro and Micro Polypropylene Fibers on Mechanical Properties of Concrete Pavement

Authors: Seyed Javad Vaziri Kang Olyaei, Asma Sadat Dabiri, Hassan Fazaeli, Amir Ali Amini

Abstract:

Cracks in concrete pavements are entry points for water and corrosive substances, which can reduce the durability of the concrete in the long term as well as the serviceability of the road. The use of fibers in concrete pavement is one of the effective methods to control and mitigate cracking. This study investigates the effect of the addition of micro and macro polypropylene fibers, in different types and volumes and also in combination, on the mechanical properties of concrete used in concrete pavements, including compressive strength, splitting tensile strength, modulus of rupture, and average residual strength. The fibers included micro-polypropylene, macro-polypropylene, and hybrid micro and macro polypropylene in different percentages. The results showed that macro polypropylene has the most significant effect on improving the mechanical properties of concrete. Also, the hybrid micro and macro polypropylene fibers increase the mechanical properties of concrete further. It was observed from the average residual strength results that macro polypropylene fibers, alone and together with micro polypropylene fibers, can perform excellently in controlling the sudden formation of cracks and their growth after cracking has initiated, which is an essential property in concrete pavements.

Keywords: concrete pavement, mechanical properties, macro polypropylene fibers, micro polypropylene fibers

Procedia PDF Downloads 118
257 Impacts of Climate Change on Water Resources of Greater Zab and Lesser Zab Basins, Iraq, Using Soil and Water Assessment Tool Model

Authors: Nahlah Abbas, Saleh A. Wasimi, Nadhir Al-Ansari

Abstract:

The Greater Zab and Lesser Zab are the major tributaries of the Tigris River, contributing the largest flow volumes into the river. The impacts of climate change on water resources in these basins have not been well addressed. To gain a better understanding of the effects of climate change on the water resources of the study area in the near future (2049-2069) as well as in the distant future (2080-2099), the Soil and Water Assessment Tool (SWAT) was applied. The model was first calibrated for the period from 1979 to 2004 to test its suitability for describing the hydrological processes in the basins. The SWAT model showed good performance in simulating streamflow. The calibrated model was then used to evaluate the impacts of climate change on water resources. Six general circulation models (GCMs) from phase five of the Coupled Model Intercomparison Project (CMIP5) under three Representative Concentration Pathways (RCPs), RCP 2.6, RCP 4.5, and RCP 8.5, for the periods 2049-2069 and 2080-2099 were used to project the climate change impacts on these basins. The results demonstrated a significant decline in water resources availability in the future.

Keywords: Tigris River, climate change, water resources, SWAT

Procedia PDF Downloads 167
256 The Effect of Iron Deficiency on the Magnetic Properties of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ M-Type Hexaferrites

Authors: Kang-Hyuk Lee, Wei Yan, Sang-Im Yoo

Abstract:

Recently, Ca₁₋ₓLaₓFe₁₂O₁₉ (Ca-La M-type) hexaferrites have been reported to possess higher crystalline anisotropy compared with SrFe₁₂O₁₉ (Sr M-type) hexaferrite without a reduction of its saturation magnetization (Ms), resulting in higher coercivity (Hc). While iron deficiency is known to be helpful for the growth and formation of NiZn spinel ferrites, the effect of iron deficiency in Ca-La M-type hexaferrites has never been reported. In this study, therefore, we investigated the effect of iron deficiency on the magnetic properties of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ hexaferrites prepared by solid-state reaction. As-calcined powder was pressed into pellets and sintered at 1275~1325℃ for 4 h in air. Samples were characterized by powder X-ray diffraction (XRD), vibrating sample magnetometry (VSM), and scanning electron microscopy (SEM). Powder XRD analyses revealed that Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ (0.75 ≦ y ≦ 2.15) ferrites calcined at 1250-1300℃ for 12 h in air were composed of a single phase without secondary phases. With increasing iron deficiency, y, the lattice parameters a and c and the unit cell volume first decreased, up to y = 10.25, and then increased again. The highest Ms value of 77.5 emu/g was obtained for the sample of Ca₀.₅La₀.₅Fe₁₂₋yO₁₉₋δ sintered at 1300℃ for 4 h in air. Detailed microstructures and magnetic properties of Ca-La M-type hexagonal ferrites will be presented for discussion.

Keywords: Ca-La M-type hexaferrite, magnetic properties, iron deficiency, hexaferrite

Procedia PDF Downloads 425