Search results for: energy efficiency in Kazakhstan
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12873


873 Inertia Friction Pull Plug Welding, a New Weld Repair Technique of Aluminium Friction Stir Welding

Authors: Guoqing Wang, Yanhua Zhao, Lina Zhang, Jingbin Bai, Ruican Zhu

Abstract:

Friction stir welding (FSW) with a bobbin tool is a simple technique compared to conventional FSW, since the backing fixture is no longer needed and assembly labor is reduced. As a result, it is increasingly adopted in the aerospace industry. However, a post-weld problem, the keyhole left at the end of the weld, has to be fixed by forced repair welding. To close the keyhole, conventional fusion repair could be an option if the joint properties are not deteriorated; friction push plug welding, a forced repair, could be another, except that a rigid support unit is demanded at the back of the weldment. Therefore, neither of the above methods is satisfactory for welding a large enclosed structure, such as a rocket propellant tank. Although friction pull plug welding does not need a backing plate, its wide application is still held back by several disadvantages: inappropriate tensile stress (excessive stress causes neck shrinkage of the plug, bringing about back defects, while insufficient stress causes a lack of heat input, bringing about face defects), complicated welding parameters (including rotation speed, traverse speed, friction force, welding pressure, and upset), short welding time (approx. 0.5 s), narrow process windows, and poor process stability. In this research, an updated technique called inertia friction pull plug welding, along with its equipment, was developed. The influence of technological parameters on the joint properties of inertia friction pull plug welding was investigated, and the microstructure characteristics were analyzed. Based on the elementary performance data acquired, it is concluded that the uniform energy provided by an inertia flywheel guarantees a stable welding process. Meanwhile, because the backing plate is eliminated, inertia friction pull plug welding is considered a promising technique for repairing keyholes of bobbin tool FSW and point-type defects of aluminium base material.
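
In inertia friction welding, the weld energy is fixed by the flywheel's stored kinetic energy rather than by a continuously driven spindle, which is why the abstract credits the flywheel with providing a uniform energy input. A minimal sketch of that stored-energy calculation; the inertia and speed values are hypothetical, not taken from the paper:

```python
import math

def flywheel_energy(moment_of_inertia, rpm):
    """Kinetic energy (J) stored in a flywheel: E = 0.5 * I * omega^2."""
    omega = 2.0 * math.pi * rpm / 60.0  # angular speed in rad/s
    return 0.5 * moment_of_inertia * omega ** 2

# Hypothetical values, not from the paper: a 0.05 kg*m^2 flywheel
# spun to 6000 rpm stores roughly 9.9 kJ.
energy_joules = flywheel_energy(0.05, 6000)
```

For a given plug geometry, sizing the flywheel then reduces to matching this fixed energy budget to the heat input required at the plug interface.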

Keywords: defect repairing, equipment, inertia friction pull plug welding, technological parameters

Procedia PDF Downloads 293
872 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental dyscalculia (DD) is defined as a specific learning difficulty involving persistent challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obfuscate critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, completing two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD, scoring at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III), with both subtest scores at 90 or below. The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case's brain network indicators and brain activation against the 45 TD children's average data.
Results showed that three of the nine DD children displayed significant deviations from the TD children's brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. In summary, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current "brain norm" established from the 45 children is tentative, the results of this study provide insights not only for future work toward a "developmental brain norm" with reliable brain indicators but also into the viability of the single-case methodology, which could be used to detect differential brain indicators in DD children for early detection and intervention.
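
The Crawford et al. single-case comparison used above is a modified t-test that treats the control sample's mean and standard deviation as estimates rather than population values. A minimal sketch in Python, assuming the indicator being compared (e.g. nodal efficiency) is a single scalar per child:

```python
import math
from statistics import mean, stdev

def crawford_howell_t(case_score, control_scores):
    """Crawford-Howell t statistic for comparing one case against a
    small control sample; degrees of freedom = n - 1."""
    n = len(control_scores)
    m = mean(control_scores)
    s = stdev(control_scores)  # sample standard deviation
    # The (n + 1) / n factor accounts for sampling error in the
    # control statistics, unlike a plain z-score.
    return (case_score - m) / (s * math.sqrt((n + 1) / n))
```

With n = 45 controls, the resulting t is referred to a t distribution with 44 degrees of freedom to decide whether a case deviates significantly from the TD group.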

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 39
871 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there remain many geometries to be tried that cannot be fitted into predefined computational methods. Antenna design and optimization are well suited to an evolutionary algorithmic approach, since the antenna parameters depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: a set of candidate solutions, elements of the function's domain, is created randomly, and the quality function is applied as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and mutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. Therefore, weight coefficients are obtained for all possible antenna electrical parameters and geometries, and their variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariance coefficients of the corresponding parameters are logged as datasets for learning and future use. This paper drafts an approach to obtaining the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc., are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulations. HFSS was chosen for simulations and results.
MATLAB was used to generate the computations and combinations, to log the data, and to apply machine learning algorithms and plot the data for designing the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB Parallel Computing Toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, MATLAB was used to calculate Vivaldi antenna parameters such as slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric constant; K-means clustering and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach to automated antenna optimization for the Vivaldi antenna.
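
The selection-recombination-mutation loop described above can be sketched in a few lines. The sketch below is a generic real-valued genetic algorithm with a toy quadratic fitness, not the paper's MATLAB/HFSS pipeline; in the paper's setting the fitness score would instead come from an HFSS simulation of each candidate geometry.

```python
import random

def evolve(fitness, init_pop, generations=100, mut_sigma=0.1):
    """Minimal real-valued genetic algorithm: truncation selection,
    one-point crossover and Gaussian mutation on parameter vectors
    (each at least 2 genes long)."""
    pop = list(init_pop)
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: len(pop) // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < len(pop):
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, len(p1))   # one-point crossover
            child = [g + random.gauss(0.0, mut_sigma)  # Gaussian mutation
                     for g in p1[:cut] + p2[cut:]]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy fitness with a peak at (1, 2); a real run would score antenna
# gain/directivity returned by the EM solver instead.
random.seed(0)
start = [[random.uniform(-5.0, 5.0) for _ in range(2)] for _ in range(30)]
best = evolve(lambda v: -(v[0] - 1.0) ** 2 - (v[1] - 2.0) ** 2, start)
```

Because the elite half is carried over unchanged, the best fitness in the population never decreases between generations.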

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 94
870 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features

Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh

Abstract:

In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) signal is relatively simple to record, it is a good tool for showing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, it is possible to identify a person's heart condition in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 so that researchers could develop the best method for detecting normal signals from abnormal ones. The data include both genders, the recording time varies from several seconds to several minutes, and every record is labeled normal or abnormal. Due to the limited temporal accuracy of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating different types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation by an adaptive Kalman filter and extraction of the R wave by the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea was presented: in addition to using the statistical characteristics of the signal, a return map was created and nonlinear characteristics of the HRV signal were extracted, owing to the nonlinear nature of the signal. Finally, artificial neural networks, widely used in the field of ECG signal processing, together with the distinctive features, were used to classify the normal signals from the abnormal ones.
To evaluate the efficiency of the classifiers proposed in this paper, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM was 0.893 and 0.947, respectively. The results of the proposed algorithm also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals yielded better performance. Today, research is aimed at quantitatively analyzing the linear and non-linear, or deterministic and random, nature of the heart rate variability signal, because it has been shown that the amount of these properties can be used to indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, and that some information in this signal is hidden from the viewpoint of physicians, the intelligent system proposed in this paper can help physicians diagnose healthy and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
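
Once the R-R intervals have been extracted, the linear (statistical) part of the feature set reduces to simple arithmetic on the interval series. A sketch of three standard time-domain HRV features; the paper's full feature set (return-map and other nonlinear measures) is not reproduced here:

```python
import math

def hrv_features(rr_intervals_ms):
    """Basic time-domain HRV features from R-R intervals in ms:
    mean RR, SDNN (overall variability) and RMSSD (beat-to-beat
    variability)."""
    n = len(rr_intervals_ms)
    mean_rr = sum(rr_intervals_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_intervals_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    rmssd = math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}

# Illustrative intervals (ms), not PhysioNet data.
features = hrv_features([800, 810, 790, 805])
```

Feature vectors like this, extended with the nonlinear measures, are what the MLP and SVM classifiers would consume.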

Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve

Procedia PDF Downloads 239
869 Development of a Sprayable Piezoelectric Material for E-Textile Applications

Authors: K. Yang, Y. Wei, M. Zhang, S. Yong, R. Torah, J. Tudor, S. Beeby

Abstract:

E-textiles are traditional textiles with integrated electronic functionality. They are an emerging innovation with numerous applications in fashion, wearable computing, health and safety monitoring, and the military and medical sectors. The piezoelectric effect is a widespread and versatile transduction mechanism used in sensor and actuator applications. Piezoelectric materials produce electric charge when stressed; conversely, mechanical deformation occurs when an electric field is applied across the material. Lead zirconate titanate (PZT) is a widely used piezoceramic material which has been used to fabricate e-textiles through screen printing, electrospinning and hydrothermal synthesis. This paper explores an alternative fabrication process: spray coating. Spray coating is a straightforward and cost-effective fabrication method applicable on both flat and curved surfaces. It can also be applied selectively by spraying through a stencil, which enables the required design to be realised on the substrate. This work developed a sprayable PZT-based piezoelectric ink consisting of a binder (Fabink-Binder-01), PZT powder (80 % 2 µm and 20 % 0.8 µm) and acetone as a thinner. The optimised weight ratio of PZT to binder is 10:1. The components were mixed using a SpeedMixer DAC 150. The fabrication process is as follows: 1) Screen print a UV-curable polyurethane interface layer on the textile to create a smooth textile surface. 2) Spray one layer of a conductive silver polymer ink through a pre-designed stencil and dry at 90 °C for 10 minutes to form the bottom electrode. 3) Spray three layers of the PZT ink through a pre-designed stencil, drying at 90 °C for 10 minutes after each layer, to form a PZT layer with a total thickness of ~250 µm. 4) Spray one layer of the silver ink through a pre-designed stencil on top of the PZT layer and dry at 90 °C for 10 minutes to form the top electrode.
The domains of the PZT elements were aligned by polarising the material at an elevated temperature under a strong electric field. A d33 of 37 pC/N has been achieved after polarising at 90 °C for 6 minutes with an electric field of 3 MV/m. The application of the piezoelectric textile was demonstrated by fabricating a pressure sensor to switch an LED on/off. Other potential applications on e-textiles include motion sensing, energy harvesting, force sensing and a buzzer.
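
As a rough sanity check on the pressure-sensor application, the charge generated by a normal force on the printed element follows the longitudinal piezoelectric relation Q = d33 · F. The sketch below uses the d33 of 37 pC/N reported above; the 5 N force is a hypothetical fingertip press, not a figure from the paper.

```python
def piezo_charge_pC(d33_pC_per_N, force_N):
    """Charge (pC) generated by a piezoelectric element under a normal
    force, via the longitudinal relation Q = d33 * F."""
    return d33_pC_per_N * force_N

# Printed element from the abstract: d33 = 37 pC/N.
# Hypothetical 5 N press -> 185 pC of generated charge.
charge = piezo_charge_pC(37, 5)
```

Whether that charge is enough to drive the LED switching circuit then depends on the read-out electronics and the element's capacitance.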

Keywords: piezoelectric, PZT, spray coating, pressure sensor, e-textile

Procedia PDF Downloads 449
868 Socio Economic Deprivation, Institutional Outlay and the Intent of Mobile Snatching and Street Assaults in Pakistan

Authors: Asad Salahuddin

Abstract:

Crime rates appear to have risen sharply over the past several years in Pakistan, which has perpetuated concerns as to what, when and how this upsurge will be eradicated. State institutions appear to be in utmost perplexity, given the enormity of the worsening law and order situation, compelling the government to expend more resources on strengthening institutions to confront crime. Meanwhile, the economy has been confronted with a massive energy crisis, mass unemployment and considerable inflation, which has left most people in acute apprehension as to how to satisfy basic necessities. A framework to investigate the variability in rising street crimes, as affected by social and institutional outcomes, was established using a cross-sectional study. A questionnaire entailing 7 sections, incorporating numerous patterns of behavior and history of involvement in different crimes for potential street criminals, was used as the data collection instrument. In order to specifically explicate the intent of street crimes at the micro level, various motivational and de-motivational factors that stimulate people to resort to street crimes were scrutinized. Intent of mobile snatching and intent of street assault, as the dependent variables, were examined using numerous variables that influence the occurrence and intent of these crimes, with ordered probit alongside ordered logit and tobit as competing models. Model estimates indicate that the intent of mobile snatching has been significantly enhanced owing to perceived judicial inefficiency and the lower ability of the police force to operate effectively, which signifies the inefficiency of the institutions entitled to deliver justice and maintain law and order, respectively. The intent of street assaults, as an outcome, affirms that people with a lack of self-stability and severe childhood punishments were more tempted to be involved in violent acts.
Hence, it is imperative for the government to provide better resources in the form of training, equipment and improved salaries to the police and judiciary in order to enhance their ability and potential to curb escalating crime.
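
The ordered probit model named above maps a latent intent index into ordered response categories through estimated cutpoints. A minimal sketch of the category probabilities, assuming a single linear index x'b and standard normal errors (the numbers below are illustrative, not estimates from the paper):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordered_probit_probs(xb, cutpoints):
    """Category probabilities in an ordered probit model: for latent
    y* = x'b + e with e ~ N(0, 1) and cutpoints c1 < ... < c_{K-1},
    P(y = k) = Phi(c_k - x'b) - Phi(c_{k-1} - x'b)."""
    bounds = [float("-inf")] + list(cutpoints) + [float("inf")]
    probs = []
    for lo, hi in zip(bounds, bounds[1:]):
        hi_cdf = 1.0 if hi == float("inf") else norm_cdf(hi - xb)
        lo_cdf = 0.0 if lo == float("-inf") else norm_cdf(lo - xb)
        probs.append(hi_cdf - lo_cdf)
    return probs

# Illustrative: 3 intent categories with cutpoints at -1 and 1.
p = ordered_probit_probs(0.0, [-1.0, 1.0])
```

Estimation then maximizes the log-likelihood of these probabilities over b and the cutpoints; the ordered logit variant simply replaces the normal CDF with the logistic one.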

Keywords: deprivation, street assault, self control, police reform

Procedia PDF Downloads 409
867 Assessing the Effect of Urban Growth on Land Surface Temperature: A Case Study of Conakry Guinea

Authors: Arafan Traore, Teiji Watanabe

Abstract:

Conakry, the capital city of the Republic of Guinea, has experienced rapid urban expansion and population increase in the last two decades, which has resulted in remarkable local weather and climate change, raised energy demand and pollution, and threatened social, economic and environmental development. In this study, the spatiotemporal variation of the land surface temperature (LST) is retrieved to characterize the effect of urban growth on the thermal environment and to quantify its relationship with two biophysical indices, the normalized difference vegetation index (NDVI) and the normalized difference built-up index (NDBI). Landsat TM and OLI/TIRS data acquired in 1986, 2000 and 2016, respectively, were used for LST retrieval and land use/cover change analysis. A quantitative analysis based on the integration of remote sensing and a geographic information system (GIS) revealed an important increase in the average LST, from 25.21 °C in 1986 to 27.06 °C in 2000 and 29.34 °C in 2016: an average gain in surface temperature of 4.13 °C over the 30-year study period. Additionally, a Pearson correlation (r) analysis between LST and the biophysical indices revealed a negative relationship between LST and NDVI and a strong positive relationship between LST and NDBI. This implies that an increase in the NDVI value can reduce LST intensity, while, conversely, an increase in the NDBI value may strengthen LST intensity in the study area. Although Landsat data were found efficient for assessing the thermal environment in Conakry, the method needs to be refined with in situ measurements of LST in future studies. The results of this study may assist urban planners, scientists and policy makers concerned about climate variability in making decisions that will enhance sustainable environmental practices in Conakry.
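
The two indices and the correlation used above are simple band arithmetic on the Landsat rasters. A sketch with NumPy, assuming reflectance arrays for the red, NIR and SWIR bands (the band numbers carrying these wavelengths differ between the TM and OLI sensors):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red bands."""
    return (nir - red) / (nir + red)

def ndbi(swir, nir):
    """Normalized difference built-up index from SWIR and NIR bands."""
    return (swir - nir) / (swir + nir)

def pearson_r(a, b):
    """Pearson correlation coefficient between two flattened rasters,
    e.g. an LST image against its NDVI or NDBI image."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])
```

A negative pearson_r(lst, ndvi_img) together with a positive pearson_r(lst, ndbi_img) is exactly the pattern the abstract reports.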

Keywords: Conakry, land surface temperature, urban heat island, geographic information system, remote sensing, land use/cover change

Procedia PDF Downloads 222
866 Averting a Financial Crisis through Regulation, Including Legislation

Authors: Maria Krambia-Kapardis, Andreas Kapardis

Abstract:

The paper discusses regulatory and legislative measures implemented by various nations in an effort to avert another financial crisis. More specifically, to address the financial crisis, the European Commission followed the practice of other developed countries and implemented a European Economic Recovery Plan in an attempt to overhaul the regulatory and supervisory framework of the financial sector. In 2010 the Commission introduced the European Systemic Risk Board and in 2011 the European System of Financial Supervision. Some experts have advocated that the type and extent of financial regulation introduced in Europe in the wake of the 2008 crisis has been excessive and counterproductive. In considering how different countries responded to the financial crisis, global regulators have shown a more focused commitment to combating industry misconduct and pre-empting abusive behavior. Regulators have also increased the funding and resources at their disposal; have increased regulatory fines, with an increasing trend towards action against individuals; and, finally, have focused on market abuse and market conduct issues. Financial regulation can be effected, first of all, through legislation. However, neither ex ante nor ex post regulation is by itself effective in reducing systemic risk. Consequently, to avert a financial crisis, in their endeavor to achieve both economic efficiency and financial stability, governments need to balance the two approaches to financial regulation. Fiduciary duty is another means by which the behavior of actors in the financial world is constrained and, thus, regulated. Furthermore, fiduciary duties extend over and above other existing requirements set out by statute and/or common law and cover allegations of breach of fiduciary duty, negligence or fraud. Careful analysis of the etiology of the 2008 financial crisis demonstrates the great importance of corporate governance as a way of regulating boardroom behavior.
In addition, the regulation of professions, including accountants and auditors, plays a crucial role as far as the financial management of companies is concerned. In the US, the Sarbanes-Oxley Act of 2002 established the Public Company Accounting Oversight Board in order to protect investors from financial accounting fraud. In most countries around the world, however, accounting regulation consists of a legal framework, international standards, education, and licensure. Accounting regulation is necessary because of the information asymmetry and the conflict of interest that exist between managers and users of financial information. If a holistic approach is to be taken, then one cannot ignore the regulation of legislators themselves, which can take the form of hard or soft legislation. The science of averting a financial crisis is yet to be perfected and, as the preceding discussion shows, this is unlikely to be achieved in the foreseeable future, as 'disaster myopia' may be reduced but will not be eliminated. It is easier, of course, to be wise in hindsight, and regulating unreasonably risky decisions and unethical or outright criminal behavior in the financial world remain major challenges for governments, corporations, and professions alike.

Keywords: financial crisis, legislation, regulation, financial regulation

Procedia PDF Downloads 374
865 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate over time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and distress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'PAVER' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially artificial neural networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in predicting and solving non-linear relationships and dealing with large amounts of uncertain data.
Typical regression models, which require a pre-defined relationship, can be replaced by an ANN, which has been found to be an appropriate tool for predicting the different pavement performance indices from different factors. Accordingly, the objective of the present study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different distress types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, each at three severity levels (low, medium, high). The developed model was trained on 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by the National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
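
The input layout described above (11 distress types × 3 severity levels = 33 percentage-area features) maps naturally onto a small dense network. The sketch below shows only the forward pass with randomly initialized weights, to illustrate the architecture; the layer sizes and the sigmoid output scaled to the 0-100 PCI range are assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    """Forward pass of a small dense network: ReLU hidden layers and
    a sigmoid output scaled into the 0-100 PCI range."""
    a = x
    for w, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(0.0, a @ w + b)       # ReLU hidden activation
    z = a @ weights[-1] + biases[-1]
    return 100.0 / (1.0 + np.exp(-z))        # squash into (0, 100)

# 33 inputs: 11 distress types x 3 severity levels (percent areas).
dims = [33, 64, 32, 1]                       # assumed layer sizes
weights = [rng.normal(0.0, 0.1, (i, o)) for i, o in zip(dims, dims[1:])]
biases = [np.zeros(o) for o in dims[1:]]
pci = mlp_forward(rng.random(33), weights, biases)
```

Training would then fit the weights to the 536,000 PAVER-computed PCI samples by minimizing, e.g., mean squared error.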

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 120
864 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase "hydrogen safety" was used mostly in the context of NPP safety. With the rise of interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, the industrial production of hydrogen is meant to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous by itself than ignition of the combustible mixture in the presence of numerous pipelines, reactor vessels, and fitting frames. Even the ignition of 2100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures that are much lower than those under the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, space blockage, significant changes of channel diameter along the path of flame propagation, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative in character. Yet, conducting deflagration and detonation experiments for each specific industrial facility project, in order to determine safe placement of infrastructure units, does not seem feasible due to the high cost and hazard involved, while numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising.
The basis for this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work presents the results of the development process, together with a comparison of numerical simulation results against experimental series on flame propagation in shock tubes with orifice plates.
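
As a much-reduced illustration of the shock-capturing machinery such a method needs, the sketch below advances the 1D Euler equations on a standard Sod shock tube with the classic Lax-Friedrichs scheme. This is a generic textbook scheme with no chemistry and no complex geometry, not the authors' modified Kuropatenko method.

```python
import numpy as np

def sod_lax_friedrichs(n=200, t_end=0.1, gamma=1.4, cfl=0.4):
    """1D Euler equations on a Sod shock tube, advanced with the
    Lax-Friedrichs scheme; U = (rho, rho*u, E) per cell."""
    x = np.linspace(0.0, 1.0, n)
    dx = x[1] - x[0]
    rho = np.where(x < 0.5, 1.0, 0.125)      # Sod initial data
    u = np.zeros(n)
    p = np.where(x < 0.5, 1.0, 0.1)
    E = p / (gamma - 1.0) + 0.5 * rho * u ** 2
    U = np.stack([rho, rho * u, E])

    def flux(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
        return np.stack([mom, mom * u + p, (E + p) * u])

    t = 0.0
    while t < t_end:
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u ** 2)
        c = np.sqrt(gamma * p / rho)
        dt = cfl * dx / np.max(np.abs(u) + c)   # CFL-limited time step
        F = flux(U)
        # Lax-Friedrichs update on interior cells; boundary cells are
        # held at their initial (far-field) values.
        U_new = U.copy()
        U_new[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) \
            - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
        U = U_new
        t += dt
    return x, U

x, U = sod_lax_friedrichs()
```

Production shock-capturing schemes replace this heavily diffusive flux with sharper ones, but the structure (conserved variables, flux evaluation, CFL-limited explicit update) is the same.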

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 70
863 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon

Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi

Abstract:

The radial form of the nuclear matter distribution, the charge distribution, and the shape of nuclei are essential properties of nuclei and hence attract great attention in several areas of nuclear physics research. More than three decades have witnessed a range of experimental means employing leptonic probes (such as muons, electrons, etc.) for exploring nuclear charge distributions, whereas hadronic probes (for example, alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density, and with the nucleon density calculated in the relativistic mean-field (RMF) approach, reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-Oscillator densities at all energies under consideration. In the approach described here, no free/adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data.
Calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported by earlier works, though so far no experimental studies have been performed to measure them.
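
The matter radius quoted above is the root-mean-square radius of the assumed density: for any spherically symmetric ρ(r), ⟨r²⟩ = ∫ r⁴ρ(r) dr / ∫ r²ρ(r) dr. A numerical sketch using a Gaussian test density (whose rms radius is analytically a·√3), not the BGP form itself:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule, kept explicit for NumPy-version independence."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def rms_radius(density, r_max=20.0, n=4000):
    """RMS radius of a spherically symmetric density rho(r):
    <r^2> = int r^4 rho dr / int r^2 rho dr."""
    r = np.linspace(1e-6, r_max, n)
    rho = density(r)
    return float(np.sqrt(_trapz(r ** 4 * rho, r) / _trapz(r ** 2 * rho, r)))

# Gaussian test density exp(-r^2 / (2 a^2)) with a = 2 fm has an
# analytic rms radius of a * sqrt(3) ≈ 3.464 fm.
radius = rms_radius(lambda r: np.exp(-r ** 2 / 8.0))
```

Plugging in the BGP or RMF density in place of the Gaussian reproduces the matter radii discussed in the abstract.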

Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation

Procedia PDF Downloads 142
862 Production of High Purity Cellulose Products from Sawdust Waste Material

Authors: Simiksha Balkissoon, Jerome Andrew, Bruce Sithole

Abstract:

Approximately half of the wood processed in the Forestry, Timber, Pulp and Paper (FTPP) sector is accumulated as waste. The concept of a “green economy” encourages industries to employ revolutionary, transformative technologies to eliminate waste generation by exploring the development of new value chains. The transition towards an almost paperless world, driven by the rise of digital media, has resulted in a decline in traditional paper markets, prompting the FTPP sector to reposition itself and expand its product offerings by unlocking the potential of value-adding opportunities from renewable resources such as wood, in order to generate revenue and mitigate its environmental impact. The production of valuable products from wood waste such as sawdust has been extensively explored in recent years. Wood components such as lignin, cellulose and hemicelluloses, which can be extracted selectively by chemical processing, are suitable candidates for producing numerous high-value products. In this study, a novel approach to produce high-value cellulose products, such as dissolving wood pulp (DWP), from sawdust was developed. DWP is a high-purity cellulose product used in several applications in the pharmaceutical, textile, food, and paint and coatings industries. The proposed approach demonstrates the potential to eliminate several complex processing stages, such as pulping and bleaching, which are associated with traditional commercial processes for producing high-purity cellulose products such as DWP, making it less chemical-, energy- and water-intensive. The developed process followed a path of experimentally designed lab tests evaluating typical processing conditions such as residence time, chemical concentrations, liquid-to-solid ratios and temperature, followed by the application of suitable purification steps. Characterization of the product from the initial stage was conducted using commercially available DWP grades as reference materials.
The chemical characteristics of the products have so far shown properties similar to those of commercial products, making the proposed process a promising and viable option for the production of DWP from sawdust.

Keywords: biomass, cellulose, chemical treatment, dissolving wood pulp

Procedia PDF Downloads 167
861 An Appraisal of Blended Learning Approach for English Language Teaching in Saudi Arabia

Authors: H. Alqunayeer, S. Zamir

Abstract:

Blended learning, an ideal amalgamation of online learning and the face-to-face traditional approach, is a new approach that may produce outstanding outcomes in the realm of teaching and learning. The dexterity and effectiveness offered by the e-learning experience cannot be guaranteed in a traditional classroom, whereas one-to-one interaction, the essential element of learning, can only be found in a traditional classroom. In recent years, a spectacular expansion in the incorporation of technology in language teaching and learning has been observed in many universities of Saudi Arabia. Some universities recognize the importance of blending face-to-face with online instruction in language pedagogy; Qassim University is one of the many universities adopting the Blackboard Learning Management System (LMS). The university adopted this new mode of teaching/learning in 2015. Although the experience is immature, great pedagogical transformations are anticipated in the university through this new approach. This paper examines the role of blended language learning, with particular reference to the influence of the Blackboard LMS on the development of English language learning for EFL learners registered in the Bachelor of English Language program. This paper aims at exploring three main areas: (i) the present status of blended learning in the educational process in Saudi Arabia, especially at Qassim University, by providing a survey report on the number of training courses on the Blackboard LMS conducted for the male and female teachers at various colleges of Qassim University; (ii) a survey on teachers' perceptions about the utility, application and outcome of using the blended learning approach in teaching English language skills courses; (iii) the students' views on the efficiency of the blended learning approach in learning English language skills courses.
Besides, an analysis of students' limitations and challenges related to the experience of blended learning via Blackboard, and the suggestions and recommendations offered by the language learners, have also been considered. The study is empirical in nature. In order to gather data on the aforementioned areas, the survey questionnaire method has been used: to study students' perceptions, a 5-point Likert-scale questionnaire was distributed to 200 students of the English department registered in the Bachelor of English program (level 5 through level 8). Teachers' views were surveyed by interviewing 25 EFL teachers skilled in using the Blackboard LMS in their lectures. In order to ensure the validity and reliability of the questionnaire, the inter-rater approach and Cronbach's alpha analysis were used, respectively. Analysis of variance (ANOVA) was used to analyze the students' perception of the productivity of the blended approach in learning English language skills. The analysis of feedback by Saudi teachers and students about the usefulness, ingenuity, and productivity of blended learning via the Blackboard LMS highlights the need to encourage and expand the implementation of this new approach in the field of English language teaching in Saudi Arabia, in order to foster a congenial learning atmosphere. Furthermore, it is hoped that the propositions and practical suggestions offered by the study will be functional in other similar learning environments.
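Since the abstract names Cronbach's alpha as the questionnaire reliability check, a minimal sketch of the statistic may be useful; the Likert responses below are illustrative, not the study's data:

```python
from statistics import variance  # sample variance (ddof = 1)

def cronbach_alpha(items):
    """Cronbach's alpha for a list of respondent rows (k items each)."""
    k = len(items[0])
    cols = list(zip(*items))                           # per-item columns
    item_var = sum(variance(c) for c in cols)          # sum of item variances
    total_var = variance([sum(row) for row in items])  # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Illustrative 5-point Likert responses: 4 respondents x 3 items
scores = [(4, 5, 4), (3, 4, 3), (5, 5, 5), (2, 3, 2)]
alpha = cronbach_alpha(scores)
```

By convention, an alpha above roughly 0.7 is taken to indicate acceptable internal consistency of a scale.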

Keywords: blended learning, Blackboard learning management system, English as a foreign language (EFL) learners, EFL teachers

Procedia PDF Downloads 142
860 Sediment Transport Monitoring in the Port of Veracruz Expansion Project

Authors: Francisco Liaño-Carrera, José Isaac Ramírez-Macías, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga, Marcos Rangel-Avalos, Adriana Andrea Roldán-Ubando

Abstract:

The construction of most coastal infrastructure developments around the world is usually carried out considering wave height, current velocities and river discharges; however, little attention has been paid to surveying sediment transport during dredging, or to the modification of currents outside ports or marinas during and after construction. This study shows a complete survey during the construction of one of the largest ports of the Gulf of Mexico. An anchored Acoustic Doppler Current Profiler (ADCP), a towed ADCP and a combination of model outputs were used at the Veracruz port construction site in order to describe the hourly sediment transport and current modifications in and out of the new port. Owing to the stability of the system, the new port was constructed inside Vergara Bay, a low-wave-energy system with a tidal range of up to 0.40 m. The results show a two-current system pattern within the bay. The north side of the bay has an anticyclonic gyre, while the southern part of the bay shows a cyclonic gyre. Sediment transport trajectories were made every hour using the anchored ADCP, a numerical model and the weekly data obtained from the towed ADCP within the entire bay. The sediment transport trajectories were carefully tracked, since the bay is surrounded by coral reef structures which are sensitive to sedimentation rate and water turbidity. The survey shows that during dredging, and during the rock input used to build the breakwater, sediments were added locally (< 2500 m2) and local currents dispersed them in less than 4 h, while the river input located in the middle of the bay and the sewage treatment plant may add more than 10 times this amount during a rainy day or during the tourist season. Finally, the coastline obtained seasonally with a drone suggests that the southern part of the bay has not been modified by the construction of the new port located in the northern part of the bay, owing to the two-subsystem division of the bay.

Keywords: Acoustic Doppler Current Profiler, construction around coral reefs, dredging, port construction, sediment transport monitoring

Procedia PDF Downloads 212
859 Stress Concentration and Strength Prediction of Carbon/Epoxy Composites

Authors: Emre Ozaslan, Bulent Acar, Mehmet Ali Guler

Abstract:

Unidirectional composites are very popular structural materials used in the aerospace, marine, energy and automotive industries thanks to their superior material properties. However, the mechanical behavior of composite materials is more complicated than that of isotropic materials because of their anisotropic nature. The presence of a stress concentration in the structure, such as a hole, makes the problem further complicated. Therefore, an enormous number of tests is required to understand the mechanical behavior and strength of composites which contain stress concentrations. Accurate finite element analysis and analytical models make it possible to understand the mechanical behavior and predict the strength of composites without an enormous number of tests, which cost considerable time and money. In this study, unidirectional carbon/epoxy composite specimens with a central circular hole were investigated in terms of stress concentration factor and strength prediction. Composite specimens with different specimen width (W) to hole diameter (D) ratios were tested to investigate the effect of hole size on the stress concentration and strength. Also, specimens which had the same specimen width to hole diameter ratio, but varied sizes, were tested to investigate the size effect. Finite element analysis was performed to determine the stress concentration factor for all specimen configurations. For the quasi-isotropic laminate, it was found that the stress concentration factor increased by approximately 15% with the decrease of the W/D ratio from 6 to 3. The point stress criterion (PSC), the inherent flaw method and progressive failure analysis were compared in terms of predicting the strength of the specimens. All methods could predict the strength of the specimens with a maximum error of 8%. PSC was better than the other methods for high values of the W/D ratio, whereas the inherent flaw method was successful for low values of W/D. Also, it is seen that increasing the W/D ratio by 4 times raises the failure strength of the composite specimen by 62.4%.
For constant W/D ratio specimens, all the strength prediction methods were more successful for smaller specimens than larger ones. Increasing the specimen width and hole diameter together by 2 times reduces the specimen failure strength by 13.2%.
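The point stress criterion has a well-known closed form for an infinite plate with a circular hole (the Whitney-Nuismer expression); a minimal sketch follows, with an assumed characteristic distance d0, since the paper's fitted value is not given here:

```python
def psc_strength_ratio(radius, d0, kt_inf=3.0):
    """Whitney-Nuismer point stress criterion: notched/unnotched strength
    ratio for an infinite plate with a circular hole of the given radius.
    d0 is the characteristic distance; kt_inf = 3 for a quasi-isotropic
    laminate (the orthotropic correction term then vanishes)."""
    xi = radius / (radius + d0)
    return 2.0 / (2.0 + xi**2 + 3.0 * xi**4
                  - (kt_inf - 3.0) * (5.0 * xi**6 - 7.0 * xi**8))

# Hole-size effect: a larger hole retains less of the unnotched strength
small = psc_strength_ratio(radius=2.0, d0=1.0)   # d0 value is illustrative
large = psc_strength_ratio(radius=6.0, d0=1.0)
```

The monotonic drop of the ratio with hole radius reproduces the qualitative hole-size effect the specimens were tested for; for the finite-width specimens in the study, a width-correction factor would additionally be applied.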

Keywords: failure, strength, stress concentration, unidirectional composites

Procedia PDF Downloads 139
858 Potency of Some Dietary Acidifiers on Productive Performance and Controlling Salmonella enteritidis in Broilers

Authors: Mohamed M. Zaki, Maha M. Hady

Abstract:

Salmonella spp. have been categorized among the world's biggest threats to human health, and poultry products are the most commonly incriminated sources. In Egypt, it was found that S. enteritidis and S. typhimurium are the most prevalent serovars in poultry farms. It is recommended to eliminate salmonella from the living bird by combating salmonella contamination in feed in order to establish a healthy gut. Feed acidifiers are a group of feed additives containing low-molecular-weight organic acids and/or their salts, which act as performance promoters by lowering the pH in the gut, optimizing digestion and inhibiting bacterial growth. The inclusion of organic acids in pure form, although effective in feed, is difficult to handle in feed mills, as they are corrosive and produce greater losses during the pelleting process. The current study aimed to evaluate the impact of incorporating sodium diformate (SDF) and a commercial acidifier, CA (a mixture of butyric and propionic acids and their ammonium salts), at 0.4% dietary levels on broiler performance and the control of S. enteritidis infection. Two hundred and seventy unsexed Cobb chickens were allotted to one of three treatments (90/group): the control (no acidifier, C- & C+), the 0.4% SDF (SDF- & SDF+) and the 0.4% CA (CA- & CA+) dietary levels, for 35 days. Before the allocation of the groups, ten extra birds and a diet sample were bacteriologically examined to ensure the absence of contamination with salmonella. The birds were raised in deep-litter separated pens and had free access to feed and water at all times. The experimentally formulated diets were kept at 4°C. After 24 h of access to the different dietary treatments, all the birds in the positive groups (n=15/replicate) were inoculated intra-crop with 0.2 ml of a 24 h broth culture of S. enteritidis containing 1 × 10⁷ organisms, while the negative-treated groups were inoculated with the same amount of the negative broth; a second inoculation was done at 22 d of age.
Cloacal swabs were collected individually from all birds 2 h pre-inoculation to confirm the absence of salmonella, and then 1, 3, 5, 7 and 21 days post-inoculation to recover salmonella. Performance parameters (body weight gain and feed efficiency) were calculated. Mortalities were recorded, and re-isolation of the salmonella was carried out to ensure it was the inoculated strain. The results revealed that dietary acidification with sodium diformate significantly improved broiler performance and tended to produce heavier birds compared to the negative control and CA groups. Moreover, the dietary inclusion of both acidifiers at a level of 0.4% was able to eliminate mortalities completely at the relevant inoculation time. Regarding the shedding of S. enteritidis in the positive groups, the SDF treatment resulted in significant (p<0.05) cessation of shedding at 3 days post-inoculation, compared to 7 days post-inoculation for the CA group. In conclusion, sodium diformate at a 0.4% dietary level in broiler diets has a valuable effect not only on broiler performance but also in eliminating S. enteritidis from feed, the main source of salmonella contamination in poultry farms.

Keywords: acidifier, broilers, Salmonella spp., sodium diformate

Procedia PDF Downloads 268
857 Design of a Human-in-the-Loop Aircraft Taxiing Optimisation System Using Autonomous Tow Trucks

Authors: Stefano Zaninotto, Geoffrey Farrugia, Johan Debattista, Jason Gauci

Abstract:

The need to reduce fuel consumption and noise during taxi operations at airports, in a scenario of constantly increasing air traffic, has resulted in an effort by the aerospace industry to move towards electric taxiing. In fact, this is one of the problems currently being addressed by the SESAR JU, and two main solutions are being proposed. With the first solution, electric motors are installed in the main (or nose) landing gear of the aircraft. With the second solution, manned or unmanned electric tow trucks are used to tow aircraft from the gate to the runway (or vice versa). The presence of the tow trucks results in an increase in vehicle traffic inside the airport. Therefore, it is important to design the system in such a way that the workload of Air Traffic Control (ATC) is not increased and the system assists ATC in managing all ground operations. The aim of this work is to develop an electric taxiing system, based on the use of autonomous tow trucks, which optimizes aircraft ground operations while keeping ATC in the loop. This system will consist of two components: an optimization tool and a Graphical User Interface (GUI). The optimization tool will be responsible for determining the optimal path for arriving and departing aircraft; allocating a tow truck to each taxiing aircraft; detecting conflicts between aircraft and/or tow trucks; and proposing solutions to resolve any conflicts. There are two main optimization strategies proposed in the literature. With centralized optimization, a central authority coordinates and makes the decisions for all ground movements, in order to find a global optimum. With the second strategy, called decentralized optimization or a multi-agent system, the decision authority is distributed among several agents. These agents could be the aircraft, the tow trucks, and taxiway or runway intersections.
This approach finds local optima; however, it scales better with the number of ground movements and is more robust to external disturbances (such as taxi delays or unscheduled events). The strategy proposed in this work is a hybrid system combining aspects of these two approaches. The GUI will provide information on the movement and status of each aircraft and tow truck, and alert ATC about any impending conflicts. It will also enable ATC to give taxi clearances and to modify the routes proposed by the system. The complete system will be tested via computer simulation of various taxi scenarios at multiple airports, including Malta International Airport, a major international airport, and a fictitious airport. These tests will involve actual Air Traffic Controllers in order to evaluate the GUI and assess the impact of the system on ATC workload and situation awareness. It is expected that the proposed system will increase the efficiency of taxi operations while reducing their environmental impact. Furthermore, it is envisaged that the system will facilitate various controller tasks and improve ATC situation awareness.
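The route-determination step described above can be sketched as a shortest-path search over a taxiway graph; the graph, node names and taxi times below are hypothetical, and the conflict detection/resolution layer would sit on top of this:

```python
import heapq

def shortest_taxi_route(graph, start, goal):
    """Dijkstra over a taxiway graph; edge weights are taxi times in seconds.
    graph maps each node to a list of (neighbour, time) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # Reconstruct the route from goal back to start
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical taxiway graph: gate -> intersections -> runway holding point
taxiways = {
    "gate": [("A", 60), ("B", 90)],
    "A": [("B", 30), ("hold", 120)],
    "B": [("hold", 45)],
}
route, taxi_time = shortest_taxi_route(taxiways, "gate", "hold")
```

In a centralized planner this search would be run per aircraft and the resulting routes cross-checked for conflicts; in the multi-agent variant each agent would run a local equivalent.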

Keywords: air traffic control, electric taxiing, autonomous tow trucks, graphical user interface, ground operations, multi-agent, route optimization

Procedia PDF Downloads 110
856 Applying Integrated QFD-MCDM Approach to Strengthen Supply Chain Agility for Mitigating Sustainable Risks

Authors: Enes Caliskan, Hatice Camgoz Akdag

Abstract:

There is no doubt that humanity needs to recognize the sustainability problems in the world and take serious action regarding them. All members of the United Nations adopted the 2030 Agenda for Sustainable Development, the most comprehensive international study on sustainability, in 2015. The study is summarized in 17 Sustainable Development Goals (SDGs), covering everything related to sustainability, such as environment, society and governance. The use of Information and Communication Technology (ICT), such as the Internet, mobile phones, and satellites, is essential for tackling the main issues facing sustainable development. Hence, the contributions of three major ICT companies to the Sustainable Development Goals are assessed in this study. Quality Function Deployment (QFD) is utilized as the methodology for this study. Since QFD is an excellent instrument for comparing businesses on relevant subjects, a House of Quality must be established to complete the QFD application. In order to develop a House of Quality, the demanded qualities (voice of the customer) and quality characteristics (technical requirements) must first be determined. The UN SDGs are used as the demanded qualities. The quality characteristics are derived from the annual sustainability and corporate social responsibility reports of the ICT companies. The companies' efforts, as indicated by the QFD results, are concentrated on the use of recycled raw materials and recycling; reducing GHG emissions through energy saving and improved connectivity; decarbonizing the value chain; protecting the environment and water resources by collaborating with businesses that have completed CDP water assessments and paying attention to reducing water consumption; ethical business practices; and reducing inequality. The evaluations of the three businesses are found to be very similar when compared. The small differences between the companies usually relate to the region they serve.
Efforts made by the companies mostly concentrate on responsible consumption and production, life below water, climate action, and sustainable cities and community goals. These efforts include improving connectivity in needed areas for providing access to information, education and healthcare.
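The House of Quality scoring described above reduces to a weighted matrix product; the SDG weights and the 9-3-1 relationship strengths below are illustrative placeholders, not the study's actual values:

```python
# Hypothetical House of Quality fragment: rows = demanded qualities
# (three SDGs), columns = technical requirements taken from company reports.
sdg_weights = [5, 3, 4]              # assumed importance of each SDG
relationship = [[9, 3, 0],           # 9 = strong, 3 = moderate, 1 = weak
                [1, 9, 3],
                [0, 3, 9]]

# Absolute importance of each technical requirement (weighted column sum)
tech_scores = [sum(w * row[j] for w, row in zip(sdg_weights, relationship))
               for j in range(len(relationship[0]))]

# Rank the technical requirements from most to least important
ranking = sorted(range(len(tech_scores)), key=tech_scores.__getitem__,
                 reverse=True)
```

Comparing companies then amounts to comparing their ranked `tech_scores` vectors built from the same demanded-quality weights.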

Keywords: multi-criteria decision-making, sustainable supply chain risk, supply chain agility, quality function deployment, sustainable development goals

Procedia PDF Downloads 18
855 Improving the Digestibility of Agro-Industrial Co-Products by Treatment with Isolated Fungi in the Meknes-Morocco Region

Authors: Mohamed Benaddou, Mohammed Diouri

Abstract:

A country such as Morocco generates a high quantity of agricultural and food-industry residues. A large portion of these residues is disposed of by burning or landfilling. The valorization of this waste biomass as feed is an interesting alternative because it is considered among the best sources of cheap carbohydrates. However, its nutritional yield without any pre-treatment is very low because lignin protects cellulose, the carbohydrate used as a source of energy by ruminants. Fungal treatment is an environmentally friendly, easy and inexpensive method. This study investigated the treatment of wheat straw (WS), cedar sawdust (CS) and olive pomace (OP) with fungi selected according to their carbon source, in order to improve digestibility. Two fungi were selected in a culture medium in which cellulose was the only source of carbon: Cosmospora viridescens (C. vir) and Penicillium crustosum (P. crus); two were selected in a culture medium in which lignin was the only source of carbon: Fusarium oxysporum (F. oxy) and Fusarium sp. (F. sp.); and two in a culture medium where cellulose and lignin were both carbon sources: Fusarium solani (F. solani) and Penicillium chrysogenum (P. chryso). P. chryso degraded the most CS cellulose. It is very important to note that delignification by F. solani reached 70% after 12 weeks of treatment of wheat straw. Ligninase enzymatic activity was detected in F. solani, F. sp. and F. oxysporum, which made it possible to delignify the treated substrates. Delignification by C. vir was negligible in all three substrates after 12 weeks of treatment. P. crus and P. chryso degraded the lignin only slightly in WS (not exceeding 12% after 12 weeks of treatment), but in OP this delignification was modest, reaching 25% and 13% for P. chryso and P. crus, respectively. P. chryso allowed 30% degradation of lignin from 4 weeks of treatment.
The degradation of lignin reached its maximum within 8 weeks of treatment for most of the fungi, except F. solani, which continued degrading after this period. In vitro true digestibility variation (IVTD variation) differed very significantly between fungi, treatment durations, substrates and their interactions (P < 0.001). Indeed, all the fungi increased digestibility after 12 weeks of treatment, with differences in the degree of this increase. F. solani and F. oxy increased digestibility more than the others; this digestibility gain exceeded 50% in CS and OP but did not exceed 20% for WS after treatment with F. oxy. IVTD variation did not exceed 20% in cedar sawdust treated with P. chryso but reached 45% after 8 weeks of treatment in wheat straw.

Keywords: lignin, cellulose, digestibility, fungi, treatment, lignocellulosic biomass

Procedia PDF Downloads 192
854 Submicron Laser-Induced Dot, Ripple and Wrinkle Structures and Their Applications

Authors: P. Slepicka, N. Slepickova Kasalkova, I. Michaljanicova, O. Nedela, Z. Kolska, V. Svorcik

Abstract:

Polymers exposed to laser or plasma treatment, or modified with different wet methods which enable the introduction of nanoparticles or biologically active species such as amino acids, may find many applications as biocompatible or antibacterial materials or, on the contrary, can be applied to decrease the number of cells on the treated surface, which opens applications in single-cell units. For the experiments, two types of materials were chosen: a representative of non-biodegradable polymers, polyethersulphone (PES), and polyhydroxybutyrate (PHB) as a biodegradable material. Exposure of a solid substrate to a laser well below the ablation threshold can lead to the formation of various surface structures. The ripples have a period roughly comparable to the wavelength of the incident laser radiation, and their dimensions depend on many factors, such as the chemical composition of the polymer substrate, the laser wavelength and the angle of incidence. On the other hand, biopolymers may significantly change their surface roughness and thus influence cell compatibility. The focus was on the surface treatment of PES and PHB by a pulsed KrF excimer laser with a wavelength of 248 nm. The changes in physicochemical properties, surface morphology, surface chemistry and ablation of the exposed polymers were studied for both PES and PHB. Several analytical methods, involving atomic force microscopy, gravimetry, scanning electron microscopy and others, were used for the analysis of the treated surface. It was found that the combination of certain input parameters leads not only to the formation of an optimal narrow pattern but to the combination of a ripple and a wrinkle-like structure, which could be an optimal candidate for cell attachment. The interactions of different types of cells with the laser-exposed surface were studied. It was found that laser treatment is a major contributing factor to the wettability/contact angle change.
The combination of optimal laser energy and pulse number was used for the construction of a surface with an anti-cellular response. Due to the simple laser treatment, we were able to prepare a biopolymer surface with higher roughness and thus significantly influence the area of growth of different types of cells (U-2 OS cells).

Keywords: cell response, excimer laser, polymer treatment, periodic pattern, surface morphology

Procedia PDF Downloads 220
853 Experiences of Youth in Learning About Healthy Intimate Relationships: An Institutional Ethnography

Authors: Anum Rafiq

Abstract:

Adolescence is a vulnerable period for youth across the world. It is a period of new learning, with opportunities to understand and develop perspectives on health and well-being. With youth beginning to engage in intimate relationships at an earlier age in the 21st century, concentrating on the learning opportunity they have in school is paramount. The nature of what has been deemed important to teach in schools has changed throughout history, and the focus has shifted from home/family skills to teaching youth how to be competitive in the job market. Amidst this emphasis, opportunities exist for youth to learn about building healthy intimate relationships, one of the foundational elements of most people's lives. Using an institutional ethnography (IE), the lived experiences of youth, in terms of how they understand intimate relationships and how their learning experience is organized through the high school Health and Physical Education (H&PE) course, are explored. An empirical inquiry into how the actual work of teachers and youth is socially organized by a biomedical, employment-related, and efficiency-based discourse is provided. Through thirty-two qualitative interviews with teachers and youth, ruling relations such as institutional accountability circuits, performance reports, and timetabling are found to control the experience of teachers and youth. One facet of the institutional accountability circuit is the social organization of teaching and learning about healthy intimate relationships through a biomedical discourse. In addition, a hyper-focus on performance and evaluation is found to be paramount in situating healthy intimacy discussions as inferior to neoliberally charged productivity measures such as employment skills.
Lastly, due to the nature of institutional policies such as regulatory guidelines, teachers are largely influenced to avoid diving into discussions deemed risky or taboo by society, such as healthy intimacy in adolescence. The findings show how texts such as the H&PE curriculum, the Ontario College of Teachers (OCT) guidelines, Ministry of Education performance reports, and the timetable organize the day-to-day activities of teachers and students and reproduce different disjunctures for youth. These disjunctures include some of their experiences being subordinated, difficulty relating to the curriculum, and an experience of healthy living discussions being skimmed over across sites. The findings detail that the experience of youth in learning about healthy intimate relationships is not akin to the espoused vision outlined in policy documents such as the H&PE (2015) curriculum policy. These findings have implications for policymakers, activists, and school administrations alike, and call for an investigation into who holds power when it comes to youth's learning needs, as a pivotal period in which youth can be equipped with life-changing knowledge is largely underutilized. A restructuring of existing institutional practices is required, allowing for the social and institutional flexibility needed to broach the topic of healthy intimacy in a comprehensive manner.

Keywords: health policy, intimate relationships, youth, education, ruling relations, sexual education, violence prevention

Procedia PDF Downloads 51
852 Fucoidan: A Potent Seaweed-Derived Polysaccharide with Immunomodulatory and Anti-inflammatory Properties

Authors: Tauseef Ahmad, Muhammad Ishaq, Mathew Eapen, Ahyoung Park, Sam Karpiniec, Vanni Caruso, Rajaraman Eri

Abstract:

Fucoidans are complex, fucose-rich sulfated polymers found in brown seaweeds. Fucoidans are popular around the world, particularly in the nutraceutical and pharmaceutical industries, due to their promising medicinal properties. Fucoidans have been shown to have a variety of biological activities, including anti-inflammatory effects. They are known to inhibit inflammatory processes through a variety of mechanisms, including enzyme inhibition and selectin blockade. Inflammation is part of the complicated biological response of living systems to damaging stimuli, and it plays a role in the pathogenesis of a variety of disorders, including arthritis, inflammatory bowel disease, cancer, and allergies. In the current investigation, various fucoidan extracts from Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica were assessed for inhibition of pro-inflammatory cytokine production (TNF-α, IL-1β, and IL-6) in an LPS-induced human macrophage cell line (THP-1) and human peripheral blood mononuclear cells (PBMCs). Furthermore, we also sought to catalogue these extracts based on their anti-inflammatory effects in the current in vitro cell model. Materials and Methods: To assess the cytotoxicity of the fucoidan extracts, the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) cell viability assay was performed. Furthermore, a dose-response study of the fucoidan extracts was performed in LPS-induced THP-1 cells and PBMCs after pre-treatment for 24 hours, and the levels of TNF-α, IL-1β, and IL-6 cytokines were measured using the Enzyme-Linked Immunosorbent Assay (ELISA). Results: The MTT cell viability assay demonstrated that the fucoidan extracts exhibited no evidence of cytotoxicity in THP-1 cells or PBMCs after 48 hours of incubation. The results of the sandwich ELISA revealed that all fucoidan extracts suppressed cytokine production in LPS-stimulated PBMCs and human THP-1 cells in a dose-dependent manner.
Notably, at lower concentrations, the lower-molecular-weight fucoidan (5-30 kDa) extract from Macrocystis pyrifera was a highly efficient inhibitor of pro-inflammatory cytokines. Fucoidan extracts from all species, including Undaria pinnatifida, Fucus vesiculosus, Macrocystis pyrifera, Ascophyllum nodosum, and Laminaria japonica, exhibited significant anti-inflammatory effects. These findings on several fucoidan extracts provide insight into strategies for improving their efficacy against inflammation-related diseases. Conclusion: In the current research, we have successfully catalogued several fucoidan extracts based on their efficiency, in LPS-induced macrophages and PBMCs, in downregulating the key pro-inflammatory cytokines (TNF-α, IL-1β and IL-6), which are prospective targets in human inflammatory illnesses. Further research would provide more information on the mechanism of action, allowing fucoidan to be tested for therapeutic purposes as an anti-inflammatory medication.

Keywords: fucoidan, PBMCs, THP-1, TNF-α, IL-1β, IL-6, inflammation

Procedia PDF Downloads 43
851 Succinct Perspective on the Implications of Intellectual Property Rights and 3rd Generation Partnership Project in the Rapidly Evolving Telecommunication Industry

Authors: Arnesh Vijay

Abstract:

Ever since its introduction in the late 1980s, the mobile industry has been rapidly evolving with each passing year. The development witnessed is not just in its ability to support diverse applications, but also in its extension into diverse technological means to access and offer various services to users. Amongst the various technologies present, radio systems have clearly emerged as a strong contender, due to their attributes of accessibility, reachability, interactivity, and cost efficiency. These advancements have no doubt brought unprecedented ease, utility and sophistication to cell phone users, but they have also caused uncertainty due to the interdependence of various systems, making it extremely complicated to map concepts exactly onto 3GPP (3rd Generation Partnership Project) standards. Although the close interrelation and interdependence of intellectual property rights and mobile standard specifications have been widely acknowledged by the technical and legal community, there is, however, a requirement for a clear distinction between the scope and the future-proofing of inventions intended to influence standards and their marketplace adoptability. For this, collaborative work is required between intellectual property professionals, researchers, standardization specialists and country-specific legal experts. With the evolution to next-generation mobile technology, i.e., to 5G systems, the need for further work in this field is felt now more than ever before. Along these lines, this poster will briefly describe the importance of intellectual property rights in the European market. More specifically, it will analyse the role played by intellectual property in various standardization institutes, such as 3GPP (3rd Generation Partnership Project) and the ITU (International Telecommunication Union). 
The main intention is to ensure that the scope and purpose are well defined, and that the concerned parties on all four sides are well informed of the significance of good proposals: not only those that bring economic revenue to a company, but those capable of improving the technology and offering better services to mankind. The poster comprises several sections. The first segment begins with a background on the rapidly evolving mobile technology, with a brief insight into the industrial impact of standards and their relation to intellectual property rights. Next, section two succinctly outlines the interplay between patents and standards, explicitly discussing the ever-changing and rapidly evolving relationship between the two sectors. The remaining sections then examine the ITU and the role it plays in international standards development, touching upon the various standardization processes and the common patent policies and related guidelines. Finally, the poster proposes ways to improve collaboration amongst the various sectors for a more evolved and sophisticated next-generation mobile telecommunication system. The sole purpose here is to discuss methods to reduce the gap and enhance the exchange of information between the two sectors in order to offer advanced technologies and services to mankind.

Keywords: mobile technology, mobile standards, intellectual property rights, 3GPP

Procedia PDF Downloads 118
850 Laboratory Assessment of Electrical Vertical Drains in Composite Soils Using Kaolin and Bentonite Clays

Authors: Maher Z. Mohammed, Barry G. Clarke

Abstract:

As an alternative to stone columns in fine-grained soils, it is possible to create stiffened columns of soil using electroosmosis (electroosmotic piles). The purpose of this research programme is to establish the effectiveness and efficiency of the process in different soils. The aim of this study is to assess the capability of electroosmotic treatment in a range of composite soils. The combined electroosmotic and preloading equipment developed by Nizar and Clarke (2013) was used, with an octagonal array of anodes surrounding a single cathode in a nominal 250 mm diameter, 300 mm deep cylinder of soil and an 80 mm anode-to-cathode distance. Copper coiled springs were used as electrodes to allow the soil to consolidate due to either an external vertically applied load or electroosmosis. The equipment was modified to allow the temperature to be monitored during the test. Electroosmotic tests were performed on China Clay Grade E kaolin and calcium bentonite (Bentonex CB) mixed with sand fraction C (BS 1881 Part 131) at different ratios by weight (0, 23, 33, 50 and 67%), subjected to applied voltages of 5, 10, 15 and 20 V. The soil slurry was prepared by mixing the dry soil with water to 1.5 times the liquid limit of the soil mixture. The mineralogical and geotechnical properties of the tested soils were measured before the electroosmotic treatment began. In the electroosmosis cell tests, the settlement, expelled water, variation of electrical current and applied voltage, and the generated heat were monitored over the duration of the 24 osmotic tests. Water content was measured at the end of each test. The electroosmotic tests were divided into three phases. In Phase 1, 15 kPa was applied to simulate a working platform and produce a uniform soil which had been deposited as a slurry. 50 kPa was used in Phase 3 to simulate a surcharge load. 
The electroosmotic treatment was only performed during Phase 2, where a constant voltage was applied through the electrodes in addition to the 15 kPa pressure. This phase was stopped when no further water was expelled from the cell, indicating that the electroosmotic process had ceased, due either to degradation of the anode or to the flow driven by the hydraulic gradient exactly balancing the electroosmotic flow, resulting in no net flow. Control tests for each soil mixture were carried out to assess the behaviour of soil samples subjected only to an increase in vertical pressure: 15 kPa in Phase 1 and 50 kPa in Phase 3. Analysis of the experimental results from this study showed a significant dewatering effect on the soil slurries. The water discharged by the electroosmotic treatment process decreased as the sand content increased. Soil temperature increased significantly when electrical power was applied and dropped when the applied DC power was turned off or when the electrode degraded. The highest increase in temperature was found in the pure clays at the higher applied voltages, after about 8 hours of the electroosmosis test.
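
The expelled-water volumes monitored in Phase 2 can be roughly anticipated with the standard one-dimensional electroosmotic flow relation q = k_e · i_e · A. This is a generic sketch, not the authors' analysis; the coefficient of electroosmotic permeability k_e used below is an assumed textbook-order value for kaolin, and the geometry is taken from the cell dimensions quoted above.

```python
# Hypothetical estimate of electroosmotic flow for the cell described above
# (250 mm diameter, 80 mm anode-to-cathode spacing). k_e is an assumed
# typical value for kaolin; the abstract reports no measured coefficient.
import math

def electroosmotic_flow(voltage, spacing_m, area_m2, k_e=5e-9):
    """Flow rate q = k_e * i_e * A in m^3/s, with i_e = V / L in V/m."""
    i_e = voltage / spacing_m          # electrical potential gradient, V/m
    return k_e * i_e * area_m2

area = math.pi * (0.25 / 2) ** 2       # plan area of the 250 mm cell, m^2
for v in (5, 10, 15, 20):              # the applied voltages of the test series
    q = electroosmotic_flow(v, 0.08, area)
    print(f"{v:>2} V -> {q * 1e6 * 3600:.2f} mL/h")
```

The linear dependence on voltage gradient is consistent with the observation that dewatering scales with the applied voltage, while increasing sand content lowers k_e and hence the discharged volume.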

Keywords: electrokinetic treatment, electrical conductivity, electroosmotic consolidation, electroosmosis permeability ratio

Procedia PDF Downloads 146
849 Diagnostic Contribution of the MMSE-2:EV in the Detection and Monitoring of the Cognitive Impairment: Case Studies

Authors: Cornelia-Eugenia Munteanu

Abstract:

The goal of this paper is to present the diagnostic contribution that the screening instrument Mini-Mental State Examination-2: Expanded Version (MMSE-2:EV) brings to detecting cognitive impairment and to monitoring the progress of degenerative disorders. The diagnostic significance is underlined by the interpretation of the MMSE-2:EV scores resulting from application of the test to patients with mild and major neurocognitive disorders. The original MMSE is one of the most widely used screening tools for detecting cognitive impairment, in clinical settings but also in the field of neurocognitive research. Practitioners and researchers are now turning their attention to the MMSE-2. To enhance its clinical utility, the new instrument was enriched and reorganized into three versions (MMSE-2:BV, MMSE-2:SV and MMSE-2:EV), each with two forms: blue and red. The MMSE-2 has been adapted and used successfully in Romania since 2013. The cases were selected from current practice in order to cover a broad and significant range of neurocognitive pathology: mild cognitive impairment, Alzheimer’s disease, vascular dementia, mixed dementia, Parkinson’s disease, and conversion of mild cognitive impairment into Alzheimer’s disease. The MMSE-2:EV version was used: it was applied one month after the initial assessment, three months after the first re-evaluation and then every six months, alternating the blue and red forms. Adjusted for age and educational level, the raw scores were converted into T scores and then, with the mean and the standard deviation, the z scores were calculated. The differences in raw scores between the evaluations were analyzed for statistical significance, in order to establish the progression of the disease over time. The results indicated that the psycho-diagnostic approach to evaluating cognitive impairment with the MMSE-2:EV is safe and that the application interval is optimal. 
The alternation of the forms prevents the learning phenomenon. Diagnostic accuracy and efficient therapeutic conduct derive from the use of the national test norms. In clinical settings with a large flux of patients, the application of the MMSE-2:EV is a safe and fast psycho-diagnostic solution. Clinicians can make objective decisions, and for the patients the test does not take too much time and energy, does not bother them and does not force them to travel frequently.
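
The score conversion described above (raw score to a normed T score, then to a z score) follows the standard linear relations T = 50 + 10z and z = (raw − norm mean) / norm SD. A minimal sketch, with hypothetical normative values (the Romanian age/education norms themselves are not reproduced here):

```python
# Illustrative sketch, not the authors' scoring software: converting an
# MMSE-2:EV raw score to z and T scores using the normative mean and SD
# for the patient's age/education group. The norm values are invented.
def raw_to_t(raw, norm_mean, norm_sd):
    """Return (T, z): z = (raw - mean) / SD, T = 50 + 10 * z."""
    z = (raw - norm_mean) / norm_sd
    return 50 + 10 * z, z

# Hypothetical patient: raw score 21 against a group norm of 27.0 +/- 2.5.
t, z = raw_to_t(raw=21, norm_mean=27.0, norm_sd=2.5)
print(f"T = {t:.1f}, z = {z:.2f}")
```

Tracking T (or z) rather than the raw score is what makes scores comparable across re-evaluations when age bands or alternate forms change.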

Keywords: MMSE-2, dementia, cognitive impairment, neuropsychology

Procedia PDF Downloads 494
848 Numerical Simulation of Encased Composite Column Bases Subjected to Cyclic Loading

Authors: Eman Ismail, Adnan Masri

Abstract:

Energy dissipation in ductile moment frames occurs mainly through plastic hinge rotations in the frame members (beams and columns). Generally, plastic hinge locations are pre-determined and limited to the beam ends, where columns are designed to remain elastic in order to avoid premature instability (i.e., story mechanisms), with the exception of the column bases, where a base is 'fixed' in order to provide higher stiffness and stability and to form a plastic hinge. Plastic hinging at steel column bases in ductile moment frames using conventional base connection details is accompanied by several complications (thicker and heavily stiffened connections, larger embedment depths, a thicker foundation to accommodate anchor rod embedment, etc.). An encased composite base connection is proposed in which a segment of the column, from the base up to a certain point along its height, is encased in reinforced concrete, with headed shear studs welded to the column flanges to connect the column to the concrete encasement. When the connection is flexurally loaded, stresses are transferred to the reinforced concrete encasement through the headed shear studs, and thereby to the foundation by reinforced concrete mechanics, while axial column forces are transferred through the base-plate assembly. Horizontal base reactions are expected to be transferred by direct bearing of the outer and inner faces of the flanges; however, investigation of this mechanism is not within the scope of this research. The inelastic and cyclic behavior of the connection is investigated by subjecting it to reversed cyclic loading, and rotational ductility is observed for the possible yielding mechanisms: flexural yielding in the beam-column, shear yielding in the headed studs, and flexural yielding of the reinforced concrete encasement. 
The findings of this research show that the connection is capable of achieving satisfactory levels of ductility in certain conditions given proper detailing and proportioning of elements.
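
The stud shear transfer described above can be sized with the usual headed-stud strength check, shown here in a simplified AISC 360-style form (group and position factors omitted). All numbers are illustrative assumptions, not values from the study.

```python
# Illustrative check of one headed shear stud's nominal strength:
#   Qn = 0.5 * Asa * sqrt(f'c * Ec)  <=  Asa * Fu
# (simplified AISC 360-type form; SI units, mm^2 and MPa, give N).
# Stud diameter, concrete strength and stud tensile strength are assumed.
import math

def stud_nominal_strength(d_mm, fc_mpa, fu_mpa):
    asa = math.pi * d_mm ** 2 / 4.0             # stud shank area, mm^2
    ec = 4700.0 * math.sqrt(fc_mpa)             # ACI-type concrete modulus, MPa
    qn_concrete = 0.5 * asa * math.sqrt(fc_mpa * ec)
    qn_steel = asa * fu_mpa                     # upper bound from stud steel
    return min(qn_concrete, qn_steel) / 1000.0  # kN

print(f"19 mm stud, f'c = 30 MPa: Qn ~ {stud_nominal_strength(19, 30, 450):.0f} kN")
```

Summing Qn over the studs on each flange, times the lever arm to the encasement reinforcement, gives a first estimate of the flexural capacity available before stud shear yielding governs the mechanism.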

Keywords: seismic design, plastic mechanisms, steel structure, moment frame, composite construction

Procedia PDF Downloads 110
847 Development of a Coupled Thermal-Mechanical-Biological Model to Simulate Impacts of Temperature on Waste Stabilization at a Landfill in Quebec, Canada

Authors: Simran Kaur, Paul J. Van Geel

Abstract:

A coupled Thermal-Mechanical-Biological (TMB) model was developed to analyze the impact of temperature on waste stabilization at a Municipal Solid Waste (MSW) landfill in Quebec, Canada, using COMSOL Multiphysics, a finite element-based software. For waste placed in landfills in northern climates during winter months, it can take months or even years before the waste approaches ideal temperatures for biodegradation to occur. The proposed model therefore links biodegradation-induced strain in MSW to waste temperatures and the corresponding heat generation rates resulting from anaerobic degradation. This provides a link between the thermal-biological and mechanical behavior of MSW. The thermal properties of MSW are further linked to density, which is tracked and updated in the mechanical component of the model, providing a mechanical-thermal link. The settlement of MSW is modelled based on the concept of viscoelasticity. The specific viscoelastic model used is a single Kelvin-Voigt body, in which the finite element response is controlled by the elastic material parameters: Young's modulus and Poisson's ratio. The numerical model was validated with 10 years of temperature and settlement data collected from a landfill in Ste. Sophie, Quebec. The coupled TMB modelling framework, which simulates waste lifts as they are placed progressively in the landfill, allows several thermal and mechanical parameters to be optimized throughout the depth of the waste profile and helps in better understanding the temperature dependence of MSW stabilization. The model is able to illustrate how waste placed in the winter months can delay biodegradation-induced settlement and the generation of landfill gas. A delay in waste stabilization will impact the utilization of the approved airspace prior to the placement of a final cover and will affect post-closure maintenance. 
The model provides a valuable tool to assess different waste placement strategies in order to increase airspace utilization within landfills operating under different climates, in addition to understanding conditions for increased gas generation for recovery as a green and renewable energy source.
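
The single Kelvin-Voigt body mentioned above has a closed-form creep response under constant stress, eps(t) = (sigma/E)(1 - exp(-E t / eta)). A minimal sketch with illustrative parameters (E and eta below are placeholders, not the calibrated Ste. Sophie values):

```python
# Sketch of Kelvin-Voigt creep under a constant surcharge stress:
#   eps(t) = (sigma / E) * (1 - exp(-E * t / eta))
# Stiffness E and viscosity eta are assumed illustrative values.
import math

def kelvin_voigt_strain(sigma, E, eta, t):
    return (sigma / E) * (1.0 - math.exp(-E * t / eta))

sigma = 50e3         # Pa, e.g. a 50 kPa waste-lift surcharge
E, eta = 1e6, 1e13   # Pa and Pa*s (assumed, not from the paper)
for days in (0, 30, 365, 3650):
    t = days * 86400.0
    print(f"{days:>5} d: strain = {kelvin_voigt_strain(sigma, E, eta, t):.4f}")
```

In the coupled model the biodegradation-induced strain is added on top of this mechanical creep, with its rate tied to the temperature field; cold winter waste simply moves along this curve (and degrades) more slowly.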

Keywords: coupled model, finite element modeling, landfill, municipal solid waste, waste stabilization

Procedia PDF Downloads 114
846 Flexible Integration of Airbag Weakening Lines in Interior Components: Airbag Weakening with Jenoptik Laser Technology

Authors: Markus Remm, Sebastian Dienert

Abstract:

Vehicle interiors are changing not only in terms of design and functionality but also due to new driving situations in which, for example, autonomous operating modes are possible. Flexible seating positions are changing the requirements for the behavior and location of passive safety systems in the interior of a vehicle. With fully autonomous driving, the driver can, for example, leave the position behind the steering wheel and take a seat facing backward. Since autonomous and non-autonomous vehicles will share the same road network for the foreseeable future, accidents cannot be avoided, which makes the use of passive safety systems indispensable. JENOPTIK-VOTAN® A technology enables the trend towards flexible, predetermined airbag weakening lines. With the help of laser beams, the predetermined weakening lines are introduced from the back side of the components so that they are completely invisible. This machining process is sensor-controlled and guarantees that a small residual wall thickness remains, for the best quality and reliability of the airbag weakening lines. Due to the wide processing range of the laser, almost all materials can be processed. A CO₂ laser is used for many plastics, natural fiber materials, foams, foils and material composites. A femtosecond laser is used for natural materials and textiles that are very heat-sensitive; this laser type has extremely short laser pulses with very high energy densities. Supported by the high-precision and fast movement of the laser beam by a laser scanner system, so-called cold ablation is enabled, creating the predetermined weakening lines layer by layer until the desired residual wall thickness remains. In that way, for example, genuine leather can be processed in a material-friendly and process-reliable manner without design implications for the component's A-side. 
Passive safety in the vehicle is increased through the interaction of modern airbag technology and high-precision laser airbag weakening. The JENOPTIK-VOTAN® A product family has represented this for more than 25 years and is pointing the way to the future with new and innovative technologies.

Keywords: design freedom, interior material processing, laser technology, passive safety

Procedia PDF Downloads 102
845 Organic Geochemical Evaluation of the Ecca Group Shale: Implications for Hydrocarbon Potential

Authors: Temitope L. Baiyegunhi, Kuiwu Liu, Oswald Gwavava, Christopher Baiyegunhi

Abstract:

Shale gas has recently become the exploration focus for future energy resources in South Africa. Specifically, the black shales of the lower Ecca Group in the study area are considered to be among the most prospective targets for shale gas exploration. Evaluation of this potential resource has been restricted by the lack of exploration and the scarcity of existing drill core data; thus, only limited geochemical data exist for these formations. In this study, outcrop and core samples of the Ecca Group were analysed to assess their total organic carbon (TOC), organic matter type, thermal maturity and hydrocarbon generation potential (SP). The results show that these rocks have TOC ranging from 0.11 to 7.35 wt.%. The SP values vary from 0.09 to 0.53 mg HC/g, suggesting poor hydrocarbon generative potential. The plot of S1 versus TOC shows that the source rocks are characterized by autochthonous hydrocarbons. S2/S3 values range between 0.40 and 7.5, indicating Type II/III, III, and IV kerogen. With the exception of one sample from the Collingham Formation, which has an HI value of 53 mg HC/g TOC, all other samples have HI values of less than 50 mg HC/g TOC, suggesting Type IV kerogen, which is mostly derived from reworked organic matter (mainly dead carbon) with little or no potential for hydrocarbon generation. Tmax values range from 318 to 601 °C, indicating immature to over-mature organic matter. The vitrinite reflectance values range from 2.22 to 3.93%, indicating over-maturity of the kerogen. Binary plots of HI against OI and of HI versus Tmax show that the shales contain Type II and mixed Type II-III kerogen, which is capable of generating both natural gas and minor oil at a suitable burial depth. Based on the geochemical data, it can be inferred that the source rocks range from immature to over-mature, varying between localities, and have the potential to produce wet to dry gas at the present stage. 
Generally, the Whitehill Formation of the Ecca Group is comparable to the Marcellus and Barnett Shales. This further supports the assumption that the Whitehill Formation has a high probability of being a profitable shale gas play, but only if explored in dolerite-free areas away from the Cape Fold Belt.
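
The HI, OI and SP figures quoted above come from standard Rock-Eval pyrolysis index calculations. A minimal sketch of those relations, with invented sample values rather than the paper's data:

```python
# Standard Rock-Eval index calculations (illustrative inputs only):
#   HI = 100 * S2 / TOC   (mg HC / g TOC)
#   OI = 100 * S3 / TOC   (mg CO2 / g TOC)
#   SP (genetic potential) = S1 + S2   (mg HC / g rock)
def rock_eval_indices(toc, s1, s2, s3):
    return {
        "HI": 100.0 * s2 / toc,
        "OI": 100.0 * s3 / toc,
        "SP": s1 + s2,
    }

# Hypothetical shale sample, not from the Ecca data set:
sample = rock_eval_indices(toc=1.8, s1=0.05, s2=0.30, s3=1.2)
print(sample)   # HI well below 50 mg HC/g TOC -> consistent with Type IV kerogen
```

An HI below roughly 50 mg HC/g TOC, as for most samples in the study, points to inert (Type IV) kerogen regardless of how rich the TOC itself is.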

Keywords: source rock, organic matter type, thermal maturity, hydrocarbon generation potential, Ecca Group

Procedia PDF Downloads 122
844 Probabilistic Life Cycle Assessment of the Nano Membrane Toilet

Authors: A. Anastasopoulou, A. Kolios, T. Somorin, A. Sowale, Y. Jiang, B. Fidalgo, A. Parker, L. Williams, M. Collins, E. J. McAdam, S. Tyrrel

Abstract:

Developing countries are nowadays confronted with great challenges related to domestic sanitation services in view of imminent water scarcity. Contemporary sanitation technologies established in these countries are likely to pose health risks unless waste management standards are followed properly. This paper offers a route to sustainable sanitation through the development of an innovative toilet system, called the Nano Membrane Toilet (NMT), which has been developed by Cranfield University and sponsored by the Bill & Melinda Gates Foundation. This technology converts human faeces into energy through gasification and provides treated wastewater from urine through membrane filtration. In order to evaluate the environmental profile of the NMT system, a deterministic life cycle assessment (LCA) was conducted in the SimaPro software employing the Ecoinvent v3.3 database. This study determined the factors contributing most to the environmental footprint of the NMT system. However, as the sensitivity analysis identified certain operating parameters as critical to the robustness of the LCA results, adopting a stochastic approach to the Life Cycle Inventory (LCI) will comprehensively capture the input data uncertainty and enhance the credibility of the LCA outcome. For that purpose, Monte Carlo simulations, in combination with an artificial neural network (ANN) model, were conducted for the input parameters of raw material, produced electricity, NOx emissions, amount of ash and transportation of fertilizer. The analysis provided the distributions and confidence intervals of the selected impact categories, and, in turn, more credible conclusions are drawn on the respective LCIA (Life Cycle Impact Assessment) profile of the NMT system. 
Last but not least, this study also yields essential insights into the methodological framework that can be adopted in the environmental impact assessment of other complex engineering systems subject to a high level of input data uncertainty.
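
The stochastic step described above (sample the uncertain inventory inputs, push each sample through a fast surrogate of the LCA model, read off confidence intervals) can be sketched conceptually as follows. The linear "surrogate" stands in for the trained ANN, and all distributions and coefficients are invented for illustration.

```python
# Conceptual Monte Carlo sketch of the stochastic LCI step: a stand-in
# linear surrogate replaces the trained ANN, and the input distributions
# (raw material, produced electricity, NOx) are assumed, not the paper's.
import random
import statistics

random.seed(42)

def surrogate_impact(raw_material, electricity, nox):
    # Stand-in for the ANN surrogate of the LCA model; arbitrary units.
    return 2.0 * raw_material - 1.5 * electricity + 40.0 * nox

samples = []
for _ in range(10_000):
    rm = random.gauss(1.0, 0.10)      # kg raw material per use (assumed)
    el = random.gauss(0.5, 0.05)      # kWh electricity produced per use (assumed)
    nox = random.gauss(0.02, 0.005)   # kg NOx per use (assumed)
    samples.append(surrogate_impact(rm, el, nox))

samples.sort()
lo, hi = samples[249], samples[9749]  # ~2.5th and 97.5th percentiles
print(f"mean = {statistics.mean(samples):.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

The surrogate is what makes the 10,000-run Monte Carlo affordable: each sample costs a function evaluation instead of a full SimaPro calculation.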

Keywords: sanitation systems, nano-membrane toilet, LCA, stochastic uncertainty analysis, Monte Carlo simulations, artificial neural network

Procedia PDF Downloads 208