Search results for: Max-Min Ant System
10270 An Ergonomic Evaluation of Three Load Carriage Systems for Reducing Muscle Activity of Trunk and Lower Extremities during Giant Puppet Performing Tasks
Authors: Cathy SW. Chow, Kristina Shin, Faming Wang, B. C. L. So
Abstract:
During some dynamic giant puppet performances, an ergonomically designed load carriage system is necessary for puppeteers to carry a giant puppet body's heavy load with minimum muscle stress. A load carrier (i.e., prototype) was designed with two small wheels at the foot and a hybrid spring device at the knee to assist the sliding and knee-bending movements, respectively. The purpose of this study was to evaluate the effect of three load carriers: two commercially available load mounting systems, Tepex and SuitX, and the prototype. Ten male participants were recruited for the experiment. Surface electromyography (sEMG) was used to record the participants' muscle activity during forward moving and bouncing tasks, with and without a load of 11.1 kg positioned 60 cm above the shoulder. Five bilateral muscles were selected for data collection: the lumbar erector spinae (LES), rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and gastrocnemius (GM). During the forward moving task, the sEMG data showed that the Tepex harness consistently exhibited the lowest muscle activities; relative to it, the prototype and SuitX were significantly higher on the left LES by 68.99% and 64.99%, right LES 26.57% and 82.45%, left RF 87.71% and 47.61%, right RF 143.57% and 24.28%, left BF 80.21% and 22.23%, right BF 96.02% and 21.83%, right TA 6.32% and 4.47%, and left GM 5.89% and 12.35%, respectively. These results indicate that mobility was highly restricted by the tested exoskeleton devices.
On the other hand, the sEMG data from the bouncing task showed that the prototype consistently exhibited the smallest muscle activities; relative to it, the Tepex harness and SuitX were significantly higher on the left LES by 6.65% and 104.93%, right LES 23.56% and 92.19%, left BF 33.21% and 93.26%, right BF 24.70% and 81.16%, left TA 46.51% and 191.02%, right TA 12.75% and 125.76%, left GM 31.54% and 68.36%, and right GM 95.95% and 96.43%, respectively.
Keywords: exoskeleton, giant puppet performers, load carriage system, surface electromyography
Procedia PDF Downloads 107
10269 Predicting Loss of Containment in Surface Pipeline Using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with loss of containment: oil and gas release from leakage or spillage from primary containment can result in pool fire, jet fire, and even explosion when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts operations to take action before a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline that is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, which is then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy.
While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets to justify the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
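The mass-balance principle that methods like RTTM refine with closely spaced PVT measurements can be sketched in a few lines; the tolerance and flow readings below are illustrative placeholders, not field data:

```python
def leak_detected(q_in: float, q_out: float, tol: float = 0.02) -> bool:
    """Flag a leak when the relative imbalance between inlet and outlet
    volumetric flow exceeds the sensor tolerance (illustrative threshold)."""
    return abs(q_in - q_out) / q_in > tol

# A 0.5% imbalance stays within tolerance; a 4% imbalance is flagged.
print(leak_detected(100.0, 99.5))  # False
print(leak_detected(100.0, 96.0))  # True
```

In practice, a transient model replaces this static check so that normal pressure and flow fluctuations are not misread as leaks.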
Procedia PDF Downloads 191
10268 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network
Authors: Abdulaziz Alsadhan, Naveed Khan
Abstract:
In recent years, intrusions on computer networks have become the major security threat. Hence, it is important to impede such intrusions. Impeding them relies entirely on their detection, which is the primary concern of any security tool like an Intrusion Detection System (IDS). Therefore, it is imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques are limited by their use of the raw data set for classification: the classifier may get confused by redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform raw features into a principal feature space and select features based on their sensitivity, with eigenvalues used to determine sensitivity. To further refine the selected features, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification, a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD'99) cup dataset was considered as a benchmark for evaluating security detection mechanisms.
The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize the detection rates.
Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)
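The reduce-then-classify idea described above can be sketched as follows, assuming scikit-learn is available: PCA projects raw records into a principal feature space, and an SVM classifies the reduced records. Synthetic data stands in for KDD'99, and the LDA/LBP/PSO stages of the abstract are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for network-connection records (not KDD'99 itself).
X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Keep the 8 components with the largest eigenvalues (highest "sensitivity"),
# then classify in the reduced space.
model = make_pipeline(StandardScaler(), PCA(n_components=8), SVC())
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```

Chaining the transform and classifier in one pipeline ensures the PCA projection fitted on training data is reused unchanged on the test split.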
Procedia PDF Downloads 367
10267 Exploring Methods for Urbanization of 'Village in City' in China: A Case Study of Hangzhou
Abstract:
After the economic reform in 1978, urbanization in China grew fast, urging cities to expand at an unprecedented speed. Surrounding villages were annexed unprepared and turned into a new type of community called the 'village in city.' Two things happened here. First, the locals gave up farming and turned to secondary and tertiary industry as a result of losing their land. Second, attracted by the high incomes in cities and the low rents here, plenty of migrants came into the community. This area is important to a city in rapid growth because it provides a transitional zone. But owing to its passivity and low development, the 'village in city' has caused a lot of trouble for the city. Densities of population and construction are both high, while facilities are severely inadequate. Unplanned and illegal structures are built, which creates a complex mixed-function area and leads to a poor residential environment. Besides, the locals have a strong property-rights consciousness regarding the land, which holds back the transformation and development of the community. Although land capitalization can bring significant benefits, it is inappropriate to make a large financial compensation to the locals, and considering the large population of city migrants, it is important to explore the relationship among the 'village in city,' city migrants, and the city itself. Taking the example of Hangzhou, this paper analyzes the development process, functional spatial distribution, industrial structure, and current traffic system of the 'village in city.' Building on this research, the paper puts forward a common method for urban planning through the following means: adding city functions, building civil facilities, re-planning the functional spatial distribution, changing the constitution of local industry, and planning a new traffic system.
Under this plan, the 'village in city' can finally be absorbed into the city and make its own contribution to urbanization.
Keywords: China, city immigrant, urbanization, village in city
Procedia PDF Downloads 217
10266 Hypoglycemic and Hypolipidemic Effects of Aqueous Flower Extract from Nyctanthes arbor-tristis L.
Authors: Brahmanage S. Rangika, Dinithi C. Peiris
Abstract:
Boiled aqueous flower extract (AFE) of Nyctanthes arbor-tristis L. (Family: Oleaceae) is used in the traditional Sri Lankan medicinal system to treat diabetes. However, this use has not been scientifically proven, and the mechanisms by which the flowers reduce diabetes have not been investigated. The present study was carried out to examine the hypoglycemic potential and toxicity effects of the aqueous flower extract of N. arbor-tristis. AFE was prepared, and mice were treated orally with either 250, 500, or 750 mg/kg of AFE or distilled water (control). Fasting and random blood glucose levels were determined. In addition, the toxicity of AFE was determined using chronic oral administration. In normoglycemic mice, the mid dose (500 mg/kg) of AFE significantly (p < 0.01) reduced fasting blood glucose levels by 49% at 4 h post treatment. Further, 500 mg/kg of AFE significantly (p < 0.01) lowered the random blood glucose level of non-fasted normoglycemic mice. AFE significantly lowered total cholesterol and triglyceride levels while increasing HDL levels in the serum. Further, AFE significantly inhibited glucose absorption from the lumen of the intestine and increased diaphragm uptake of glucose. Alpha-amylase inhibitory activity was also evident. However, AFE did not induce any overt signs of toxicity or hepatotoxicity. There were no adverse effects on food and water intake or body weight of the mice during the experimental period. It can be concluded that AFE of N. arbor-tristis possesses safe oral antidiabetic potential mediated via multiple mechanisms. The results of the present study scientifically support the claims made about the use of N. arbor-tristis in the treatment of diabetes mellitus in the traditional Sri Lankan medicinal system. Further, the flowers can also be used as a remedy to improve the blood lipid profile.
Keywords: aqueous extract, hypoglycemic, hypolipidemic, Nyctanthes arbor-tristis flowers, hepatotoxicity
Procedia PDF Downloads 370
10265 Removal of Cr (VI) from Water through Adsorption Process Using GO/PVA as Nanosorbent
Authors: Syed Hadi Hasan, Devendra Kumar Singh, Viyaj Kumar
Abstract:
Cr (VI) is a known toxic heavy metal and has been considered a priority pollutant in water. The effluents of various industries, including electroplating, anodizing baths, leather tanning, steel industries, and chromium-based catalysts, are the major sources of Cr (VI) contamination in the aquatic environment. Cr (VI) shows high mobility in the environment and can easily penetrate the cell membranes of living tissues to exert noxious effects. Cr (VI) contamination in drinking water causes various hazardous effects on human health, such as cancer, skin and stomach irritation or ulceration, dermatitis, damage to the liver and kidney circulation, and nerve tissue damage. Herein, an attempt has been made to develop an efficient adsorbent for the removal of Cr (VI) from water. For this purpose, a nanosorbent composed of polyvinyl alcohol functionalized graphene oxide (GO/PVA) was prepared. The GO/PVA thus obtained was characterized through FTIR, XRD, SEM, and Raman spectroscopy. The as-prepared GO/PVA nanosorbent was utilized for the removal of Cr (VI) in batch mode experiments. The process variables, such as contact time, initial Cr (VI) concentration, pH, and temperature, were optimized. A maximum removal of 99.8% of Cr (VI) was achieved at an initial Cr (VI) concentration of 60 mg/L, pH 2, and a temperature of 35 °C, and equilibrium was achieved within 50 min. The two widely used isotherm models, Langmuir and Freundlich, were analyzed using the linear correlation coefficient (R²), and it was found that the Langmuir model gives the best fit, with a high value of R² for the data of the present adsorption system, which indicates monolayer adsorption of Cr (VI) on the GO/PVA. Kinetic studies were also conducted using pseudo-first-order and pseudo-second-order models, and it was observed that the chemisorptive pseudo-second-order model better described the kinetics of the current adsorption system, with a high value of the correlation coefficient.
Thermodynamic studies were also conducted, and the results showed that the adsorption was spontaneous and endothermic in nature.
Keywords: adsorption, GO/PVA, isotherm, kinetics, nanosorbent, thermodynamics
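The linearized Langmuir fit used to judge the isotherm models above rearranges qe = qm·KL·Ce/(1 + KL·Ce) into Ce/qe = Ce/qm + 1/(qm·KL), so a straight-line fit of Ce/qe against Ce recovers qm and KL. A minimal sketch on synthetic data generated from arbitrary qm and KL values (not the study's measurements):

```python
import numpy as np

qm_true, KL_true = 120.0, 0.15                       # mg/g and L/mg (illustrative)
Ce = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])   # equilibrium conc., mg/L
qe = qm_true * KL_true * Ce / (1 + KL_true * Ce)     # adsorbed amount, mg/g

# Straight-line fit of Ce/qe vs Ce: slope = 1/qm, intercept = 1/(qm*KL).
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_fit, KL_fit = 1 / slope, slope / intercept
print(round(qm_fit, 1), round(KL_fit, 3))  # recovers 120.0 and 0.15
```

On real measurements the same regression yields the R² used to compare the Langmuir fit against Freundlich.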
Procedia PDF Downloads 389
10264 Cooperative Agents to Prevent and Mitigate Distributed Denial of Service Attacks of Internet of Things Devices in Transportation Systems
Authors: Borhan Marzougui
Abstract:
The Road and Transport Authority (RTA) is moving ahead with the implementation of the leadership's vision by exploring all avenues that may bring better security and safety services to the community. Smart transport means using smart technologies such as the IoT (Internet of Things). This technology continues to affirm its important role in the context of information and transportation systems. In fact, the IoT is a network of Internet-connected objects able to collect and exchange different data using embedded sensors. With the growth of the IoT, Distributed Denial of Service (DDoS) attacks are also growing exponentially. DDoS attacks are a major and real threat to various transportation services. Currently, the defense mechanisms are mainly passive in nature, and there is a need to develop a smart technique to handle them. In fact, new IoT devices are being recruited into botnets that DDoS attackers accumulate for their purposes. The aim of this paper is to provide a relevant understanding of the dangerous types of DDoS attacks related to the IoT and to provide valuable guidance for future IoT security methods. Our methodology is based on the development of a distributed algorithm. This algorithm employs dedicated intelligent and cooperative agents to prevent and mitigate DDoS attacks. The proposed technique ensures a preventive action when malicious packets start to be distributed through the connected nodes (the network of IoT devices). In addition, devices such as cameras and radio frequency identification (RFID) readers are connected within the secured network, and the data they generate are analyzed in real time by the intelligent and cooperative agents. The proposed security system is based on a multi-agent system. The obtained results have shown a significant reduction in the number of infected devices and enhanced capabilities of the different security devices.
Keywords: IoT, DDoS, attacks, botnet, security, agents
Procedia PDF Downloads 143
10263 An Investigation Enhancing E-Voting Application Performance
Authors: Aditya Verma
Abstract:
E-voting using blockchain provides a distributed system where data is present on each node in the network and is reliable and secure thanks to the immutability property of the blockchain. This work compares various blockchain consensus algorithms previously used for e-voting applications, based on performance and node scalability, chooses the optimal one, and improves on one such previous implementation by proposing solutions for the loopholes of the optimally performing blockchain consensus algorithm in our chosen application, e-voting.
Keywords: blockchain, parallel BFT, consensus algorithms, performance
Procedia PDF Downloads 167
10262 An Investigation of Wind Loading Effects on the Design of Elevated Steel Tanks with Lattice Tower Supporting Structures
Authors: J. van Vuuren, D. J. van Vuuren, R. Muigai
Abstract:
In recent times, South Africa has experienced extensive droughts that created the need for reliable small water reservoirs. These reservoirs have comparatively quick fabrication and installation times compared to market alternatives. An elevated water tank has inherent potential energy, so no additional water pumps are required to sustain water pressure at the outlet point, thus ensuring that a water source is available even without electricity. The initial construction formwork and the complex geometric shape of concrete towers that require casting can become time-consuming, rendering steel towers preferable. Reinforced concrete foundations, cast in advance, are required to be of sufficient strength. Thereafter, the prefabricated steel supporting structure and tank, which consist of steel panels, can be assembled and erected on site within a couple of days. Due to the time effectiveness of this system, it has become a popular solution to aid drought-stricken areas. These sites are normally in rural areas, schools, or farmland. As these tanks can contain up to 2000 kL (approximately 19.62 MN) of water, combined with supporting lattice steel structures ranging between 5 m and 30 m in height, failure of one of the supporting members will result in system failure. Thus, there is a need to gain a comprehensive understanding of the operating conditions resulting from wind loading on both the tank and the supporting structure. The aim of the research is to investigate the relationship between the theoretical wind loading on a lattice steel tower in combination with an elevated sectional steel tank, and the current wind loading codes as applicable to South Africa. The research compares the respective design parameters (both theoretical and from the wind loading codes), whereby FEA analyses are conducted on the various design solutions.
The currently available wind loading codes are not sufficient to design slender cantilevered latticed steel towers that support elevated water storage tanks. Numerous factors in the design codes are not comprehensively considered when designing the system, as these codes depend on various assumptions. Factors that require investigation in this study are: the wind loading angle to the face of the structure that results in the maximum load; the internal structural effects on models with different bracing patterns; the influence of the aspect ratio of the tank on the loading; and the effect of the clearance height of the tank on the structural members. Wind loads, as the variable that results in the highest failure rate of cantilevered lattice steel tower structures, require greater understanding. This study aims to contribute towards the design process of elevated steel tanks with lattice tower supporting structures.
Keywords: aspect ratio, bracing patterns, clearance height, elevated steel tanks, lattice steel tower, wind loads
Procedia PDF Downloads 150
10261 Life Cycle Assessment of Mass Timber Structure, Construction Process as System Boundary
Authors: Mahboobeh Hemmati, Tahar Messadi, Hongmei Gu
Abstract:
Today, life cycle assessment (LCA) is a leading method for mitigating the environmental impacts arising from the building sector. In this paper, LCA is used to quantify the greenhouse gas (GHG) emissions during the construction phase of the largest mass timber residential structure in the United States, Adohi Hall, a 200,000-square-foot, 708-bed complex located on the campus of the University of Arkansas. The energy used for building operation is the most dominant source of emissions in the building industry. Lately, however, efforts to increase the efficiency of building operation in terms of emissions have been successful. As a result, attention has now shifted to embodied carbon, which is more noticeable in the building life cycle. Unfortunately, most studies have focused on the manufacturing stage, and only a few have addressed the construction process to date. Specifically, little data is available about the environmental impacts associated with the construction of mass timber. This study therefore presents an assessment of the environmental impact of the construction processes, based on the real, newly built mass timber building mentioned above. The system boundary of this study covers modules A4 and A5 of the building LCA standard EN 15978: module A4 includes material and equipment transportation, and module A5 covers the construction and installation process. This research proceeds in two stages: first, quantifying the materials and equipment deployed in the building, and second, determining the embodied carbon associated with transporting construction materials and equipment to the site and running the equipment that installs them. The global warming potential (GWP) of the building is the primary metric considered in this research. The outcomes of this study bring to the fore a better understanding of emission hotspots during the construction process.
Moreover, a comparative analysis of the mass timber construction process with that of a theoretically similar steel building will enable an effective assessment of the environmental efficiency of mass timber.
Keywords: construction process, GWP, LCA, mass timber
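At its simplest, the module A4 accounting described above reduces to summing mass × haul distance × a transport emission factor per shipment. A minimal sketch with placeholder quantities and an assumed factor (not the study's data):

```python
EF_TRUCK = 0.11  # kg CO2e per tonne-km, an assumed illustrative factor

# (material, mass in tonnes, haul distance in km) -- placeholder values
shipments = [
    ("CLT floor panels", 450.0, 1200.0),
    ("glulam columns", 120.0, 950.0),
]

# Module A4 transport GWP: sum of mass * distance * emission factor.
gwp_a4 = sum(mass * km * EF_TRUCK for _, mass, km in shipments)
print(f"module A4 GWP: {gwp_a4 / 1000:.1f} t CO2e")
```

Module A5 is tallied analogously, with equipment operating hours and fuel-based emission factors replacing tonne-kilometres.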
Procedia PDF Downloads 167
10260 E-Learning Platform for School Kids
Authors: Gihan Thilakarathna, Fernando Ishara, Rathnayake Yasith, Bandara A. M. R. Y.
Abstract:
E-learning is a crucial component of intelligent education. Even in the midst of a pandemic, e-learning is becoming increasingly important in the educational system. Several e-learning programs are accessible for students. Here, we decided to create an e-learning framework for children. We have found a few issues that teachers are having with their online classes. When there are numerous students in an online classroom, how does a teacher recognize a student's focus on academics and below-the-surface behaviors? Some kids are not paying attention in class, and others are napping; the teacher is unable to keep track of each and every student. A key challenge in e-learning is online exams, because students can cheat easily during them; hence the need for exam proctoring arises. Here we propose an automated online exam cheating detection method using a web camera. The purpose of this project is to present an e-learning platform for math education and to include games for kids as an alternative teaching method for math students. The games will be accessible via a web browser, with imagery drawn in a cartoonish style; this will help students learn math through play. Everything in this day and age is moving towards automation. However, automatic answer evaluation is only available for MCQ-based questions; as a result, checkers have a difficult time evaluating theory solutions. The current system requires more manpower and takes a long time to evaluate responses. It is also possible for two identical responses to be marked differently and receive two different grades. As a result, this application employs machine learning techniques to provide an automatic evaluation of subjective responses based on the keywords provided to the computer alongside the student input, resulting in a fair distribution of marks. In addition, it will save time and manpower.
We used deep learning, machine learning, image processing, and natural language technologies to develop these research components.
Keywords: math, education games, e-learning platform, artificial intelligence
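A hypothetical keyword-matching scorer of the kind outlined above awards marks in proportion to the teacher-supplied keywords found in the student's answer. The function name and scheme are illustrative only; the paper's ML-based evaluation is more elaborate.

```python
import re

def score_answer(answer: str, keywords: list, max_marks: float) -> float:
    """Award marks proportional to the fraction of keywords present."""
    tokens = set(re.findall(r"[a-z]+", answer.lower()))
    hits = sum(1 for kw in keywords if kw.lower() in tokens)
    return round(max_marks * hits / len(keywords), 1)

keywords = ["photosynthesis", "chlorophyll", "sunlight"]
answer = "Plants use sunlight and chlorophyll to make their food."
print(score_answer(answer, keywords, 10))  # 2 of 3 keywords -> 6.7
```

Exact token matching like this misses synonyms and paraphrases, which is precisely where embedding-based similarity would take over in a fuller system.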
Procedia PDF Downloads 156
10259 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining
Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj
Abstract:
Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutes and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach to obtaining valuable insights into the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have been frequently studied to anticipate growth of sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities.
In an initial step, we examine how structured and semi-structured Web data from governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest, and Artificial Neural Network, are applied and compared, with the goal of optimizing prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
Keywords: data mining, SME growth, success factors, web mining
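The classifier comparison described above can be sketched as follows, assuming scikit-learn and using synthetic growth-labeled data in place of the insurer's ground truth: an SVM, a Random Forest, and an MLP (as the artificial neural network) are compared by cross-validated accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for growth-labeled SME feature vectors.
X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           random_state=1)
models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=1),
    "ANN (MLP)": MLPClassifier(max_iter=2000, random_state=1),
}
# 5-fold cross-validation gives a comparable accuracy estimate per model.
scores = {name: cross_val_score(clf, X, y, cv=5).mean()
          for name, clf in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

In the study itself, the web-mined features would be appended to each SME's feature vector before this comparison, so the gain from WM shows up directly in the cross-validated scores.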
Procedia PDF Downloads 267
10258 Technological and Economic Investigation of Concentrated Photovoltaic and Thermal Systems: A Case Study of Iran
Authors: Moloud Torkandam
Abstract:
Cities must be designed and built in a way that minimizes their need for fossil fuel. Undoubtedly, the acceptance of this principle in previous eras is undeniable with respect to their modes of construction; perhaps only because of the great diversity of materials and new technologies in the contemporary era has such a principle been forgotten in buildings. The question of optimizing energy consumption in buildings has attracted a great deal of attention in many countries, which have in this way been able to cut energy consumption by up to 30 percent. Energy consumption in our country is remarkably higher than global standards, and the most important reason is the undesirable state of buildings from the standpoint of energy consumption. In addition to protecting natural and fuel resources for future generations, reducing the use of fossil energies may also bring about desirable outcomes such as a decrease in greenhouse gases (whose emissions cause global warming, the melting of polar ice, the rise in sea level, and climatic changes of the planet), a decrease in the destructive effects of contamination in residential complexes and especially urban environments, and progress towards national self-sufficiency, the country's independence, and the preservation of national capital. This research recognizes that in this modern day and age, living sustainably is a prerequisite for ensuring a bright future and a high quality of life. In acquiring this living standard, we will maintain the functions and ability of our environment to serve and sustain our livelihoods. Electricity is now an integral part of modern life, a basic necessity. In the provision of electricity, we are committed to respecting the environment by reducing the use of fossil fuels through proven technologies that use local renewable and natural resources as their energy source.
As far as this research is concerned, it is completely necessary to work on different types of energy production, such as solar and CPVT systems.
Keywords: energy, photovoltaic, thermal system, solar energy, CPVT
Procedia PDF Downloads 82
10257 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow relies on an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of applying morphometric data to those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's laws. Applying morphometric data to reconstruct the branching pattern while simultaneously applying Murray's laws at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers are assigned according to the diameter-defined Strahler system. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
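Murray's law at a bifurcation states that the cube of the parent radius equals the sum of the cubes of the daughter radii. For a symmetric split this gives each daughter radius directly; a minimal sketch with an illustrative parent radius (not the study's morphometric data):

```python
def daughter_radius(r_parent: float, n_daughters: int = 2) -> float:
    """Radius of each daughter vessel in a symmetric n-way bifurcation,
    per Murray's law: r_parent**3 == sum of daughter radii cubed."""
    return (r_parent ** 3 / n_daughters) ** (1.0 / 3.0)

r_parent = 1.0  # mm, illustrative parent vessel radius
r_d = daughter_radius(r_parent)
print(round(r_d, 3))           # ~0.794 mm per daughter
print(round(2 * r_d ** 3, 3))  # cube sum conserved: 1.0
```

Applying this rule recursively at each generated bifurcation is what keeps the sub-resolution part of the reconstructed tree hemodynamically consistent.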
Procedia PDF Downloads 272
10256 Teratogenic Effect of Bisphenol A in Development of Balb/C Mouse
Authors: Nazihe Sedighi, Mohsen Nokhbatolphoghaei
Abstract:
Bisphenol A (BPA) is a monomer used in the manufacture of polycarbonate plastics. Owing to properties such as transparency and heat and impact resistance, it is used widely in medicine, sports goods, electronic components, and food containers. It is also used in the production of resins applied for lining cans. BPA is released from resins and polycarbonate when containers are heated or used repeatedly, from which it can enter the body. Several reports indicate the presence of BPA in the placenta, amniotic fluid, and the embryo itself. While researchers have investigated the teratogenic effect of BPA on embryos, very limited work has been done on its effects when applied from the early stages of development. In this study, the teratogenic effect of BPA was investigated from the earliest preimplantation stage (day zero) through day 15.5 of development of Balb/C mouse embryos. After pregnancy was confirmed by observation of a vaginal plug, pregnant mice were divided into five groups. The three experimental groups received 500, 750, and 1000 mg/kg/d of Bisphenol A orally according to body weight. The sham group was treated with sesame oil, which was used as the vehicle, and the control group remained intact. On day 18.5 of gestation, embryos were removed from the uterus. Half of the embryos, chosen at random, were fixed in Bouin's solution for tissue analysis; the other half were prepared for skeletal staining using Alizarin Red and Alcian Blue dyes. The results showed that embryonic weight and crown-rump length decreased significantly (P < 0.05) in all experimental groups compared to the control and sham groups. Skeletal abnormalities such as delayed ossification of the skull and limbs, as well as deviation of the backbone, were also seen. This research suggests that pregnant mothers need to be aware of the possible teratogenic effects of BPA at any stage of pregnancy, especially from the early to mid stages.
In this case, pregnant mothers may need to avoid using polycarbonate plastic products as containers for food or drink.
Keywords: bisphenol A, development, polycarbonate plastic, skeletal system, teratogenicity
Procedia PDF Downloads 294
10255 Oil Logistics for Refining to Northern Europe
Authors: Vladimir Klepikov
Abstract:
To develop programs to supply crude oil to North European refineries, it is necessary to take into account the refineries' location, crude refining capacity, and transport infrastructure capacity. Among the countries of the region, we include those having a marine boundary along the North Sea and the Baltic Sea (from France in the west to Finland in the east). The paper covers the geographic allocation of the refineries and contains an evaluation of refinery capacities for the region under review. The sustainable operation of refineries in the region is determined by the capacity of the transportation system to supply crude oil to them, and an assessment of this capacity is conducted. The research covers the period 2005-2015, using the quantitative analysis method. The countries are classified by the aggregate capacities of their refineries and the crude oil output on their territory. The crude oil output capacities in the region over the period under review are determined, and the capacities of the region's transportation system to supply crude oil produced in the region to the refineries are revealed. The analysis suggests that imported raw material is the main source of oil for the refineries in the region; the main sources of crude oil supplies to North European refineries are reviewed. The change in refinery capacities in the group of countries and in each particular country, as well as the utilization of refinery capacities in the region over the period under review, was studied. The data suggest that the bulk of crude oil is supplied by marine and pipeline transport. The paper contains an assessment of the share of pipeline transport in the overall crude oil cargo flow. The refineries' production rates for the groups of countries under review and for each particular country were also studied.
Our study revealed a trend towards an increase in crude oil refining at the refineries of the region and a reduction in crude oil output. If this trend persists in the near future, the flow of imported crude oil and the utilization of the North European logistics infrastructure may increase. According to the study, the existing transport infrastructure in the region is able to handle the increasing flow of imported crude oil.
Keywords: European region, infrastructure, oil terminal capacity, pipeline capacity, tanker draft
Procedia PDF Downloads 172
10254 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, owing to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, representing the alphanumeric and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z representing the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
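A loose, purely illustrative sketch of the separation idea behind such correctors (the helper name, the toy data, and the single-functional simplification are ours; the full scheme described above additionally applies the Kaiser rule, whitening, and clustering): after centering, a single linear functional built from the mean of the error set can separate most errors from correct predictions.

```python
def build_linear_corrector(correct, errors):
    """Toy sketch of a one-functional corrector: centre all measurement
    vectors, then use the mean of the centred error set Y as a separating
    direction w. Returns a function flagging inputs on the error side."""
    all_pts = correct + errors
    dim = len(all_pts[0])
    mean = [sum(p[i] for p in all_pts) / len(all_pts) for i in range(dim)]
    centred_err = [[p[i] - mean[i] for i in range(dim)] for p in errors]
    w = [sum(p[i] for p in centred_err) / len(centred_err) for i in range(dim)]

    def project(p):
        # Signed projection of a centred point onto the direction w
        return sum((p[i] - mean[i]) * w[i] for i in range(dim))

    # Threshold halfway between the two classes along w
    thr = 0.5 * (min(project(e) for e in errors) +
                 max(project(c) for c in correct))
    return lambda p: project(p) > thr

# Correct predictions cluster near the origin; errors cluster elsewhere
flag = build_linear_corrector(
    correct=[[0.0, 0.1], [0.2, -0.1], [-0.1, 0.0]],
    errors=[[2.0, 2.1], [1.8, 2.2]])
```

In high dimension, the stochastic separation theorem guarantees that such a single functional separates a given error from the rest with high probability, which is what makes these shallow correctors cheap to train.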
Procedia PDF Downloads 100
10253 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy
Authors: Ozgu Hafizoglu
Abstract:
An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems through physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper demonstrates the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions in the new era, especially post-pandemic. In this paper, we reveal how to draw an analogy to scientific research to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, as within and between brain regions. The distinct phases of the problem-solving process are: stimulus, encoding, mapping, inference, and response. Based on brain research so far, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain's mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to the matching conditions of analogical reasoning and relational information; the encoding, mapping, inference, and response processes; and the verification of perceptual responses in four-term analogical reasoning.
Finally, we relate all these terms to mental leaps, mental maps, mental hops, and mental loops to make the mental model of CMA clear.
Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops
Procedia PDF Downloads 165
10252 Wearable Jacket for Game-Based Post-Stroke Arm Rehabilitation
Authors: A. Raj Kumar, A. Okunseinde, P. Raghavan, V. Kapila
Abstract:
Stroke is the leading cause of adult disability worldwide. With recent advances in immediate post-stroke care, there is an increasing number of young stroke survivors under the age of 65 years. While most stroke survivors will regain the ability to walk, they often experience long-term arm and hand motor impairments. Long-term upper limb rehabilitation is needed to restore movement and function and to prevent deterioration from complications such as learned non-use and learned bad-use. We have developed a novel virtual coach, a wearable instrumented rehabilitation jacket, to motivate individuals to participate in long-term skill re-learning that can be personalized to their impairment profile. The jacket can estimate the movements of an individual's arms using embedded off-the-shelf sensors (e.g., a 9-DOF IMU for inertial measurements and flex sensors for measuring the angular orientation of fingers) and a Bluetooth Low Energy (BLE) powered microcontroller (e.g., RFduino) to non-intrusively extract data. The 9-DOF IMU sensors contain a 3-axis accelerometer, 3-axis gyroscope, and 3-axis magnetometer to compute quaternions, which are transmitted to a computer to compute the Euler angles and estimate the angular orientation of the arms. The data are used in a gaming environment to provide visual and/or haptic feedback for goal-based, augmented-reality training to facilitate re-learning in a cost-effective, evidence-based manner. The full paper will elaborate on the technical aspects of communication, the interactive gaming environment, and the physical aspects of the electronics necessary to achieve our stated goal. Moreover, the paper will suggest methods to utilize the proposed system as a cheaper, portable, and more versatile alternative to existing instrumentation for post-stroke personalized arm rehabilitation.
Keywords: feedback, gaming, Euler angles, rehabilitation, augmented reality
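The quaternion-to-Euler step mentioned above can be sketched as follows. This is the standard conversion for the roll-pitch-yaw rotation sequence, not the authors' implementation, and the test rotation is our own example:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to roll-pitch-yaw
    Euler angles in radians (aerospace ZYX sequence)."""
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    sinp = 2.0 * (w * y - z * x)
    # Clamp to +/- 90 degrees to guard against numerical drift at gimbal lock
    pitch = math.copysign(math.pi / 2, sinp) if abs(sinp) >= 1 else math.asin(sinp)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# A 90-degree rotation about the vertical axis should give yaw = pi/2
r, p, yw = quaternion_to_euler(math.cos(math.pi / 4), 0.0, 0.0,
                               math.sin(math.pi / 4))
```

On the jacket, the same conversion would run on the host computer after the BLE link delivers each quaternion sample.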
Procedia PDF Downloads 277
10251 The Importance of Previous Examination Results in Future Differential Diagnostic Procedures, Especially in the Era of Covid-19
Authors: Angelis P. Barlampas
Abstract:
Purpose or Learning Objective: It is well known that previous examinations play a major role in future diagnosis, avoiding unnecessary new exams that cost time and money for both the patient and the health system. A case is presented in which the patient's past results, in combination with the minimum of new tests, yielded an easy final diagnosis. Methods or Background: A middle-aged man visited the emergency department complaining of poorly controlled, persistent fever over the last few days. Laboratory tests showed an elevated white blood cell count with a neutrophil shift and abnormal CRP. The patient had been admitted to hospital a month earlier for continuing lung symptomatology after a recent Covid-19 infection. Results or Findings: Computed tomography showed a solid mass with spiculated margins in the right lower lobe. After intravenous iodine contrast administration, there was mild peripheral enhancement and an eccentric non-enhancing area. Lung cancer was suspected. Comparison with the patient's most recent prior computed tomography revealed no mass in the area of interest, only signs of recent post-Covid-19 lung parenchyma abnormalities. A new mass appearing within a month's time span is highly unlikely to be a cancer and far more likely to be a benign lesion; it was obvious that an abscess was the most suitable explanation. The patient was admitted to hospital and antibiotic therapy was given, with very good results. After a few days, the patient was afebrile and in good condition. Conclusion: In this case, a PET scan or a biopsy was avoided thanks to the patient's medical history and the availability of previous examinations. It is worth encouraging patients to keep their medical records, and worth organizing the health system more efficiently with current technology for archiving medical examinations.
Keywords: covid-19, chest ct, cancer, abscess, fever
Procedia PDF Downloads 60
10250 Pathological Disparities in Patients Diagnosed with Prostate Imaging Reporting and Data System 3 Lesions: A Retrospective Study in a High-Volume Academic Center
Authors: M. Reza Roshandel, Tannaz Aghaei Badr, Batoul Khoundabi, Sara C. Lewis, Soroush Rais-Bahrami, John Sfakianos, Reza Mehrazin, Ash K. Tewari
Abstract:
Introduction: Prostate biopsy is the most reliable diagnostic method for choosing the appropriate management of prostate cancer. However, discrepancies between the Gleason grade groups (GG) of different biopsies remain a significant concern. This study aims to assess the association of radiological factors with GG discrepancies in patients with index Prostate Imaging Reporting and Data System (PI-RADS) 3 lesions, using radical prostatectomy (RP) specimens as the most accurate and informative pathology. Methods: This single-institution retrospective study was performed on a total of 2289 consecutive prostate cancer patients who underwent combined targeted and systematic prostate biopsy followed by RP. The database was explored for patients with index PI-RADS 3 lesions (versions 2 and 2.1). Cancers with PI-RADS 4 or 5 scores were excluded from the study. Patient characteristics and radiologic features were analyzed by multivariable logistic regression. Number density of lesions was defined as the number of lesions per unit prostatic volume. Results: Of the 151 prostate cancer cases with PI-RADS 3 index lesions, 27% and 17% had upgrades and downgrades at RP, respectively. Analysis of grade changes showed no significant associations between discrepancies and the number or the number density of PI-RADS 3 lesions. Moreover, the study showed no significant association of GG changes with race, age, location of the lesions, or prostate volume. Conclusions: This study demonstrated that in PI-RADS 3 cancerous nodules, the chance of grade change in the final pathology of RP specimens was low. Furthermore, having multiple PI-RADS 3 nodules did not change the conclusion, as the possibility of grade changes in patients with multiple nodules was similar to that in those with solitary lesions.
Keywords: prostate, adenocarcinoma, multiparametric MRI, Gleason score, robot-assisted surgery
Procedia PDF Downloads 133
10249 Healthy Feeding and Drinking Troughs for Profitable Intensive Deep-Litter Poultry Farming
Authors: Godwin Ojochogu Adejo, Evelyn UnekwuOjo Adejo, Sunday UnenwOjo Adejo
Abstract:
The mainstream contemporary approach to controlling the impact of diseases among poultry birds relies largely on curative measures through the administration of drugs to infected birds. Most times, as observed in the deep-litter poultry farming system, entire flocks, including uninfected birds, receive treatment they do not need. As such, unguarded use of chemical drugs and antibiotics has led to wastage and the accumulation of chemical residues in poultry products, with associated health hazards to humans. However, wanton and frequent drug usage in poultry is avoidable if feeding and drinking equipment are designed to curb infection transmission among birds. Using toxicological assays as a guide, and with efficiency and simplicity in view, two newly field-tested and recently patented pieces of equipment, called the 'healthy liquid drinking trough (HDT)' and the 'healthy feeding trough (HFT)', were designed to systematically eliminate contamination of the feeding and drinking channels, thereby curbing widespread infection and transmission of diseases in the (intensive) deep-litter poultry farming system. Upon combined usage, they automatically and drastically reduced both the amount and frequency of antibiotic use in poultry by over 50%. Additionally, they improved feed and water utilization and eliminated wastage by over 80%, reduced labour by over 70%, reduced production cost by about 15%, and reduced chemical residues in poultry meat or eggs by over 85%. These new and cheap technologies, which require no energy input, are likely to improve the safety of poultry products for consumers' health, increase marketability locally and for export, and increase output and profit, especially among poultry farmers and poor people who keep poultry or utilize poultry products in developing countries.
Keywords: healthy, trough, toxicological, assay-guided, poultry
Procedia PDF Downloads 156
10248 Transmission Line Protection Challenges under High Penetration of Renewable Energy Sources and Proposed Solutions: A Review
Authors: Melake Kuflom
Abstract:
European power networks involve multiple overhead transmission lines forming a highly duplicated system that delivers reliable and stable electrical energy to the distribution level. The transmission line protection schemes applied in the existing GB transmission network are normally independent unit differential and time-stepped distance protection schemes, referred to as main-1 and main-2 respectively, with overcurrent protection as a backup. The increasing penetration of renewable energy sources, commonly referred to as 'weak sources', into the power network has resulted in a decline in fault level. Traditionally, the fault level of the GB transmission network has been strong; hence the fault current contribution is more than sufficient to ensure the correct operation of the protection schemes. However, numerous conventional coal and nuclear generators have been, or are about to be, shut down due to the societal requirement for CO2 emission reduction. This has resulted in a reduction in the fault level on some transmission lines, and therefore adaptive transmission line protection is required. Generally, greater utilization of renewable energy sources generated from wind or direct solar energy reduces CO2 emissions and can increase system security and reliability, but it reduces the fault level, which has an adverse effect on protection. Consequently, the effectiveness of conventional protection schemes under low fault levels needs to be reviewed, particularly for future GB transmission network operating scenarios. The proposed paper will evaluate the transmission line protection challenges under high penetration of renewable energy sources and provide alternative viable protection solutions based on the problems observed. The paper will consider the assessment of renewable energy sources (RES) based on fully rated converter technology.
The DIgSILENT PowerFactory software tool will be used to model the network.
Keywords: fault level, protection schemes, relay settings, relay coordination, renewable energy sources
Procedia PDF Downloads 206
10247 Photodegradation of Profoxydim Herbicide in Amended Paddy Soil-Water System
Authors: A. Cervantes-Diaz, B. Sevilla-Moran, Manuel Alcami, Al Mokhtar Lamsabhi, J. L. Alonso-Prados, P. Sandin-España
Abstract:
Profoxydim is a post-emergence herbicide belonging to the cyclohexanedione oxime family, used to control weeds in rice crops. The use of soil organic amendments has increased significantly in recent decades, and their effects on the behavior of many herbicides are still unknown. Additionally, photolysis is known to be an important degradation process to consider when evaluating the persistence of this family of herbicides in the environment. In this work, the photodegradation of profoxydim in a paddy soil-water system amended with alperujo compost was studied. Photodegradation experiments were carried out under laboratory conditions using simulated solar light (Suntest equipment) in order to evaluate the reaction kinetics of the active substance. The photochemical behavior of profoxydim was investigated in soil with and without the alperujo amendment. Furthermore, owing to the characteristics of the rice crop, profoxydim photodegradation in water in contact with these soils was also studied. Profoxydim degradation kinetics were determined by high-performance liquid chromatography with diode-array detection (HPLC-DAD). We also followed the evolution of the resulting transformation by-products, whose tentative identification was achieved by mass spectrometry. In all experiments, the profoxydim photodegradation data fitted first-order kinetics. Photodegradation of profoxydim was very rapid in all cases: half-lives in aqueous matrices ranged from 86±0.3 to 103±0.5 min, while the addition of the alperujo amendment to the soil increased the half-life from 62±0.2 min (soil) to 75±0.3 min (amended soil). A comparison with other organic amendments was also performed. The results showed that the presence of the organic amendment retarded photodegradation in both paddy soil and water.
Regarding degradation products, the main process involved was cleavage of the oxime moiety, giving rise to the formation of the corresponding imine compound.
Keywords: by-products, herbicide, organic amendment, photodegradation, profoxydim
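The first-order kinetics used to fit the data above imply a rate constant k = ln 2 / t½. A minimal sketch (the function name is ours and the numbers are illustrative, using the reported soil half-life of ~62 min):

```python
import math

def first_order_remaining(c0, half_life, t):
    """Amount remaining after time t under first-order decay with
    rate constant k = ln(2) / half_life: C(t) = C0 * exp(-k * t)."""
    k = math.log(2.0) / half_life
    return c0 * math.exp(-k * t)

# Profoxydim in unamended soil (reported half-life ~62 min):
# after one half-life, half of the initial amount should remain.
remaining = first_order_remaining(100.0, 62.0, 62.0)
```

The same expression, with t½ = 75 min, reproduces the slower decay reported for the amended soil.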
Procedia PDF Downloads 79
10246 Integration of Immigrant Students into Local Education System
Authors: Suheyla Demi̇rkol Orak
Abstract:
The need for inclusive education is one of the most important consequences of both regular and irregular immigration. The case of Syrian immigrants is even more pressing than other immigration cases in world history, since a massive immigration wave has affected the socio-economic profiles of countries across the world. When Syrians emigrated from Syria, they aimed to survive and leave the war behind, but survival is not possible without handling language-related problems: humans exist, and preserve their existence, through their language. This is a central concern for the integration of Syrians into host countries. Many countries run various programs to integrate Syrians into the majority group through either assimilation or adaptation policies. Turkey has received the lion's share of Syrian immigration, and accordingly its language education system should be analyzed rigorously in order to arrive at a well-matched program for the integration of Syrians. This study aims to generate an inclusive education model to catalyze the integration of immigrant Syrian students into the majority socio-economic group by overcoming the language barrier, while prioritizing the identity of the immigrants. The study follows a narrative literature review, which reviews and critiques the relevant literature and offers a new conceptualization derived from it, and derives a critical, localized bilingual education model. As the outcome of the narrative literature review, a bilingual education model that prioritizes the identity of the target community was designed.
In the present study, the main bilingual education programs and the bilingual education policies of most countries were reviewed critically, and suggestions were listed for Syrian immigrants, predominantly in Turkey, from which other countries may also benefit by localizing the practices.
Keywords: bi/multilingual education, sheltered education, immigrants, glocalization, submersion program, immersion program
Procedia PDF Downloads 85
10245 Urinary Exosome miR-30c-5p as a Biomarker for Early-Stage Clear Cell Renal Cell Carcinoma
Authors: Shangqing Song, Bin Xu, Yajun Cheng, Zhong Wang
Abstract:
miRNAs derived from exosomes present in body fluids such as urine are regarded as potential biomarkers for the diagnosis and prognosis of various human cancers, as mature miRNAs can be stably preserved by exosomes. However, their potential value in clear cell renal cell carcinoma (ccRCC) diagnosis and prognosis remains unclear. In the present study, differentially expressed miRNAs from urinary exosomes were identified by next-generation sequencing (NGS); sixteen differentially expressed miRNAs were identified between ccRCC patients and healthy donors. To explore a specific diagnostic biomarker of ccRCC, we validated these urinary exosomes by qRT-PCR in 70 early-stage renal cancer patients, 30 healthy people, and patients with other urinary system cancers, including 30 early-stage prostate cancer patients and 30 early-stage bladder cancer patients. The results showed that urinary exosomal miR-30c-5p could be stably amplified, and its expression showed no significant difference between the other urinary system cancers and healthy controls; however, the expression level of miR-30c-5p in the urinary exosomes of ccRCC patients was lower than in healthy people, and the receiver operating characteristic (ROC) curve showed an area under the curve (AUC) of 0.8192 (95% confidence interval 0.7388-0.8996, P = 0.0000). In addition, up-regulating miR-30c-5p expression could inhibit renal cell carcinoma cell growth. Lastly, HSPA5 was found to be a direct target gene of miR-30c-5p, and HSPA5 depletion reversed the promotion of ccRCC growth caused by the miR-30c-5p inhibitor. In conclusion, this study demonstrated that urinary exosomal miR-30c-5p is readily accessible as a diagnostic biomarker of early-stage ccRCC, and that miR-30c-5p might modulate the expression of HSPA5, which correlates with the progression of ccRCC.
Keywords: clear cell renal cell carcinoma, exosome, HSPA5, miR-30c-5p
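For readers unfamiliar with the AUC figure quoted above: it equals the probability that a randomly chosen case scores higher than a randomly chosen control under the chosen scoring. A minimal sketch with invented toy scores (not the study data):

```python
def auc_from_scores(positives, negatives):
    """Empirical AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative one, with ties
    counting 0.5. Equivalent to the Mann-Whitney U statistic scaled
    to [0, 1]."""
    wins = 0.0
    for p in positives:
        for n in negatives:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(positives) * len(negatives))

# Toy scores only: since miR-30c-5p is LOWER in ccRCC, a real analysis
# would score by decreasing expression; these numbers are invented.
auc = auc_from_scores(positives=[0.9, 0.8, 0.75, 0.4],
                      negatives=[0.6, 0.3, 0.2])
```

An AUC of 0.5 corresponds to a useless marker; the study's 0.8192 indicates good discrimination.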
Procedia PDF Downloads 267
10244 Impact of Simulated Brain Interstitial Fluid Flow on the Chemokine CXC-Chemokine-Ligand-12 Release From an Alginate-Based Hydrogel
Authors: Wiam El Kheir, Anais Dumais, Maude Beaudoin, Bernard Marcos, Nick Virgilio, Benoit Paquette, Nathalie Faucheux, Marc-Antoine Lauzon
Abstract:
The highly infiltrative pattern of glioblastoma multiforme (GBM) cells is the main cause of the failure of current standard treatments. The tumor's high heterogeneity, the interstitial fluid flow (IFF), and chemokines guide GBM cell migration in the brain parenchyma, resulting in tumor recurrence. Drug delivery systems have emerged as an alternative approach for developing effective treatments for the disease. Some recent studies have proposed harnessing the effect of CXC-chemokine-ligand-12 (CXCL12) to direct and control cancer cell migration through a delivery system. However, the effect of the dynamics of the brain environment on such delivery systems remains poorly understood. Nanoparticles (NPs) and hydrogels are known to be good carriers for the encapsulation of different agents and for controlling their release. We studied the release of CXCL12 (free or loaded into NPs) from an alginate-based hydrogel under static and indirect perfusion (IP) conditions. Under static conditions, the main phenomenon driving CXCL12 release from the hydrogel was diffusion, with strong interactions between the positively charged CXCL12 and the negatively charged alginate; CXCL12 release profiles were independent of the initial mass loading. We then demonstrated that the release could be tuned by loading CXCL12 into alginate/chitosan nanoparticles (Alg/Chit-NPs) embedded in the alginate hydrogel. The initial burst release was substantially attenuated, and overall cumulative release percentages of 21%, 16%, and 7% were observed for initial mass loadings of 0.07, 0.13, and 0.26 µg, respectively, suggesting stronger electrostatic interactions. The results were modeled mathematically within a previously developed framework based on Fick's second law of diffusion to estimate the effective diffusion coefficient (Deff) and the mass transfer coefficient. Embedding CXCL12 into NPs decreased Deff by an order of magnitude, which was coherent with the experimental data.
Thereafter, we developed an in-vitro 3D model that takes into consideration the convective contribution of the brain IFF, in order to study CXCL12 release in an in-vitro microenvironment that mimics the human brain as faithfully as possible. By its unique design, the model also allowed us to understand the effect of IP on CXCL12 release with respect to time and space. Four flow rates (0.5, 3, 6.5, and 10 µL/min), which may influence CXCL12 release in vivo depending on the tumor location, were assessed. Under IP, cumulative release percentages varying between 4.5-7.3%, 23-58.5%, 77.8-92.5%, and 89.2-95.9% were observed at these flow rates for the three initial mass loadings of 0.08, 0.16, and 0.33 µg. As the flow rate increased, IP culture conditions resulted in a higher release of CXCL12 compared to static conditions, as the convective contribution became the main mass transport phenomenon. Furthermore, depending on the flow rate, IP had a direct impact on CXCL12 distribution within the simulated brain tissue, which illustrates the importance of developing such 3D in-vitro models to assess the efficiency of delivery systems targeting the brain. In future work, using this very model, we aim to understand the impact of the resulting chemokine gradients, under various flows, on GBM cell behavior, while allowing the cells to express their invasive characteristics in an in-vitro microenvironment that mimics the in-vivo brain parenchyma.
Keywords: 3D culture system, chemokines gradient, glioblastoma multiforme, kinetic release, mathematical modeling
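The diffusion-driven (static) release described above can be caricatured with an explicit finite-difference solution of Fick's second law in a slab with perfect-sink boundaries. All parameter values here are illustrative assumptions, not the fitted Deff from the study, and the scheme ignores the electrostatic binding the authors report:

```python
def fickian_release(deff, thickness, n_nodes=21, dt=None, t_end=3600.0):
    """Explicit finite-difference sketch of Fick's second law in a slab
    of given thickness (m) with perfect-sink boundaries, returning the
    cumulative fraction of the initial load released by t_end (s)."""
    dx = thickness / (n_nodes - 1)
    if dt is None:
        dt = 0.4 * dx * dx / deff        # keep the explicit scheme stable
    c = [1.0] * n_nodes                  # uniform initial loading
    c[0] = c[-1] = 0.0                   # sink boundary condition
    for _ in range(int(t_end / dt)):
        nxt = c[:]
        for i in range(1, n_nodes - 1):
            nxt[i] = c[i] + deff * dt / (dx * dx) * (c[i-1] - 2*c[i] + c[i+1])
        nxt[0] = nxt[-1] = 0.0
        c = nxt
    remaining = sum(c) / (n_nodes - 2)   # interior average ~ load left
    return 1.0 - remaining

# Illustrative values only: Deff = 1e-10 m^2/s, 1 mm slab, 1 h
frac = fickian_release(deff=1e-10, thickness=1e-3, t_end=3600.0)
```

Lowering `deff` by an order of magnitude, as the NP embedding did in the study, visibly slows the computed release curve.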
Procedia PDF Downloads 85
10243 Re-Entrant Direct Hexagonal Phases in a Lyotropic System Induced by Ionic Liquids
Authors: Saheli Mitra, Ramesh Karri, Praveen K. Mylapalli, Arka. B. Dey, Gourav Bhattacharya, Gouriprasanna Roy, Syed M. Kamil, Surajit Dhara, Sunil K. Sinha, Sajal K. Ghosh
Abstract:
The most well-known structures of lyotropic liquid crystalline systems are the two dimensional hexagonal phase of cylindrical micelles with a positive interfacial curvature and the lamellar phase of flat bilayers with zero interfacial curvature. In aqueous solution of surfactants, the concentration dependent phase transitions have been investigated extensively. However, instead of changing the surfactant concentrations, the local curvature of an aggregate can be altered by tuning the electrostatic interactions among the constituent molecules. Intermediate phases with non-uniform interfacial curvature are still unexplored steps to understand the route of phase transition from hexagonal to lamellar. Understanding such structural evolution in lyotropic liquid crystalline systems is important as it decides the complex rheological behavior of the system, which is one of the main interests of the soft matter industry. Sodium dodecyl sulfate (SDS) is an anionic surfactant and can be considered as a unique system to tune the electrostatics by cationic additives. In present study, imidazolium-based ionic liquids (ILs) with different number of carbon atoms in their single hydrocarbon chain were used as the additive in the aqueous solution of SDS. At a fixed concentration of total non-aqueous components (SDS and IL), the molar ratio of these components was changed, which effectively altered the electrostatic interactions between the SDS molecules. As a result, the local curvature is observed to modify, and correspondingly, the structure of the hexagonal liquid crystalline phases are transformed into other phases. Polarizing optical microscopy of SDS and imidazole-based-IL systems have exhibited different textures of the liquid crystalline phases as a function of increasing concentration of the ILs. 
The small-angle synchrotron x-ray diffraction (SAXD) study has indicated that the hexagonal phase of direct cylindrical micelles transforms to a rectangular phase in the presence of a short-chain (two-carbon) IL. In the presence of a long-chain (ten-carbon) IL, however, the hexagonal phase transforms to a lamellar phase. Interestingly, in the presence of a medium-chain (four-carbon) IL, the hexagonal phase transforms to another hexagonal phase of direct cylindrical micelles through an intermediate lamellar phase. To the best of our knowledge, such a phase sequence has not been reported earlier. Even though the small-angle x-ray diffraction study has revealed the lattice parameters of these phases to be similar to each other, their rheological behavior is distinctly different. These rheological studies have shed light on how the phases differ in their viscoelastic behavior. Finally, the packing parameters calculated for these phases from the geometry of the aggregates explain the formation of the self-assembled aggregates.
Keywords: lyotropic liquid crystals, polarizing optical microscopy, rheology, surfactants, small angle x-ray diffraction
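The packing-parameter reasoning invoked in the abstract can be sketched numerically. The following is a minimal illustration, not taken from the paper: it assumes Israelachvili's packing parameter p = v / (a0 · lc) together with Tanford's chain-geometry relations, and the headgroup area a0 = 60 Å² used for SDS is an assumed, illustrative value rather than a measured one.

```python
# Illustrative sketch (assumed values, not the paper's data): the
# surfactant packing parameter p = v / (a0 * lc) predicts the preferred
# aggregate curvature: p < 1/3 spheres, 1/3-1/2 cylinders (hexagonal
# phase), 1/2-1 bilayers (lamellar phase).

def tanford_volume(n_carbons: int) -> float:
    """Hydrocarbon-chain volume in cubic angstroms (Tanford relation)."""
    return 27.4 + 26.9 * n_carbons

def tanford_length(n_carbons: int) -> float:
    """Fully extended chain length in angstroms (Tanford relation)."""
    return 1.5 + 1.265 * n_carbons

def packing_parameter(n_carbons: int, a0: float) -> float:
    """p = v / (a0 * lc); a0 is the headgroup area in square angstroms."""
    return tanford_volume(n_carbons) / (a0 * tanford_length(n_carbons))

# SDS has a C12 tail; a0 ~ 60 A^2 is an assumed value for the charged
# sulfate headgroup in water.
p = packing_parameter(12, 60.0)
print(f"p = {p:.2f}")  # ~0.35: cylindrical micelles, i.e. a hexagonal phase
```

In this picture, binding a cationic IL to the anionic sulfate heads screens their repulsion and shrinks the effective a0, which raises p toward the bilayer range; this is one simple way to rationalize the hexagonal-to-lamellar transformations reported above.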
Procedia PDF Downloads 138
10242 Discourse Analysis and Semiotic Researches: Using Michael Halliday's Sociosemiotic Theory
Authors: Deyu Yuan
Abstract:
Discourse analysis as an interdisciplinary approach has a history of more than sixty years, since it was first named by Zellig Harris in his 1952 article 'Discourse Analysis' in the journal Language. Ferdinand de Saussure differentiated 'parole' from 'langue', which established the principle of focusing on language rather than speech; the rise of discourse analysis can therefore be seen as a discursive turn for language research as a whole, one closely related to speech act theory. Critical discourse analysis has become the mainstream of contemporary language research by drawing on M. A. K. Halliday's socio-semiotic theory and on Foucault's, Barthes's, and Bourdieu's views on the sign, discourse, and ideology. In contrast to general semiotics, social semiotics mainly focuses on parole and on the application of semiotic theories to practical fields. This article discusses this applicable sociosemiotics and shows how it differs from Saussurean and Peircean semiotics in four respects: 1) the sign system is a meaning-generating resource in the social context; 2) the sign system conforms to social and cultural changes in the form of metaphor and connotation; 3) sociosemiotics concerns five applicable principles, namely the personal authority principle, the non-personal authority principle, the consistency principle, the model demonstration principle, and the expertise principle, which deepen specific communication; 4) the study of symbolic functions targets the ideational, interpersonal, and interactional functions in the social communication process. The paper then describes six features that characterize this sociosemiotics as applicable semiotics: it is social, systematic, usable, interdisciplinary, dynamic, and multi-modal. Thirdly, the paper explores the multi-modal choices of sociosemiotics with respect to genre, discourse, and style.
Finally, the paper discusses the relationship between theory and practice in social semiotics and proposes a relatively comprehensive theoretical framework for social semiotics as applicable semiotics.
Keywords: discourse analysis, sociosemiotics, pragmatics, ideology
Procedia PDF Downloads 352
10241 Identification of Deposition Sequences of the Organic Content of Lower Albian-Cenomanian Age in Northern Tunisia: Correlation between Molecular and Stratigraphic Fossils
Authors: Tahani Hallek, Dhaou Akrout, Riadh Ahmadi, Mabrouk Montacer
Abstract:
The present work is an organic geochemical study of outcrops of the Fahdene Formation in the Mahjouba region, belonging to the eastern part of the Kalaat Senan structure in northwestern Tunisia (the Kef-Tedjerouine area). The analytical study of the organic content of the collected samples shows that the formation in question is characterized by a fair to good oil potential. This fossilized organic matter has a mixed origin (type II and III), as indicated by the relatively high values of the hydrogen index. This origin is confirmed by the abundance of C29 steranes as well as by the tricyclic terpane C19/(C19+C23) and tetracyclic terpane C24/(C24+C23) ratios, which suggest a marine depositional environment with a contribution from higher plants. We have demonstrated that the heterogeneity of the organic matter, between the marine character confirmed by the presence of foraminifera and the continental contribution, is the result of an episodic anomaly related to the sequence stratigraphy. Given that the study area is defined as an outer platform forming a transition zone between a stable continental domain to the south and a deep basin to the north, we explain the continental contribution by successive forced regressions that interrupted the Albian transgression and allowed lowstand systems tracts to develop. This aspect is represented by the filling of incised valleys, in direct contact with the pelagic and deep-sea facies. Consequently, in the Kef-Tedjerouine area the Fahdene Formation consists of transgressive systems tracts (TST) abruptly truncated by episodes of continental progradation, resulting in a mixed-influence deposit that retained heterogeneous organic material.
Keywords: molecular geochemistry, biomarkers, forced regression, depositional environment, mixed origin, Northern Tunisia
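The terpane ratios cited in the abstract are simple fractional ratios of integrated chromatogram peak areas. The following minimal sketch shows the arithmetic; the peak areas are invented for illustration and are not the paper's data, and the qualitative reading (higher values indicating a stronger terrigenous, higher-plant input) is the conventional interpretation, not a result of this study.

```python
# Illustrative sketch (invented peak areas): source-input biomarker
# ratios of the kind used in the abstract, computed from integrated
# GC-MS terpane peak areas.

def fractional_ratio(a: float, b: float) -> float:
    """Return a / (a + b) for two integrated peak areas."""
    return a / (a + b)

# Hypothetical integrated peak areas (arbitrary units)
c19, c23, c24 = 1800.0, 2400.0, 1500.0

tri = fractional_ratio(c19, c23)  # tricyclic terpane C19/(C19+C23)
tet = fractional_ratio(c24, c23)  # tetracyclic terpane C24/(C24+C23)
print(f"C19/(C19+C23) = {tri:.2f}, C24/(C24+C23) = {tet:.2f}")
```

Conventionally, higher values of both ratios point toward terrigenous organic-matter input, which is how such ratios support the mixed marine/continental origin argued for above.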
Procedia PDF Downloads 250