Search results for: parallel manipulator
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1204

64 Index of Suitability for Culex pipiens sl. Mosquitoes in Portugal Mainland

Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, REVIVE team

Abstract:

The environment of the mosquito complex Culex pipiens sl. in Portugal mainland is evaluated based on its abundance, using a georeferenced data set collected over seven years (2006-2012) from May to October. The suitability of the different regions can be delineated using the relative abundance areas; the suitability index is directly proportional to disease transmission risk and allows mitigation measures to be focused in order to avoid outbreaks of vector-borne diseases. The interest in the Culex pipiens complex is justified by its medical importance: the females bite all warm-blooded vertebrates and are involved in the circulation of several arboviruses of concern to human health, like West Nile virus, iridoviruses, reoviruses and parvoviruses. The abundance of Culex pipiens mosquitoes was documented systematically all over the territory by the local health services, in a long-duration program running since 2006. The environmental factors used to characterize the vector habitat are land use/land cover, distance to mapped water bodies, altitude and latitude. Focus will be on the mosquito females, whose gonotrophic cycle (mate-bloodmeal-oviposition) is responsible for the virus transmission; their abundance is the key for planning non-aggressive prophylactic countermeasures that may eradicate the transmission risk and simultaneously avoid chemical degradation of the environment. Meteorological parameters such as air relative humidity, air temperature (minimum, maximum and mean daily temperatures) and daily total rainfall were gathered from the weather station network for the same dates and crossed with the standardized females’ abundance in a geographic information system (GIS). Mean capture and the percentage of above-average captures related to each variable are used as criteria to compute a threshold for each meteorological parameter; the difference of the mean capture above/below the threshold was statistically assessed. The meteorological parameters measured at the network of weather stations all over the country are averaged by month and interpolated to produce raster maps that can be segmented according to the meaningful thresholds for each parameter. The intersection of the maps of all the parameters obtained for each month shows the evolution of the suitable meteorological conditions through the mosquito season, considered as May to October, although the first and last months are less relevant. In parallel, mean and above-average captures were related to the physiographic parameters: the land use/land cover classes most relevant in each month, the preferred altitudes and the most frequent distance to water bodies, a factor closely related to the mosquito biology. The maps produced with these results were crossed with the meteorological maps previously segmented, in order to obtain an index of suitability for the complex Culex pipiens evaluated all over the country, and its evolution from the beginning to the end of the mosquito season.
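The threshold segmentation and map-intersection step described above can be illustrated with a small sketch. The following Python code is a hypothetical, simplified illustration rather than the authors' actual GIS workflow: the raster arrays, threshold values and variable names are assumptions made only for demonstration.

import numpy as np

# Hypothetical monthly raster layers (2-D arrays on a common grid); in the real
# study these would be interpolated from weather-station data in a GIS.
temperature = np.random.uniform(10, 30, size=(100, 100))   # mean daily temperature, degrees C
humidity    = np.random.uniform(30, 90, size=(100, 100))   # relative humidity, %
rainfall    = np.random.uniform(0, 15, size=(100, 100))    # daily total rainfall, mm

# Assumed thresholds, standing in for the values derived from above-average captures.
thresholds = {"temperature": 20.0, "humidity": 55.0, "rainfall": 2.0}

# Segment each raster into favourable (True) / unfavourable (False) cells.
favourable = {
    "temperature": temperature >= thresholds["temperature"],
    "humidity":    humidity    >= thresholds["humidity"],
    "rainfall":    rainfall    >= thresholds["rainfall"],
}

# Intersect the segmented meteorological maps: a cell is meteorologically suitable
# only if every parameter is favourable there.
met_suitable = np.logical_and.reduce(list(favourable.values()))

# Cross with a (hypothetical) physiographic mask, e.g. suitable land cover within a
# preferred altitude band and distance to water, to obtain the final suitability index.
physio_suitable = np.random.rand(100, 100) > 0.3
suitability_index = met_suitable & physio_suitable
print("Fraction of suitable cells:", suitability_index.mean())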

Keywords: suitability index, Culex pipiens, habitat evolution, GIS model

Procedia PDF Downloads 548
63 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads

Authors: Gaurav Kumar Sinha

Abstract:

In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market.
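As a concrete illustration of the workload-placement and cost-modelling ideas discussed above, the short Python sketch below scores candidate cloud regions by a weighted combination of estimated egress cost and latency. It is a hypothetical example: the region names, prices and weights are invented for demonstration and do not come from the paper.

# Minimal sketch: pick a placement for a data-processing job across clouds by
# minimising a weighted cost of data egress and network latency (assumed figures).
candidate_regions = {
    # region: (egress $/GB to the consumer, round-trip latency in ms)
    "cloud_a_us_east": (0.09, 12.0),
    "cloud_b_eu_west": (0.08, 85.0),
    "cloud_c_ap_south": (0.11, 160.0),
}

def placement_score(egress_per_gb, latency_ms, data_gb, latency_weight=0.05):
    """Lower is better: dollar cost of moving the data plus a latency penalty."""
    return egress_per_gb * data_gb + latency_weight * latency_ms

data_gb = 500  # size of the working data set to be moved
best = min(candidate_regions,
           key=lambda r: placement_score(*candidate_regions[r], data_gb))
print("Chosen region:", best)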

Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies

Procedia PDF Downloads 37
62 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery

Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong

Abstract:

The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation and object recognition. The classical visual information processing that ranges from low-level tasks to high-level ones has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desired to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of the convolution kernels are applied at each layer based on the random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown for each filter. The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required in the computation of convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to a better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models that simply replace the convolution kernels with the random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has a high potential in the application of a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
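A minimal PyTorch sketch of the idea of fixed random kernels with learnable per-filter scalars is shown below. It is only an illustrative reading of the abstract, not the authors' implementation: the kernel sizes, channel counts and activation are assumptions, and the scalars are simply initialised to each filter's standard deviation as the text suggests.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelLayer(nn.Module):
    """One layer of frozen random convolution kernels of several sizes,
    each filter's response weighted by a learnable scalar (sketch only)."""
    def __init__(self, in_ch=3, out_ch_per_size=8, kernel_sizes=(3, 7, 11)):
        super().__init__()
        self.branches = nn.ModuleList()
        self.scales = nn.ParameterList()
        for k in kernel_sizes:
            conv = nn.Conv2d(in_ch, out_ch_per_size, k, padding=k // 2, bias=False)
            nn.init.normal_(conv.weight, std=1.0 / (k * k))
            conv.weight.requires_grad = False   # random filters stay fixed
            self.branches.append(conv)
            # one learnable scalar per filter, initialised to that filter's std
            std_per_filter = conv.weight.detach().view(out_ch_per_size, -1).std(dim=1)
            self.scales.append(nn.Parameter(std_per_filter.view(-1, 1, 1)))

    def forward(self, x):
        outs = [s * F.relu(b(x)) for b, s in zip(self.branches, self.scales)]
        return torch.cat(outs, dim=1)   # multi-scale responses stacked on channels

# toy usage
layer = RandomKernelLayer()
y = layer(torch.randn(2, 3, 32, 32))
print(y.shape)   # torch.Size([2, 24, 32, 32])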

Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition

Procedia PDF Downloads 257
61 Biochemical Effects of Low Dose Dimethyl Sulfoxide on HepG2 Liver Cancer Cell Line

Authors: Esra Sengul, R. G. Aktas, M. E. Sitar, H. Isan

Abstract:

Hepatocellular carcinoma (HCC) is a hepatocellular tumor commonly found on the surface of the chronically diseased liver. HepG2 is the most commonly used cell type in HCC studies. The main proteins remaining in the blood serum after removal of plasma fibrinogen are albumin and globulin. The fact that albumin reflects hepatocellular damage and the synthesis capacity of the liver was the main reason for using it. Alpha-fetoprotein (AFP) is an albumin-like structural embryonic globulin found in the embryonic cortex, cord blood, and fetal liver. It has been used as a marker in the follow-up of tumor growth in various malignant tumors and of the efficacy of surgical-medical treatments, so it is a good protein to examine alongside albumin. We had previously observed the morphological changes caused by dimethyl sulfoxide (DMSO) on HepG2 cells and decided to investigate its biochemical effects. We examined the effects of low doses of DMSO, which is used in cell cultures, on albumin, AFP and total protein. Materials and Methods: Cell culture: Medium was prepared using Dulbecco's Modified Eagle Medium (DMEM), fetal bovine serum (FBS), phosphate-buffered saline and trypsin maintained at -20 °C. Fixation of cells: HepG2 cells, which had developed appropriately by the end of the first week, were fixed with acetone. We stored our cells in PBS at +4 °C until the fixation was completed. Area calculation: The areas of the cells were calculated in ImageJ (IJ). Microscope examination: The examination was performed with a Zeiss inverted microscope. Photographs were taken at 40x, 100x, 200x and 400x magnification. Biochemical tests: Total protein: the serum sample was analyzed by a spectrophotometric method in an autoanalyzer. Albumin: the serum sample was analyzed by a spectrophotometric method in an autoanalyzer. Alpha-fetoprotein: the serum sample was analyzed by the ECLIA method. Results: When liver cancer cells were cultured in medium with 1% DMSO for 4 weeks, a significant difference was observed compared with the control group. As a result, we have seen that DMSO can be used as an important agent in the treatment of liver cancer. Cell areas were reduced in the DMSO group compared to the control group, and the confluency ratio increased. The ability to form spheroids was also significantly higher in the DMSO group. Alpha-fetoprotein was lower than the values of a typical liver cancer patient, and the total protein amount increased to the reference range of a normal individual. Because the albumin level was below the measurable value, numerical results could not be obtained in the biochemical examinations. We interpret all these results as indicating that DMSO may serve as a supportive, adjunct agent. Since no single parameter was sufficient on its own, we used three parameters, and the results were positive when compared in parallel with the values of a normal healthy individual. We hope to extend the study further by adding new parameters and genetic analyses, by increasing the number of samples, and by using DMSO as an adjunct agent in the treatment of liver cancer.

Keywords: hepatocellular carcinoma, HepG2, dimethyl sulfoxide, cell culture, ELISA

Procedia PDF Downloads 109
60 Model-Based Global Maximum Power Point Tracking at Photovoltaic String under Partial Shading Conditions Using Multi-Input Interleaved Boost DC-DC Converter

Authors: Seyed Hossein Hosseini, Seyed Majid Hashemzadeh

Abstract:

Solar energy is one of the remarkable renewable energy sources, with particular characteristics such as being unlimited, non-polluting and freely accessible. Generally, solar energy can be used in thermal and photovoltaic (PV) forms. The installation cost of the PV system is very high. Additionally, because it depends on environmental conditions such as solar radiation and ambient temperature, the electrical power generated by this system is unpredictable, and without power electronics devices there is no guarantee of maximum power delivery at the output of this system. Maximum power point tracking (MPPT) should be used to achieve the maximum power of a PV string. MPPT is one of the essential parts of the PV system; without it, it would be impossible to reach the maximum PV string power, and high losses would be caused in the PV system. One of the noticeable challenges in the MPPT problem is partial shading conditions (PSC). Under PSC, the output photocurrent of the PV module under the shadow is less than the PV string current. The difference between the mentioned currents passes through the module's internal parallel resistance and creates a large negative voltage across the shaded modules. This significant negative voltage damages the PV module under the shadow. This condition is called the hot-spot phenomenon. An anti-parallel diode is inserted across the PV module to prevent this phenomenon. This diode is known as the bypass diode. Due to the behavior of the bypass diodes under PSC, the P-V curve of the PV string has several peaks. The P-V curve peak that provides the maximum available power is the global peak. Model-based global MPPT (GMPPT) methods can estimate the optimal point with higher speed than other GMPPT approaches. Centralized, modular and interleaved DC-DC converter topologies are the main structures that can be used for GMPPT at a PV string. There are some problems in the centralized structure, such as current mismatch losses in the PV string, loss of power of the shaded modules because they are bypassed by the bypass diodes under PSC, and the need for a series connection of many PV modules to reach the desired voltage level. In the modular structure, each PV module is connected to a DC-DC converter. In this structure, as the amount of power demanded from the PV string increases, the number of DC-DC converters used in the PV system increases. As a result, the cost of the modular structure is very high. We can implement the model-based GMPPT through the multi-input interleaved boost DC-DC converter to increase the power extraction from the PV string and reduce hot-spot and current mismatch errors in a PV string under different environmental conditions and variable load circumstances. The interleaved boost DC-DC converter has many advantages over the other structures mentioned, such as high reliability and efficiency, better regulation of the DC voltage at the DC link, mitigation of notable problems such as module current mismatch and the hot-spot phenomenon, and reduced voltage stress on the power switches.
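To make the notion of a global peak among several local P-V peaks concrete, the Python sketch below uses a common simplified model of a partially shaded string with ideal bypass diodes: when k modules conduct, the string power near that peak is roughly k times the module MPP voltage times the photocurrent of the k-th most irradiated module. This is a generic textbook-style approximation for illustration, not the estimation method or converter proposed in the paper; the irradiance values and module parameters are assumptions.

# Sketch: estimate the global maximum power point of a partially shaded PV string
# using a simplified model (ideal bypass diodes, current limited by the k-th most
# irradiated module, each conducting module near its MPP voltage).
V_MPP_MODULE = 30.0    # assumed module MPP voltage [V]
I_SC_STC = 8.0         # assumed module short-circuit current at 1000 W/m^2 [A]

irradiances = [1000, 1000, 600, 300]          # W/m^2, one value per series module
photocurrents = sorted((I_SC_STC * g / 1000 for g in irradiances), reverse=True)

candidates = []
for k in range(1, len(photocurrents) + 1):
    current = photocurrents[k - 1]            # limited by the weakest conducting module
    voltage = k * V_MPP_MODULE                # k modules conducting, the rest bypassed
    candidates.append((voltage * current, voltage, current))

power, voltage, current = max(candidates)
print(f"Estimated global peak: {power:.0f} W at {voltage:.0f} V, {current:.1f} A")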

Keywords: solar energy, photovoltaic systems, interleaved boost converter, maximum power point tracking, model-based method, partial shading conditions

Procedia PDF Downloads 101
59 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction that penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive of the CV variants, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV and utilise the existing MCMC results, avoiding the expensive computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three that are two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among LOO-CV approximation methods and WAIC, with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
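The relationship between the pointwise predictive densities, WAIC and the importance-sampling LOO approximations described above can be sketched in a few lines. The Python example below assumes a matrix of pointwise log-likelihoods evaluated at posterior draws (here filled with toy values) and computes lppd, WAIC and a truncated-importance-sampling LOO estimate; full PSIS additionally fits a generalised Pareto distribution to the weight tails (for example via the ArviZ library), which is omitted here.

import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
S, n = 4000, 50                                 # posterior draws, observations
log_lik = rng.normal(-1.0, 0.3, size=(S, n))    # toy pointwise log-likelihoods

# lppd and WAIC (standard formulation)
lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(S))
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
waic = -2 * (lppd - p_waic)

# IS-LOO / TIS-LOO: raw weights are 1 / p(y_i | theta_s); truncate the largest weights
log_w = -log_lik                                  # log raw importance weights
log_w -= logsumexp(log_w, axis=0, keepdims=True)  # normalise per observation
w = np.exp(log_w)
w_tis = np.minimum(w, np.sqrt(S) * w.mean(axis=0))  # simple truncation rule
w_tis /= w_tis.sum(axis=0, keepdims=True)
elpd_loo = np.sum(np.log(np.sum(w_tis * np.exp(log_lik), axis=0)))

print(f"lppd={lppd:.1f}, p_waic={p_waic:.1f}, WAIC={waic:.1f}, elpd_loo={elpd_loo:.1f}")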

Keywords: cross-validation, importance sampling, information criteria, predictive accuracy

Procedia PDF Downloads 362
58 The Gender Criteria of Film Criticism: Creating the ‘Big’, Avoiding the Important

Authors: Eleni Karasavvidou

Abstract:

Social and anthropological research, parallel to Gender Studies, has highlighted the relationship between social structures and symbolic forms as an important field of interaction and a record of 'social trends', since the study of representations can contribute to the understanding of the social functions and power relations they encompass. This 'mirage', however, has not only to do with the representations themselves but also with the ways they are received and with the film or critical narratives that are established as dominant or alternative. Cinema and the criticism of its cultural products are no exception. Even in the rapidly changing media landscape of the 21st century, movies remain an integral and widespread part of popular culture, making films an extremely powerful means of 'legitimizing' or 'delegitimizing' visions of domination and commonsensical gender stereotypes throughout society. And yet it is film criticism, the 'language per se', that legitimizes, reinforces, rewards and reproduces (or at least ignores) the stereotypical depictions of female roles that remain common in the realm of film images. This creates the need for the issue to be raised in academic research as well, questioning gender criteria in film reviews as part of the effort for an inclusive art and society. Qualitative content analysis is used to examine female roles in selected Oscar-nominated films against their reviews from leading websites and newspapers. This method was chosen because of the complex nature of the depictions in the films and the narratives they evoke. The films were divided into basic scenes depicting social functions, such as love and work relationships and positions of power and their function, which were analyzed by content analysis, with borrowings from structuralism (Genette) and the local/universal images of intercultural philology (Wierlacher). In addition to the measurement of the general 'representation time' by gender, other qualitative characteristics were also analyzed, such as speaking time, sayings or key actions, and the overall quality of the character's action in relation to the development of the scenario and social representations in general, as well as quantitative ones (insufficient number of female lead roles, fewer key supporting roles, relatively few female directors and people in the production chain) and how they might affect screen representations. The quantitative analysis in this study was used to complement the qualitative content analysis. Then the focus shifted to the criteria of film criticism and to the rhetorical narratives that exclude or highlight in relation to gender identities and functions. In the criteria and language of film criticism, stereotypes are often reproduced or allegedly overturned within the framework of apolitical "identity politics," which mainly addresses the surface of a self-referential cultural-consumer product without connecting it more deeply with material and cultural life. One of the prime examples of this failure is the Bechdel Test, which tracks whether female characters speak in a film regardless of whether women's stories are represented or not in the films analyzed. If supposedly unbiased male filmmakers still fail to tell truly feminist stories, the same is the case with the criteria of criticism and the related interventions.

Keywords: representations, context analysis, reviews, sexist stereotypes

Procedia PDF Downloads 50
57 Non-Timber Forest Products and Livelihood Linkages: A Case of Lamabagar, Nepal

Authors: Sandhya Rijal, Saroj Adhikari, Ramesh R. Pant

Abstract:

Non-Timber Forest Products (NTFPs) have attracted substantial interest in recent years with the increasing recognition that they can provide essential community needs for improved and diversified rural livelihoods and support the objectives of biodiversity conservation. Nevertheless, various challenges are witnessed in their sustainable harvest and management. Assuming that sustainable management with community stewardship can offer one of the solutions to existing challenges, the study assesses the linkages between NTFPs and rural livelihood in Lamabagar village of Dolakha, Nepal. The major objective was to document the status of NTFPs and their contributions to households of Lamabagar. For status documentation, vegetation sampling was done using a systematic random sampling technique: 30 plots of 10 m × 10 m were laid down in six parallel transect lines at a horizontal distance of 160 m in two different community forests. A structured questionnaire survey was conducted in 76 households (excluding non-response) using a stratified random sampling technique for the contribution analysis. Likewise, key informant interviews and focus group discussions were also conducted for data triangulation. 36 different NTFPs were recorded from the vegetation sample in the two community forests, of which 50% were used for medicinal purposes. The other uses include fodder, religious value, and edible fruits and vegetables. Species like Juniperus indica, Daphne bholua, Aconitum spicatum, and Lyonia ovalifolia were frequently traded as a source of income and sold in the local market. The protected species Taxus wallichiana and Neopicrorhiza scrophulariiflora, for which trade is prohibited, were also recorded in the area. The protection of these species urgently needs community stewardship. More than half of the surveyed households (55%) depended on NTFPs for their daily use rather than for economic purposes, whereas 45% of them sold those products in the market directly or in the form of local handmade products as a source of livelihood. NTFPs were the major source of primary health remedies, especially for the poor and unemployed people in the study area. Hence, the NTFPs contributed to livelihood under three different categories: subsistence, supplementary income and emergency support, depending upon the economic status of the households. Although the status of the forest improved after handover to the user group, the availability of valuable medicinal herbs like Rhododendron anthopogon, Swertia nervosa, Neopicrorhiza scrophulariiflora, and Aconitum spicatum was declining. Inadequate technology, lack of easy transport access, and absence of good market facilities were the major limitations for external trade of NTFPs in the study site. It was observed that people were interested in conservation only if they could get some returns, economic ones in the context of rural settlements. Thus, the study concludes that NTFPs could contribute to rural livelihood and support conservation objectives only if local communities are provided with easy access to technology, markets and capital.

Keywords: contribution, medicinal, subsistence, sustainable harvest

Procedia PDF Downloads 98
56 Presence, Distribution and Form of Calcium Oxalate Crystals in Relation to Age of Actinidia Deliciosa Leaves and Petioles

Authors: Muccifora S., Rinallo C., Bellani L.

Abstract:

Calcium (Ca²⁺) is an element essential to the plant, being involved in plant growth and development. At high concentrations, it is toxic and can influence every stage, process and cellular activity of plant life. Given its toxicity, cells implement mechanisms to compartmentalize calcium in the vacuole, endoplasmic reticulum, mitochondria, plastids and cell wall. One of the most effective mechanisms to reduce the excess of calcium, thus avoiding cellular damage, is its complexation with oxalic acid to form calcium oxalate crystals that are no longer osmotically or physiologically active. However, the sequestered calcium can be mobilized when the plant needs it. Calcium crystals can be accumulated in the vacuole of specialized sink cells called idioblasts, with different crystalline forms (druse, raphide and styloid) of diverse physiological meanings. Actinidia deliciosa cv. Hayward presents raphides and styloids localized in idioblasts in cells of photosynthetic and non-photosynthetic tissues. The purpose of this work was to understand whether there is a relationship between the age of Actinidia leaves and the presence, distribution, dimension and shape of oxalate crystals by means of light, fluorescence, polarized-light and transmission electron microscopy. Three vines from female plants were chosen at the beginning of the season and used throughout the study. The leaves with petioles were collected at various stages of development from the bottom to the shoot of the plants, monthly from April to July. The samples were taken in corresponding areas of the central and lateral parts of the leaves and of the basal portion of the petiole. The results showed that in the leaves the number of raphide idioblasts decreased with the progress of the growing season, while the styloid idioblasts increased progressively, becoming very numerous in the upper nodes in July. In the June and July samples, in the vacuoles of the highest nodes, a regularly shaped portion strongly stained with rubeanic acid was present. Moreover, chlortetracycline (CTC) staining for the localization of free calcium marked the walls of the idioblasts and the walls of the cells near vascular bundles. In the April petiole samples, moving towards the youngest nodes, the raphide idioblasts decreased in number and in the length of the single raphides. Besides, crystals stained with rubeanic acid appeared in the vacuoles of some cells. In the June samples, numerous raphide idioblasts oriented parallel to the vascular bundles were evident. Under the electron microscope, numerous idioblasts presented non-homogeneous electron-dense aggregates of material, in which a few crystals (styloids) in the form of regular holes were scattered. In the July samples, an increase in the number of styloid idioblasts in the youngest nodes and small masses stained with CTC near the styloids were observed. Peculiar cells stained with rubeanic acid were detected and hypothesized to be involved in the formation of the idioblasts. In conclusion, in Actinidia leaves and petioles, the results seem to confirm the hypothesis that the formation of styloid idioblasts can be correlated with increasing calcium levels in growing tissues.

Keywords: calcium oxalate crystals, Actinidia deliciosa, light and electron microscopy, idioblasts

Procedia PDF Downloads 52
55 Pricing Techniques to Mitigate Recurring Congestion on Interstate Facilities Using Dynamic Feedback Assignment

Authors: Hatem Abou-Senna

Abstract:

Interstate 4 (I-4) is a primary east-west transportation corridor between the cities of Tampa and Daytona, serving commuter, commercial and recreational traffic. I-4 is known to have severe recurring congestion during peak hours. The congestion spans about 11 miles in the evening peak period in the central corridor area, as it is the only non-tolled limited-access facility connecting the Orlando Central Business District (CBD) and the tourist attractions area (Walt Disney World). Florida officials had been skeptical of tolling I-4 prior to the recent legislation, and the public, through the media, had been complaining about the excessive toll facilities in Central Florida. So, in search of plausible mitigation of the congestion on the I-4 corridor, this research is implemented to evaluate the effectiveness of different toll pricing alternatives that might divert traffic from I-4 to the toll facilities during the peak period. The network is composed of two main diverging limited-access highways, the freeway (I-4) and the toll road (SR 417), in addition to two east-west parallel toll roads, SR 408 and SR 528, intersecting the above-mentioned highways at both ends. I-4 and toll road SR 408 are the routes most frequently used by commuters. SR 417 is a relatively uncongested toll road that is 15 miles longer than I-4 and carries $5 in tolls, compared to no monetary cost on I-4 for the same trip. The results of the calibrated Orlando PARAMICS network showed that percentages of route diversion vary from one route to another and depend primarily on the travel cost between specific origin-destination (O-D) pairs. Most drivers going from Disney (O1) or Lake Buena Vista (O2) to Lake Mary (D1) were found to have a high propensity towards using I-4, even when eliminating tolls and/or providing real-time information. However, a diversion from I-4 to SR 417 for these O-D pairs occurred only in the cases of an incident and lane closure on I-4, due to the increase in delay and travel costs, and when information was provided to travelers. Furthermore, drivers that diverted from I-4 to SR 417 and SR 528 did not gain significant travel-time savings. This was attributed to the limited extra capacity of the alternative routes in the peak period and the longer traveling distance. When the remaining origin-destination pairs were analyzed, average travel time savings on I-4 ranged between 10 and 16%, amounting to at most 10 minutes, with a 10% increase in the network average speed. The propensity of diversion on the network increased significantly when eliminating tolls on SR 417 and SR 528 while doubling the tolls on SR 408, along with the incident and lane-closure scenarios on I-4 and with real-time information provided. The toll roads were found to be a viable alternative to I-4 for these specific O-D pairs, depending on the users' perception of the toll cost, which was reflected in their specific travel times. However, on the macroscopic level, it was concluded that route diversion through toll reduction or elimination on surrounding toll roads would have only a minimal impact on reducing I-4 congestion during the peak period.
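The notion of drivers trading off travel time against tolls can be illustrated with a generalized-cost route-choice sketch. The Python example below uses a simple logit split between two routes; the travel times, toll, value of time and scale parameter are invented illustrative numbers, not values from the calibrated PARAMICS model.

import math

def generalized_cost(time_min, toll_usd, value_of_time_usd_per_hr=20.0):
    """Cost in equivalent minutes: travel time plus the toll converted via value of time."""
    return time_min + toll_usd / value_of_time_usd_per_hr * 60.0

def logit_share(costs, theta=0.1):
    """Probability of choosing each route under a simple multinomial logit model."""
    utils = [math.exp(-theta * c) for c in costs]
    total = sum(utils)
    return [u / total for u in utils]

# Hypothetical peak-period O-D pair: congested free route vs longer tolled route.
cost_i4 = generalized_cost(time_min=45.0, toll_usd=0.0)      # congested I-4
cost_sr417 = generalized_cost(time_min=38.0, toll_usd=5.0)   # longer but faster SR 417
shares = logit_share([cost_i4, cost_sr417])
print(f"I-4 share: {shares[0]:.2f}, SR 417 share: {shares[1]:.2f}")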

Keywords: congestion pricing, dynamic feedback assignment, microsimulation, paramics, route diversion

Procedia PDF Downloads 142
54 Multi-Modality Brain Stimulation: A Treatment Protocol for Tinnitus

Authors: Prajakta Patil, Yash Huzurbazar, Abhijeet Shinde

Abstract:

Aim: To develop a treatment protocol for the management of tinnitus through multi-modality brain stimulation. Methodology: The present study included 33 adults with unilateral (31 subjects) or bilateral (2 subjects) chronic tinnitus, with or without hearing loss, independent of its etiology. The treatment protocol included 5 consecutive sessions with a follow-up of 6 months. Each session was divided into 3 parts: • Pre-treatment: a) Informed consent b) Pitch and loudness matching. • Treatment: Bimanual paper-pen task with tinnitus masking for 30 minutes. • Post-treatment: a) Pitch and loudness matching b) Directive counseling and obtaining feedback. The paper-pen task was performed bimanually and included carrying out two different writing activities in different contexts. The level of difficulty of the activities was increased in successive sessions. Narrowband noise of the same frequency as the tinnitus was presented at 10 dB SL for 30 minutes simultaneously in the ear with tinnitus. Result: The perception of tinnitus was no longer present in 4 subjects, while in the remaining subjects it was reduced to an intensity at which its perception no longer troubled them, without causing residual facilitation. Across subjects, the intensity of tinnitus decreased by 45 dB on average; in a few subjects, it decreased by more than 45 dB. The approach resulted in statistically significant reductions in Tinnitus Functional Index and Tinnitus Handicap Inventory scores. The results correlate with the pre- and post-treatment scores of the Tinnitus Handicap Inventory, which dropped from 90% to 0%. Discussion: Brain mapping (qEEG) studies report that there are multiple parallel overlapping neural subnetworks in the non-auditory areas of the brain which exhibit abnormal, constant and spontaneous neural activity involved in the perception of tinnitus, with each subnetwork and area reflecting a specific aspect of the tinnitus percept. The paper-pen task and directive counseling are designed and delivered, respectively, in a way that is assumed to induce normal, rhythmically constant and premeditated neural activity and to mask the abnormal, constant and spontaneous neural activity in the above-mentioned subnetworks and the specific non-auditory areas. Counseling was focused on breaking the vicious cycle causing and maintaining the presence of tinnitus. Diverting auditory attention alone is insufficient to reduce the perception of tinnitus. Conscious awareness of tinnitus can be suppressed when individuals engage in cognitively demanding tasks of a non-auditory nature, such as the paper-pen task used in the present study. To carry out this task, selective, divided, sustained, simultaneous and split attention act cumulatively. The bimanual paper-pen task represents a top-down activity which draws on the brain's ability to selectively attend to the bimanual written activity as the relevant stimulus and to ignore the tinnitus, the irrelevant stimulus in the present study. Conclusion: The study suggests that this novel treatment approach is cost-effective, time-saving and efficient in abolishing tinnitus or reducing its intensity to a negligible level, thereby eliminating the negative reactions towards tinnitus.

Keywords: multi-modality brain stimulation, neural subnetworks, non-auditory areas, paper-pen task, top-down activity

Procedia PDF Downloads 125
53 A Nutrient Formulation Affects Brain Myelination in Infants: An Investigative Randomized Controlled Trial

Authors: N. Schneider, M. Bruchhage, M. Hartweg, G. Mutungi, J. O Regan, S. Deoni

Abstract:

Observational neuroimaging studies suggest differences between breast-fed and formula-fed infants in developmental myelination, a key brain process for learning and cognitive development. However, the possible effects of a nutrient formulation on myelin development in healthy term infants had not been investigated in an intervention study. The objective was, therefore, to investigate the efficacy of a nutrient formulation with higher levels of myelin-relevant nutrients, as compared to a control formulation with lower levels of the same nutrients, on brain myelination and cognitive development in the first 6 months of life. The study is an ongoing randomized, controlled, double-blind, two-center, parallel-group clinical trial with a nonrandomized, non-blinded arm of exclusively breastfed infants. The current findings result from a staged statistical analysis at 6 months; the recruitment and intervention period has been completed for all participants. Follow-up visits at 12, 18 and 24 months are still ongoing. N = 81 enrolled full-term, neurotypical infants of both sexes were randomized into either the investigational (N = 42) or the control group (N = 39), and N = 108 children in the breast-fed arm served as a natural reference group. The effect of a blend of docosahexaenoic acid, arachidonic acid, iron, vitamin B12, folic acid, as well as sphingomyelin from a uniquely processed whey protein concentrate enriched in alpha-lactalbumin and phospholipids, in an infant nutrition product matrix was investigated. The main outcomes for the staged statistical analyses at 6 months included brain myelination measures derived from MRI. Additional outcomes were brain volume, cognitive development and safety. The full analysis set at 6 months comprised N = 66 infants. Higher levels of myelin-relevant nutrients compared to lower levels resulted in significant differences in myelin structure, volume, and rate of myelination as early as 3 and 6 months of life. The cross-sectional change of means between groups for whole-brain myelin volume was 8.4% for the investigational versus the control formulation group (3.5% versus the breastfeeding reference) at 3 months and increased to 36.4% for the investigational versus the control formulation (14.1% versus the breastfeeding reference) at 6 months. No statistically significant differences were detected for early cognition scores. Safety findings were largely similar across groups. This is the first pediatric nutritional neuroimaging study demonstrating the efficacy of a myelin nutrient blend on developmental myelination in well-nourished term infants. Myelination is a critical process in learning and development. The effects were demonstrated across the brain, particularly in temporal and parietal regions, known to be functionally involved in sensory, motor and language skills. These first results add to the field of nutritional neuroscience by demonstrating early-life nutrition benefits for brain architecture, which may be foundational for later cognitive and behavioral outcomes. ClinicalTrials.gov Identifier: NCT03111927 (Infant Nutrition and Brain Development - Full-Text View - ClinicalTrials.gov).

Keywords: brain development, infant nutrition, MRI, myelination

Procedia PDF Downloads 158
52 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper examines different case studies where Form-Based Codes have been adopted in general, and discusses the different implementation methods in particular, in order to develop a method for formulating a new planning model. The organizing principle of the Form-Based Codes, the transect, is used to zone the city into various context-specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case-study comparison in terms of the planning tools used, the code process adopted and the various control regulations implemented in thirty-two different cities is carried out. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, required form-based standards, or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale at which the Form-Based Code is formulated varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans and street-based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, and formulating street-based Form-Based Codes for the selected special districts in the study area in particular. Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized and stakeholder-owned, while systematic can be thought of more as linear, generalisable, and typically top-down or expert-driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code, which is to be integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in "sensitive" areas of the community. With this approach and method, the new Context Specific Planning Model for achieving sustainability is developed and explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 311
51 The Evolving Changes of Religious Behavior: an Exploratory Study on Guanyin Worship of Contemporary Chinese Societies

Authors: Judith Sue Hwa Joo

Abstract:

Guanyin (Avalokiteśvara in Sanskrit), the Bodhisattva of Mercy and Compassion, is the most widely worshipped Buddhist divinity in Chinese societies and is also venerated by more than half of the Asian populations across various countries. The most overwhelming reason for the popularity of Guanyin in Chinese societies is, according to the Lotus Sutra, that Guanyin would perceive the voices of those suffering from immense afflictions and troubles, and liberate them upon their crying out for his/her holy name wholeheartedly. Its pervasive social influence has spanned more than two thousand years and still deeply affects the lives of most Chinese people. This study aimed to investigate whether Guanyin Worship has evolved and changed in modern Chinese societies across the Taiwan Strait. Taiwan and China, albeit having the same language and culture, have been territorially divided and governed by two different political regimes for over 70 years. It would be scientifically intriguing to unveil any substantial changes in religious behaviors in the context of Guanyin Worship. A comprehensive anonymous questionnaire survey of Chinese communities was conducted from October 2017 to May 2019 across various countries, mostly in China, Taiwan, and Hong Kong. Since religious surveys are officially prohibited in China, the study was difficult and could only be carried out by means of snowball sampling. Demographic data (age, sex, education, religious belief) were registered, and Guanyin's salvation functions under various confronting situations were investigated. Psychological dimensions of religious belief in Guanyin were probed in terms of the worship experience, the willingness of veneration, and egoistic or altruistic ideations. A literature review on documented functional attributes was carried out in parallel for comparison with the traditional roles. 1123 valid samples were obtained out of 1139. Statistical analysis revealed that Guanyin Worship is still commonly practiced and deeply rooted in the hearts of Chinese people regardless of gender, age, education, and residential area, even though they may not enshrine Guanyin at home nowadays. The conventional roles of Guanyin Bodhisattva are still valid and best satisfy the real interests of lifestyles in modern times. When comparing the traditional Buddhist sutras and the documented literature, the divine power of modern Guanyin has notably expanded to recovering, protecting and transforming fetal and infant spirits, due to sexual liberation, the increased abortion rate, gender awakening and enhanced female autonomy in reproductive decisions. However, the One-Child policy may have critically impacted the trajectory of Guanyin Worship, so that people in China, more than those in Taiwan, pray for aborted lives or premature deaths. Furthermore, particularly in Hong Kong and Macao, Guanyin not only serves as the sea guardian for the fishermen but also takes on a new function as the God of Wealth. The divine powers and salvation functions of Guanyin are indeed evolving and expanding to comply with modern psychosocial, cultural and societal needs. This study sheds light on the modernization process of the two-thousand-year-old Guanyin Worship in contemporary Chinese societies.

Keywords: Buddhism, Guanyin, religious behavior, salvation function

Procedia PDF Downloads 88
50 Cultural Intelligence for the Managers of Tomorrow: A Data-Based Analysis of the Antecedents and Training Needs of Today’s Business School Students

Authors: Justin Byrne, Jose Ramon Cobo

Abstract:

The growing importance of cross- or intercultural competencies (used here interchangeably) for business and management professionals is now commonplace in both the academic and professional literature. This reflects two parallel developments. On the one hand, it is a consequence of the increased attention paid to a whole range of 'soft skills', now seen as fundamental to both individual and corporate success. On the other hand, and more specifically, the increasing demand for interculturally competent professionals is a corollary of ongoing processes of globalization, which multiply and intensify encounters between individuals and companies from different cultural backgrounds. Business schools have, for some decades, responded to the needs of the job market and their own students by providing students with training in intercultural skills, as they are encouraged to do by the major accreditation agencies on both sides of the Atlantic. Adapting Earley and Ang's (2003) formulation of Cultural Intelligence (CQ), this paper aims to help fill the lacunae in the current literature on intercultural training in three main ways. First, it offers an in-depth analysis of the CQ of a little-studied group: contemporary Millennial and 'Generation Z' business school students. The level of analysis distinguishes between the four different dimensions of CQ (cognition, metacognition, motivation and behaviour) and thereby provides a detailed picture of the strengths and weaknesses in CQ of the group as a whole, as well as of different sub-groups and profiles of students. Secondly, by crossing these individual-level findings with respondents' socio-cultural and educational data, this paper also proposes and tests hypotheses regarding the relative impact and importance of four possible antecedents of intercultural skills identified in the literature: prior international experience, intercultural training, foreign language proficiency, and experience of cultural diversity in the habitual country of residence. Third, we use this analysis to suggest data-based intercultural training priorities for today's management students. These conclusions are based on the statistical analysis of the individual responses of some 300 Bachelor or Masters students in a major European business school to two on-line surveys: Ang, Van Dyne, et al.'s (2007) standard 20-question self-reporting CQ Scale, and an original questionnaire designed by the authors to collate information on respondents' socio-demographic and educational profiles relevant to our four hypotheses and explanatory variables. The data from both instruments were crossed in both descriptive statistical analysis and regression analysis. This research shows that there is no statistically significant positive relationship between the four antecedents analyzed and overall CQ level. The exception in this respect is the statistically significant correlation between international experience and the cognitive dimension of CQ. In contrast, the results show that the combination of international experience and foreign language skills, acting together, does have a strong overall impact on CQ levels. These results suggest that selecting and/or training students with strong foreign language skills and providing them with international experience (through multinational programmes, academic exchanges or international internships) constitutes one effective way of training the culturally intelligent managers of tomorrow.
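The kind of regression analysis described above, relating overall CQ scores to the four antecedents and their interaction, can be sketched in a few lines of Python. This is purely illustrative: the variable names and synthetic data are assumptions, not the study's dataset or exact model specification.

import numpy as np

rng = np.random.default_rng(1)
n = 300  # roughly the sample size mentioned in the abstract

# Hypothetical antecedent variables for each respondent.
intl_experience = rng.integers(0, 2, n)      # prior international experience (0/1)
ic_training     = rng.integers(0, 2, n)      # prior intercultural training (0/1)
languages       = rng.integers(1, 4, n)      # foreign language proficiency (count)
diversity_home  = rng.integers(0, 2, n)      # cultural diversity at home country (0/1)

# Synthetic overall CQ score (1-7 scale) with an experience x languages interaction.
cq = (4.0 + 0.05 * intl_experience + 0.02 * ic_training
      + 0.05 * languages + 0.3 * intl_experience * languages
      + rng.normal(0, 0.5, n)).clip(1, 7)

# Ordinary least squares with an intercept and the interaction term.
X = np.column_stack([np.ones(n), intl_experience, ic_training,
                     languages, diversity_home, intl_experience * languages])
beta, *_ = np.linalg.lstsq(X, cq, rcond=None)
names = ["intercept", "intl_experience", "ic_training",
         "languages", "diversity_home", "experience x languages"]
for name, b in zip(names, beta):
    print(f"{name:>22s}: {b:+.3f}")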

Keywords: business school, cultural intelligence, millennial, training

Procedia PDF Downloads 128
49 Hydrodynamic Characterisation of a Hydraulic Flume with Sheared Flow

Authors: Daniel Rowe, Christopher R. Vogel, Richard H. J. Willden

Abstract:

The University of Oxford’s recirculating water flume is a combined wave and current test tank with a 1 m depth, 1.1 m width, and 10 m long working section, and is capable of flow speeds up to 1 m/s. This study documents the hydrodynamic characteristics of the facility in preparation for experimental testing of horizontal axis tidal stream turbine models. The turbine to be tested has a rotor diameter of 0.6 m and is a modified version of one of two model-scale turbines tested in previous experimental campaigns. An Acoustic Doppler Velocimeter (ADV) was used to measure the flow at high temporal resolution at various locations throughout the flume, enabling the spatial uniformity and turbulence flow parameters to be investigated. The mean velocity profiles exhibited high levels of spatial uniformity at the design speed of the flume, 0.6 m/s, with variations in the three-dimensional velocity components on the order of ±1% at the 95% confidence level, along with a modest streamwise acceleration through the measurement domain, a target 5 m working section of the flume. A high degree of uniformity was also apparent for the turbulence intensity, with values ranging between 1-2% across the intended swept area of the turbine rotor. The integral scales of turbulence exhibited a far higher degree of variation throughout the water column, particularly in the streamwise and vertical scales. This behaviour is believed to be due to the high noise content of the signal leading to decorrelation in the sampling records. To achieve more realistic levels of vertical velocity shear in the flume, a simple procedure to practically generate target vertical shear profiles in open-channel flows is described. Here, the authors arranged a series of non-uniformly spaced parallel bars placed across the width of the flume and normal to the onset flow. By adjusting the resistance grading across the height of the working section, the downstream profiles could be modified accordingly, characterised by changes in the velocity profile power law exponent, 1/n. Considering the significant temporal variation in a tidal channel, the choice of the exponent denominator, n = 6 and n = 9, effectively provides an achievable range around the much-cited value of n = 7 observed at many tidal sites. The resulting flow profiles, which we intend to use in future turbine tests, have been characterised in detail. The results indicate non-uniform vertical shear across the survey area and reveal substantial corner flows, arising from the differential shear between the target vertical and cross-stream shear profiles throughout the measurement domain. In vertically sheared flow, the rotor-equivalent turbulence intensity ranges between 3.0-3.8% throughout the measurement domain for both bar arrangements, while the streamwise integral length scale grows from a characteristic dimension on the order of the bar width, similar to the flow downstream of a turbulence-generating grid. The experimental tests are well-defined and repeatable and serve as a reference for other researchers who wish to undertake similar investigations.
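The target power-law shear profiles mentioned above can be written as u(z) = U_ref (z/z_ref)^(1/n). The short Python sketch below evaluates such profiles for n = 6, 7 and 9 and computes a rotor-averaged speed over the swept area of a 0.6 m diameter rotor; the reference speed and hub height are assumed values for illustration, not the measured flume settings.

import numpy as np

def power_law_profile(z, u_ref, z_ref, n):
    """Streamwise velocity at height z above the bed for a 1/n power-law profile."""
    return u_ref * (np.asarray(z) / z_ref) ** (1.0 / n)

u_ref = 0.6          # assumed reference speed at z_ref [m/s]
z_ref = 0.5          # assumed reference height (mid-depth of the 1 m flume) [m]
hub_z, radius = 0.5, 0.3   # rotor hub height and radius [m] (0.6 m diameter rotor)

z = np.linspace(hub_z - radius, hub_z + radius, 200)
for n in (6, 7, 9):
    u = power_law_profile(z, u_ref, z_ref, n)
    # area-weighted (chord-length) average over the circular swept area
    chord = 2.0 * np.sqrt(radius**2 - (z - hub_z) ** 2)
    u_rotor = np.trapz(u * chord, z) / np.trapz(chord, z)
    print(f"n = {n}: rotor-averaged speed = {u_rotor:.3f} m/s")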

Keywords: acoustic doppler Velocimeter, experimental hydrodynamics, open-channel flow, shear profiles, tidal stream turbines

Procedia PDF Downloads 45
48 Electrical Transport through a Large-Area Self-Assembled Monolayer of Molecules Coupled with Graphene for Scalable Electronic Applications

Authors: Chunyang Miao, Bingxin Li, Shanglong Ning, Christopher J. B. Ford

Abstract:

While it is challenging to fabricate electronic devices close to atomic dimensions in conventional top-down lithography, molecular electronics is promising to help maintain the exponential increase in component densities via using molecular building blocks to fabricate electronic components from the bottom up. It offers smaller, faster, and more energy-efficient electronic and photonic systems. A self-assembled monolayer (SAM) of molecules is a layer of molecules that self-assembles on a substrate. They are mechanically flexible, optically transparent, low-cost, and easy to fabricate. A large-area multi-layer structure has been designed and investigated by the team, where a SAM of designed molecules is sandwiched between graphene and gold electrodes. Each molecule can act as a quantum dot, with all molecules conducting in parallel. When a source-drain bias is applied, significant current flows only if a molecular orbital (HOMO or LUMO) lies within the source-drain energy window. If electrons tunnel sequentially on and off the molecule, the charge on the molecule is well-defined and the finite charging energy causes Coulomb blockade of transport until the molecular orbital comes within the energy window. This produces ‘Coulomb diamonds’ in the conductance vs source-drain and gate voltages. For different tunnel barriers at either end of the molecule, it is harder for electrons to tunnel out of the dot than in (or vice versa), resulting in the accumulation of two or more charges and a ‘Coulomb staircase’ in the current vs voltage. This nanostructure exhibits highly reproducible Coulomb-staircase patterns, together with additional oscillations, which are believed to be attributed to molecular vibrations. Molecules are more isolated than semiconductor dots, and so have a discrete phonon spectrum. When tunnelling into or out of a molecule, one or more vibronic states can be excited in the molecule, providing additional transport channels and resulting in additional peaks in the conductance. For useful molecular electronic devices, achieving the optimum orbital alignment of molecules to the Fermi energy in the leads is essential. To explore it, a drop of ionic liquid is employed on top of the graphene to establish an electric field at the graphene, which screens poorly, gating the molecules underneath. Results for various molecules with different alignments of Fermi energy to HOMO have shown highly reproducible Coulomb-diamond patterns, which agree reasonably with DFT calculations. In summary, this large-area SAM molecular junction is a promising candidate for future electronic circuits. (1) The small size (1-10nm) of the molecules and good flexibility of the SAM lead to the scalable assembly of ultra-high densities of functional molecules, with advantages in cost, efficiency, and power dissipation. (2) The contacting technique using graphene enables mass fabrication. (3) Its well-observed Coulomb blockade behaviour, narrow molecular resonances, and well-resolved vibronic states offer good tuneability for various functionalities, such as switches, thermoelectric generators, and memristors, etc.
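The Coulomb-diamond picture described above can be reproduced qualitatively with the standard constant-interaction model, in which sequential tunnelling is allowed only when an electrochemical potential of the molecule lies inside the source-drain bias window. The Python sketch below maps blockaded and conducting regions over gate and bias voltage; the charging energy, gate lever arm and number of charge states are arbitrary illustrative values, and this generic textbook model is not the authors' analysis of their devices.

import numpy as np

E_C = 0.10      # assumed charging energy [eV]
ALPHA = 0.05    # assumed gate lever arm (dimensionless)
N_LEVELS = 5    # number of charge states considered

def conducting(v_g, v_sd):
    """True if any addition energy mu(N) lies within the bias window (constant-interaction model, e = 1)."""
    mu_hi, mu_lo = max(0.0, -v_sd), min(0.0, -v_sd)   # lead potentials [eV]
    for N in range(1, N_LEVELS + 1):
        mu_N = (N - 0.5) * E_C - ALPHA * v_g          # potential for adding electron N
        if mu_lo <= mu_N <= mu_hi:
            return True
    return False

v_gate = np.linspace(0, 10, 200)       # gate voltage [V]
v_bias = np.linspace(-0.2, 0.2, 100)   # source-drain voltage [V]
diamond_map = np.array([[conducting(vg, vsd) for vg in v_gate] for vsd in v_bias])
print("Fraction of (Vg, Vsd) space outside Coulomb blockade:", diamond_map.mean())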

Keywords: molecular electronics, Coulomb blockade, electron-phonon coupling, self-assembled monolayer

Procedia PDF Downloads 29
47 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform for detecting facial expressions and emotions by extracting features automatically. However, deep networks require large training datasets to extract such features effectively. In this work, we propose an efficient emotion detection algorithm for face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by several parallel modules between the input and output of the network, each focusing on extracting a different type of coarse feature with fine-grained detail, so as to break the symmetry of the produced information. In this way we also leverage long-range dependencies, the lack of which is one of the main drawbacks of CNNs. We further introduce a Dynamic Soft-Margin SoftMax. The conventional SoftMax reaches the gold labels too quickly, which drives the model towards over-fitting, because it cannot produce adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than static input tensor shape in the SoftMax layer and by specifying a desired soft margin, which acts as a controller for how hard the model must work to push dissimilar embedding vectors apart. The proposed categorical loss aims to compact same-class labels and separate different-class labels in the normalized log domain: we penalize predictions that diverge strongly from the ground-truth labels, shortening correct feature vectors and enlarging false prediction tensors, i.e., assigning larger weights to classes that lie close to one another (the “hard labels to learn”). In doing so, we constrain the model to generate more discriminative feature vectors for variant class labels. Finally, the proposed optimizer addresses the weak convergence of the Adam optimizer on non-convex problems: it updates gradients by an alternative procedure with an exponentially weighted moving average for faster convergence, and it exploits weight decay to sharply reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of the proposed work on three widely used facial expression recognition datasets, achieving 93.30% on FER-2013, 90.73% on RAF-DB (a 16% improvement over the long-standing first rank), and 100% k-fold average accuracy on CK+, a top performance compared with that of other networks, which require much larger training datasets.
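The abstract does not give the exact form of the Dynamic Soft-Margin SoftMax, so the sketch below only illustrates the general idea with an additive-margin softmax cross-entropy and a simple margin schedule: the target-class logit is penalised by a margin, forcing the network to separate classes by more than that margin. The margin values and schedule are assumptions, not the authors' formulation.

```python
import numpy as np

def soft_margin_softmax_loss(logits, labels, margin=0.35):
    """Illustrative additive-margin softmax cross-entropy (not the paper's exact loss):
    the correct-class logit is reduced by `margin` before the softmax, so embeddings
    must be pushed further apart before the loss saturates."""
    z = logits.astype(float).copy()
    z[np.arange(len(labels)), labels] -= margin          # penalise the correct class
    z -= z.max(axis=1, keepdims=True)                    # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def dynamic_margin(epoch, m_min=0.1, m_max=0.5, total_epochs=100):
    """One plausible 'dynamic' schedule: grow the margin as training progresses."""
    return m_min + (m_max - m_min) * min(epoch / total_epochs, 1.0)

logits = np.array([[2.0, 0.5, -1.0], [0.2, 1.5, 0.3]])
labels = np.array([0, 1])
print(soft_margin_softmax_loss(logits, labels, margin=dynamic_margin(epoch=50)))
```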

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 49
46 Perspective Shifting in Elicited Language Production Can Defy Aging

Authors: Tuyuan Cheng

Abstract:

As we age, many things become more difficult, among them our linguistic and cognitive abilities. Competing theories have proposed that these two functions decline together or that one is selectively affected by the other. In other words, some propose that aging affects sentence production in the same way it affects sentence comprehension and other cognitive functions, while others argue that it does not. To address this question, the current investigation examines a critical aspect of sentences as well as of cognitive abilities: the syntactic complexity and the number of perspective shifts contained in elicited production. Healthy, non-pathological aging is often characterized by cognitive and neural decline across a number of cognitive abilities. Although language is assumed to be a more stable domain, a variety of findings in the cognitive aging literature suggest otherwise. Older adults often show deficits in language production and in multiple aspects of comprehension. Nevertheless, while some age differences likely reflect cognitive decline, others might reflect changes in communicative goals, and some even display cognitive advantages. In the domain of language processing, research efforts have been made with tests that probe a variety of communicative abilities. In general, a distinction emerges: comprehension seems to be selectively spared, while production does not. The current study raises a novel question and investigates whether aging affects the production of relative clauses (RCs) under the cognitive factor of perspective shifts. According to the Perspective Hypothesis (MacWhinney, 2000, 2005), our cognitive processes build upon a fundamental system of perspective-taking, and language provides a series of cues to facilitate the construction and shifting of perspectives. These cues include a wide variety of constructions, including RC structures. In this regard, linguistic complexity can be determined by the number of perspective shifts, and the processing difficulties of RCs can be interpreted within the theory of perspective shifting. Two experiments were conducted to study language production under controlled conditions. In Experiment 1, older healthy participants were tested on standard measures of cognitive aging, including the MMSE (Mini-Mental State Examination), ToMI-2 (a simplified Theory of Mind Inventory-2), and a perspective-shifting comprehension task programmed in E-Prime. The results were analyzed to examine if and how they correlate with the older participants' subsequent production data. In Experiment 2, production profiles for differing RCs, SRC vs. ORC, were collected from healthy aging participants who performed a picture elicitation task. Items containing 0, 1, or 2 perspective shifts were paired with the pictures and presented in counterbalanced order for elicitation. In parallel, a control group of young adults was recruited to examine the linguistic and cognitive abilities in question. The results lead to a discussion of whether aging affects RC production in a manner determined by the semantic structure of the RC, the number of perspective shifts it contains, or the status of participants' mental understanding. The major findings are: (1) Elders' production of Chinese RC types did not display the intrinsic difficulty asymmetry. (2) RC types (the linguistic structural features) and cognitive perspective shifts jointly play important roles in the elders' RC production. (3) The production of RCs may defy aging when cognitive ability is flexibly preserved.

Keywords: cognitive aging, perspective hypothesis, perspective shift, relative clauses, sentence complexity

Procedia PDF Downloads 88
45 Autophagy Promotes Vascular Smooth Muscle Cell Migration in vitro and in vivo

Authors: Changhan Ouyang, Zhonglin Xie

Abstract:

In response to proatherosclerotic factors such as oxidized lipids, or to therapeutic interventions such as angioplasty, stents, or bypass surgery, vascular smooth muscle cells (VSMCs) migrate from the media to the intima, resulting in intimal hyperplasia, restenosis, graft failure, or atherosclerosis. These proatherosclerotic factors also activate autophagy in VSMCs. However, the functional role of autophagy in vascular health and disease remains poorly understood. In the present study, we determined the role of autophagy in the regulation of VSMC migration. Autophagy activity in cultured human aortic smooth muscle cells (HASMCs) and mouse carotid arteries was measured by Western blot analysis of microtubule-associated protein 1 light chain 3 B (LC3B) and P62. VSMC migration was determined by scratch wound assay and transwell migration assay. Ex vivo smooth muscle cell migration was determined using an aortic ring assay. In vivo SMC migration was examined by staining carotid artery sections for smooth muscle alpha actin (alpha SMA) after carotid artery ligation. To examine the relationship between autophagy and neointimal hyperplasia, C57BL/6J mice were subjected to carotid artery ligation. Seven days after injury, protein levels of Atg5, Atg7, Beclin1, and LC3B drastically increased and remained higher in the injured arteries three weeks after the injury. In parallel with the activation of autophagy, vascular injury induced neointimal hyperplasia, as estimated by an increased intima/media ratio. En face staining of the carotid artery showed that vascular injury enhanced alpha SMA staining in the intimal cells as compared with the sham operation. Treatment of HASMCs with platelet-derived growth factor (PDGF), one of the major factors for vascular remodeling in response to vascular injury, increased Atg7 and LC3-II protein levels and enhanced autophagosome formation. In addition, the aortic ring assay demonstrated that PDGF-treated aortic rings displayed an increase in neovessel formation compared with control rings. Whole mount staining for CD31 and alpha SMA in PDGF-treated neovessels revealed that the neovessel structures were stained by alpha SMA but not CD31. In contrast, pharmacological and genetic suppression of autophagy inhibited VSMC migration. In particular, gene silencing of Atg7 inhibited VSMC migration induced by PDGF. Furthermore, three weeks after ligation, markedly decreased neointimal formation was found in mice treated with chloroquine, an inhibitor of autophagy. Quantitative morphometric analysis of the injured vessels revealed a marked reduction in the intima/media ratio in the mice treated with chloroquine. Conclusion: Autophagy activation increases VSMC migration, while autophagy suppression inhibits it. These findings suggest that autophagy suppression may be an important therapeutic strategy for atherosclerosis and intimal hyperplasia.

Keywords: autophagy, vascular smooth muscle cell, migration, neointimal formation

Procedia PDF Downloads 282
44 An Evaluation of a Prototype System for Harvesting Energy from Pressurized Pipeline Networks

Authors: Nicholas Aerne, John P. Parmigiani

Abstract:

There is an increasing desire for renewable and sustainable energy sources to replace fossil fuels. This desire is the result of several factors. First is the role of fossil fuels in climate change. Scientific data clearly show that global warming is occurring. It has also been concluded that human activity, specifically the combustion of fossil fuels, is highly likely to be a major cause of this warming. Second, despite the current surplus of petroleum, fossil fuels are a finite resource that will eventually become scarce, and alternatives such as clean or renewable energy will be needed. Third, operations to obtain fossil fuels such as fracking, off-shore oil drilling, and strip mining are expensive and harmful to the environment. Given these environmental impacts, there is a need to replace fossil fuels with renewable energy sources as a primary energy source. Various sources of renewable energy exist. Many familiar sources obtain renewable energy from the sun and natural environments of the earth. Common examples include solar, hydropower, geothermal heat, ocean waves and tides, and wind energy. Obtaining significant energy from these sources often requires physically large, sophisticated, and expensive equipment (e.g., wind turbines, dams, solar panels, etc.). Other sources of renewable energy are found in the man-made environment. An example is municipal water distribution systems. The movement of water through the pipelines of these systems typically requires the reduction of hydraulic pressure through the use of pressure reducing valves. These valves are needed to reduce upstream supply-line pressures to levels suitable for downstream users. The energy associated with this reduction of pressure is significant but is currently not harvested and is simply lost. While the integrity of municipal water supplies is of paramount importance, one can certainly envision means by which this lost energy source could be safely accessed. This paper provides a technical description and analysis of one such means, proposed by the technology company InPipe Energy, for generating hydroelectricity by harvesting energy from municipal water distribution pressure reducing valve stations. Specifically, InPipe Energy proposes to install hydropower turbines in parallel with existing pressure reducing valves in municipal water distribution systems. InPipe Energy, in partnership with Oregon State University, has evaluated this approach and built a prototype system at the O. H. Hinsdale Wave Research Lab. The Oregon State University evaluation showed that the prototype system rapidly and safely initiates, maintains, and ceases power production as directed. The outgoing water pressure remained constant at the specified set point throughout all testing. The system replicates the functionality of the pressure reducing valve and ensures accurate control of downstream pressure. At a typical water-distribution-system pressure drop of 60 psi, the prototype, operating at an efficiency of 64%, produced approximately 5 kW of electricity. Based on the results of this study, this proposed method appears to offer a viable means of producing significant amounts of clean renewable energy from existing pressure reducing valves.
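As a quick consistency check on the figures quoted above (a 60 psi drop, 64% efficiency, roughly 5 kW), the sketch below inverts the standard hydraulic power relation P = η·Q·Δp to estimate the flow rate that would be consistent with them. The relation and the psi-to-pascal conversion are standard; the computed flow is only an inference, not a reported measurement.

```python
PSI_TO_PA = 6894.76  # pascals per psi

def hydraulic_power_W(flow_m3_s, pressure_drop_Pa, efficiency):
    """Electrical power recovered from a pressure drop: P = eta * Q * dP."""
    return efficiency * flow_m3_s * pressure_drop_Pa

def implied_flow_m3_s(power_W, pressure_drop_Pa, efficiency):
    """Invert the relation to estimate the flow consistent with the quoted figures."""
    return power_W / (efficiency * pressure_drop_Pa)

dp = 60 * PSI_TO_PA                      # the quoted 60 psi pressure drop
q = implied_flow_m3_s(5000.0, dp, 0.64)  # 5 kW at 64% efficiency
print(f"Implied flow ~ {q * 1000:.1f} L/s")            # roughly 19 L/s
print(f"Check: {hydraulic_power_W(q, dp, 0.64):.0f} W")  # recovers ~5000 W
```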

Keywords: pressure reducing valve, renewable energy, sustainable energy, water supply

Procedia PDF Downloads 175
43 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads

Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom

Abstract:

Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria of fracture mechanics to assure stable and convergent solutions in finite-element (FE) analysis, particularly for relatively large structures. However, for small-scale structures such as beams and slabs, a simpler approach that relies on retaining some shear stiffness in the cracking plane has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increasing loading. According to the shear retaining approach, each element is assumed to be an isotropic material prior to cracking of the concrete. Once an element is cracked, the isotropic element is replaced with an orthotropic element in which the new orthotropic stiffness matrix is formulated with respect to the crack orientation. A shear transfer factor of 0.5 is used parallel to the crack plane. The shear retaining approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads, as they cause fatigue cracking of concrete. The first modification is the introduction of fatigue tests of concrete and reinforcing steel and the Palmgren-Miner linear criterion of cumulative damage into the conventional FE analysis. For a certain loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue or S-N curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, the damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of the failed element is modified and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated. The second modification concerns the shear transfer factor. Moving loads cause continuous rubbing of the crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis is conducted with a shear transfer factor of zero for the case of moving loading. A customized FE program has been developed using the MATLAB software to accommodate such modifications. The developed procedure has been validated with the fatigue test of a 1/6.6-scale AASHTO bridge deck under the application of both fixed-point repetitive loading and moving loading presented in the literature. Results show good agreement both between experimental and simulated S-N curves and between observed and simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations that can now be simulated at any desired level of cracking, in addition to the experimentally derived S-N relation at failure of the deck slab. This permits the systematic investigation of crack propagation or deterioration of RC bridge decks, which appears to be useful information for highway agencies seeking to prolong the life of their bridge decks.
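The damage-accumulation step described above can be sketched as follows: for each block of load cycles, the cycles applied are divided by the cycles to failure read from an S-N curve, and the element is deemed failed once the Palmgren-Miner sum reaches 1. The S-N coefficients and cycle blocks below are hypothetical placeholders, not the fatigue curves used in the study.

```python
def cycles_to_failure(stress_ratio, a, b):
    """Hypothetical S-N curve of the form N = 10**(a - b * S), where S is the
    applied stress-to-strength ratio; a and b stand in for the concrete or
    reinforcing-steel fatigue coefficients."""
    return 10.0 ** (a - b * stress_ratio)

def miner_damage(cycle_blocks, a=17.0, b=16.0):
    """Palmgren-Miner rule: D = sum(n_i / N_i); failure is assumed at D >= 1."""
    return sum(n / cycles_to_failure(s, a, b) for n, s in cycle_blocks)

# Each block is (applied cycles, stress ratio) for one wheel-load level.
blocks = [(2_000_000, 0.55), (500_000, 0.65), (50_000, 0.75)]
D = miner_damage(blocks)
print(f"Accumulated damage D = {D:.2f} -> {'failed' if D >= 1 else 'surviving'}")
```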

Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete

Procedia PDF Downloads 219
42 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time

Authors: Deepak Loura

Abstract:

The most contemporary method of producing horticulture crops both qualitatively and quantitatively is protected cultivation, or greenhouse cultivation, which has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is extremely productive, land- and water-efficient, as well as environmentally friendly. The technology entails growing horticulture crops in a controlled environment where variables such as temperature, humidity, light, soil, water, fertilizer, etc. are adjusted to achieve optimal output and enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural crop production systems are moving to protected environments as a result of the growing demand for high-quality products by global markets. By covering the crop, it is possible to control the macro- and microenvironments, enhancing plant performance and allowing for longer production times, earlier harvests, and higher yields of higher quality. These shielding features alter the environment of the plant while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops. If any of these high-value crops are cultivated in protected environments like greenhouses, net houses, tunnels, etc., this profit can be multiplied. Vegetable and cut flower post-harvest losses are extremely high (20–0%); however, sheltered growing techniques and year-round cropping can greatly reduce post-harvest losses and enhance yield by 5–10 times. Seasonality and weather have a big impact on the production of vegetables and flowers. The variability of their products results in significant price and quality fluctuations for vegetables. For the application of current technology in crop production, achieving a balance between year-round availability of vegetables and flowers with minimal environmental impact and remaining competitive is a significant problem. The future of agriculture lies in protected cultivation, since population growth is reducing the amount of land that can be held. Protected agriculture is a particularly profitable endeavor for small landholdings. Small greenhouses, net houses, nurseries, and low tunnel greenhouses can all be built by farmers to increase their income. The rise in biotic and abiotic stress factors also favours protected agriculture. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can thus be thought of as a kind of precise, forward-looking, parallel agriculture that covers almost all aspects of farming, subject to further examination of its technical applicability to local circumstances, farmer economics, and market economics.

Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture

Procedia PDF Downloads 49
41 Accurate Energy Assessment Technique for Mine-Water District Heat Network

Authors: B. Philip, J. Littlewood, R. Radford, N. Evans, T. Whyman, D. P. Jones

Abstract:

UK buildings and energy infrastructures are heavily dependent on natural gas, a large proportion of which is used for domestic space heating. However, approximately half of the gas consumed in the UK is imported. Improving energy security and reducing carbon emissions are major government drivers for reducing gas dependency. In order to do so, there needs to be a wholesale shift in the energy provision to householders without impacting on thermal comfort levels, convenience, or cost of supply to the end user. Heat pumps are seen as a potential alternative in modern, well-insulated homes; however, can the same be said of older homes? A large proportion of the housing stock in Britain was built prior to 1919. The age of the buildings bears testimony to the quality of construction; however, their thermal performance falls far below the minimum currently set by UK building standards. In recent years significant sums of money have been invested to improve energy efficiency and combat fuel poverty in some of the most deprived areas of Wales. Increasing the energy efficiency of older properties remains a significant challenge, which cannot be achieved through insulation and air-tightness interventions alone, particularly when alterations to historically important architectural features of the building are not permitted. This paper investigates the energy demand of pre-1919 dwellings in a former Welsh mining village, the feasibility of meeting that demand using water from the disused mine workings to supply a district heat network, and potential barriers to the success of the scheme. The use of renewable solar energy generation and storage technologies, both thermal and electrical, to reduce the load and offset increased electricity demand, is considered. A holistic surveying approach to provide a more accurate assessment of total household heat demand is proposed. Several surveying techniques, including condition surveys, air permeability, heat loss calculations, and thermography, were employed to provide a clear picture of energy demand. Additional insulation can bring unforeseen consequences which are detrimental to the fabric of the building, potentially leading to accelerated dilapidation of the asset being ‘protected’. Increasing ventilation should be considered in parallel, to compensate for the associated reduction in uncontrolled infiltration. The effectiveness of thermal performance improvements is demonstrated, and the detrimental effects of incorrect material choice and poor installation are highlighted. The findings show estimated heat demand to be in close correlation with household energy bills. Major areas of heat loss were identified such that improvements to building thermal performance could be targeted. The findings demonstrate that the use of heat pumps in older buildings is viable, provided sufficient improvement to thermal performance is possible. The addition of passive solar thermal and photovoltaic generation can help reduce the load and running cost for the householder. The results were used to predict future heat demand following energy efficiency improvements, thereby informing the size of heat pumps required.
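As a minimal illustration of the heat loss calculations mentioned above, the sketch below combines steady-state fabric losses (the sum of U·A·ΔT over building elements) with the common 0.33·n·V·ΔT ventilation term. The U-values, areas, air-change rate and temperatures are assumed figures for a generic pre-1919 dwelling, not survey results from this study.

```python
def fabric_loss_W(elements, delta_T):
    """Steady-state fabric loss: sum of U * A over walls, roof, glazing, floor, times dT."""
    return sum(U * A for U, A in elements) * delta_T

def ventilation_loss_W(ach, volume_m3, delta_T):
    """Infiltration/ventilation loss using the common 0.33 * n * V * dT approximation
    (0.33 Wh/m3K being the volumetric heat capacity of air)."""
    return 0.33 * ach * volume_m3 * delta_T

# Hypothetical pre-1919 solid-wall terrace, 21 C inside against a -1 C design outdoor temperature
elements = [(2.1, 55.0),   # solid brick walls: U (W/m2K), area (m2)
            (2.3, 40.0),   # uninsulated roof
            (4.8, 12.0),   # single glazing
            (1.2, 35.0)]   # suspended timber floor
dT = 22.0
demand_W = fabric_loss_W(elements, dT) + ventilation_loss_W(ach=1.5, volume_m3=180.0, delta_T=dT)
print(f"Design heat loss ~ {demand_W / 1000:.1f} kW")
```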

Keywords: heat demand, heat pump, renewable energy, retrofit

Procedia PDF Downloads 70
40 Ragging and Sludging Measurement in Membrane Bioreactors

Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd

Abstract:

Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact on costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin (lint from laundering operations formed zero rags) and on the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt as a function of flux in classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins, the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS, both with and without a commensurate increase in the COD. Results indicated that whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increase in the sludging propensity (or cake formation). The clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this it was surmised that sludging did not relate to fouling.
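The rheological characterisation described above (apparent viscosity versus shear rate) is commonly summarised with a power-law fit of the form μₐ = K·γ^(n-1), where n < 1 indicates shear-thinning. The sketch below shows such a fit on synthetic data; the viscosity values and exponents are illustrative and are not the measured properties of the two sludges.

```python
import numpy as np

def fit_power_law(shear_rate, apparent_viscosity):
    """Fit mu_a = K * gamma**(n-1) by linear regression in log-log space.
    Returns (K, n); n < 1 indicates shear-thinning behaviour."""
    slope, intercept = np.polyfit(np.log(shear_rate), np.log(apparent_viscosity), 1)
    return np.exp(intercept), slope + 1.0

# Synthetic example values, not measurements from the two MBR sludges
gamma = np.array([1.0, 5.0, 20.0, 100.0, 500.0])        # shear rate, 1/s
mu_municipal = 0.8 * gamma ** (0.35 - 1.0)               # strongly shear-thinning
mu_industrial = 1.6 * gamma ** (0.55 - 1.0)              # higher viscosity, less thinning
for name, mu in [("municipal", mu_municipal), ("industrial", mu_industrial)]:
    K, n = fit_power_law(gamma, mu)
    print(f"{name}: K = {K:.2f} Pa.s^n, n = {n:.2f}")
```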

Keywords: clogging, membrane bioreactors, ragging, sludge

Procedia PDF Downloads 150
39 Advantages of Matrix Solid Phase Dispersive (MSPD) Extraction Associated to MIPS versus MAE Liquid Extraction for the Simultaneous Analysis of PAHs, PCBs and Some Hydroxylated PAHs in Sediments

Authors: F. Portet-Koltalo, Y. Tian, I. Berger, C. Boulanger-Lecomte, A. Benamar, N. Machour

Abstract:

Sediments are complex environments which can accumulate a great variety of persistent toxic contaminants such as polychlorobiphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs) and some of their more toxic degradation metabolites such as hydroxylated PAHs (OH-PAHs). Owing to their composition, fine clayey sediments can be more difficult to extract than soils using conventional solvent extraction processes. This study therefore aimed to compare the potential of MSPD (matrix solid phase dispersive extraction) to extract PCBs, PAHs and OH-PAHs, in comparison with microwave assisted extraction (MAE). Methodologies: MAE extraction with various solvent mixtures was used to extract PCBs, PAHs and OH-PAHs from sediments in two runs, followed by two GC-MS analyses. MSPD consisted of crushing the dried sediment with dispersive agents, introducing the mixture into cartridges and eluting the target compounds with an appropriate volume of selected solvents. MSPD combined with cartridges containing MIPs (molecularly imprinted polymers) designed for OH-PAHs was thus used to extract the three families of target compounds in only one run, followed by parallel analyses in GC-MS for PAHs/PCBs and HPLC-FLD for OH-PAHs. Results: MAE extraction was optimized to extract from clayey sediments, in two runs, PAHs/PCBs on the one hand and OH-PAHs on the other. Indeed, the best extraction conditions (mixtures of extracting solvents, temperature) were different when considering the polarity and the thermodegradability of the different families of target contaminants: PAHs/PCBs were better extracted using an acetone/toluene 50/50 mixture at 130°C, whereas OH-PAHs were better extracted using an acetonitrile/toluene 90/10 mixture at 100°C. Moreover, the two consecutive GC-MS analyses doubled the total analysis time. A matrix solid phase dispersive (MSPD) extraction procedure was also optimized, with the first objective of increasing the extraction recovery yields of PAHs and PCBs from fine-grained sediment. The crushing time (2-10 min), the nature of the dispersing agents added for purifying and increasing the extraction yields (Florisil, octadecylsilane, 3-chloropropyl, 4-benzylchloride), and the nature and the volume of eluting solvents (methylene chloride, hexane, hexane/acetone…) were studied. It appeared that under the best conditions, MSPD was a better extraction method than MAE for PAHs and PCBs, with mean increases of 8.2% and 71%, respectively. This method was also faster, easier and less expensive. The other advantage of MSPD was that it made it easy to introduce, just after the first elution of PAHs/PCBs, a step permitting the selective recovery of OH-PAHs. A cartridge containing MIPs designed for phenols was coupled to the cartridge containing the dispersed sediment, and various eluting solvents, different from those used for PAHs and PCBs, were tested to selectively concentrate and extract OH-PAHs. Thereafter OH-PAHs could be analyzed at the same time as PAHs and PCBs: the OH-PAH extract could be analyzed with HPLC-FLD, whereas the PAHs/PCBs extract was analyzed with GC-MS, adding only a few minutes to the total duration of the analytical process. Conclusion: MSPD associated with MIPs appeared to be an easy, fast and inexpensive method, able to extract in one run a complex mixture of toxic apolar and more polar contaminants present in clayey fine-grained sediments, an environmental matrix which is generally difficult to analyze.

Keywords: contaminated fine-grained sediments, matrix solid phase dispersive extraction, microwave assisted extraction, molecularly imprinted polymers, multi-pollutant analysis

Procedia PDF Downloads 318
38 Two Component Source Apportionment Based on Absorption and Size Distribution Measurement

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Gábor Szabó, Zoltán Bozóki

Abstract:

Beyond its climate- and health-related issues, ambient light-absorbing carbonaceous particulate matter (LAC) has recently also attracted great scientific interest in terms of its regulation. It has been experimentally demonstrated in recent studies that LAC is dominantly composed of traffic and wood-burning aerosol, particularly under wintertime urban conditions, when photochemical and biological activities are negligible. Several methods have been introduced to quantitatively apportion aerosol fractions emitted by wood burning and traffic, but most of them require costly and time-consuming off-line chemical analysis. As opposed to chemical features, the microphysical properties of airborne particles such as optical absorption and size distribution can be easily measured on-line, with high accuracy and sensitivity, especially under highly polluted urban conditions. Recently a new method has been proposed for the apportionment of wood-burning and traffic aerosols based on the spectral dependence of their absorption, quantified by the Aerosol Ångström Exponent (AAE). In this approach the absorption coefficient is deduced from a transmission measurement on a filter-accumulated aerosol sample, and the conversion factor between the measured optical absorption and the corresponding mass concentration (the specific absorption cross section) is determined by on-site chemical analysis. The recently developed multi-wavelength photoacoustic instruments provide a novel, in-situ approach towards the reliable and quantitative characterization of carbonaceous particulate matter. This also opens up novel possibilities for source apportionment through the measurement of light absorption. In this study, we demonstrate an in-situ spectral characterization method for the ambient carbon fraction based on light absorption and size distribution measurements using our state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS) and a Scanning Mobility Particle Sizer (SMPS). The carbonaceous-particulate-selective source apportionment study was performed for ambient particulate matter in the city center of Szeged, Hungary, where the dominance of traffic and wood-burning aerosol has been experimentally demonstrated earlier. The proposed model is based on the parallel, in-situ measurement of optical absorption and size distribution. AAE_ff and AAE_wb were deduced from the measured data using the defined correlation between the AOC(1064 nm)/AOC(266 nm) and N100/N20 ratios. σ_ff(λ) and σ_wb(λ) were determined with the help of the independently measured temporal mass concentrations in the PM1 mode. Furthermore, the proposed optical source apportionment is based on the assumption that the light-absorbing fraction of PM is exclusively related to traffic and wood burning. This assumption is indirectly confirmed here by the fact that the measured size distribution is composed of two unimodal size distributions identified as corresponding to traffic and wood-burning aerosols. The method offers the possibility of replacing laborious chemical analysis with the simple in-situ measurement of aerosol size distribution data. The results of the proposed novel optical-absorption-based source apportionment method prove its applicability whenever measurements are performed at an urban site where traffic and wood burning are the dominant carbonaceous emission sources.
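A common way to carry out the two-component AAE apportionment described above is to solve a small linear system linking the absorption measured at two wavelengths to the traffic (fossil fuel) and wood-burning contributions, each assumed to scale as λ^(-AAE). The sketch below shows that step; the exponents and absorption values are assumed for illustration and are not the coefficients derived in this study.

```python
import numpy as np

def two_component_split(b_abs_short, b_abs_long, lam_short, lam_long, aae_ff, aae_wb):
    """Split the measured absorption at two wavelengths into fossil-fuel (traffic) and
    wood-burning components, each following b(lam) ~ lam**(-AAE).
    Returns the two components' contributions at the long wavelength."""
    r = lam_short / lam_long
    # b(short) = b_ff(long) * r**-aae_ff + b_wb(long) * r**-aae_wb
    # b(long)  = b_ff(long)              + b_wb(long)
    A = np.array([[r ** -aae_ff, r ** -aae_wb],
                  [1.0,          1.0]])
    b = np.array([b_abs_short, b_abs_long])
    return np.linalg.solve(A, b)

# Assumed exponents and synthetic absorption coefficients (Mm^-1), for illustration only
b_ff, b_wb = two_component_split(b_abs_short=60.0, b_abs_long=12.0,
                                 lam_short=266.0, lam_long=1064.0,
                                 aae_ff=1.0, aae_wb=2.0)
print(f"traffic: {b_ff:.1f} Mm^-1, wood burning: {b_wb:.1f} Mm^-1 at 1064 nm")
```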

Keywords: absorption, size distribution, source apportionment, wood burning, traffic aerosol

Procedia PDF Downloads 205
37 A Textile-Based Scaffold for Skin Replacements

Authors: Tim Bolle, Franziska Kreimendahl, Thomas Gries, Stefan Jockenhoevel

Abstract:

The therapeutic treatment of extensive, deep wounds is limited. Autologous split-skin grafts are used as the so-called ‘gold standard’. The most common deficits are the defects at the donor site, the risk of scarring, and the limited availability and quality of the autologous grafts. The aim of this project is a tissue-engineered dermal-epidermal skin replacement to overcome the limitations of the gold standard. A key requirement for the development of such a three-dimensional implant is the formation of a functional capillary-like network inside the implant to ensure a sufficient nutrient and gas supply. Tailored three-dimensional warp-knitted spacer fabrics are used to reinforce the mechanically weak fibrin gel-based scaffold and, further, to create a directed in vitro pre-vascularization along the parallel-oriented pile yarns within a co-culture. In this study, various three-dimensional warp-knitted spacer fabrics were developed in a factorial design to analyze the influence of machine parameters such as the stitch density and the pattern of the fabric on the scaffold performance, and further to determine suitable parameters for successful fibrin gel incorporation and a physiological performance of the scaffold. The fabrics were manufactured on a Karl Mayer double-bar raschel machine DR 16 EEC/EAC. A fine machine gauge of E30 was used to ensure a high pile yarn density for sufficient nutrient, gas and waste exchange. In order to ensure a high mechanical stability of the graft, the fabrics were made of biocompatible PVDF yarns. Key parameters such as the pore size, porosity and stress/strain behavior were investigated under standardized, controlled climate conditions. The influence of the input parameters on the mechanical and morphological properties, as well as the ability to incorporate fibrin gel into the spacer fabric, was analyzed. Subsequently, the pile yarns of the spacer fabrics were colonized with Human Umbilical Vein Endothelial Cells (HUVEC) to analyze the ability of the fabric to further function as a guiding structure for a directed vascularization. The cells were stained with DAPI and investigated using fluorescence microscopy. The analysis revealed that the stitch density and the binding pattern have a strong influence on both the mechanical and morphological properties of the fabric. As expected, the incorporation of the fibrin gel was significantly improved with higher pore sizes and porosities, whereas the mechanical strength decreased. Furthermore, the colonization trials revealed a high cell distribution and density on the pile yarns of the spacer fabrics. For a tailored reinforcing structure, the minimum porosity and pore size that still ensure complete incorporation of the reinforcing structure into the fibrin gel matrix need to be evaluated. That will enable a mechanically stable dermal graft with a dense vascular network for a sufficient nutrient and oxygen supply of the cells. The results are promising for subsequent research in the field of reinforcing mechanically weak biological scaffolds and developing functional three-dimensional scaffolds with an oriented pre-vascularization.

Keywords: fibrin-gel, skin replacement, spacer fabric, pre-vascularization

Procedia PDF Downloads 229
36 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments

Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi

Abstract:

Various kinds of pegboards have been developed and are used widely in rehabilitation research and clinics for the evaluation and training of patients’ hand function. A common measure with these peg boards is the total execution time, assessed with a tester’s stopwatch. The introduction of electrical and automatic measurement technology to the apparatus, on the other hand, has been delayed. The present work introduces the development of a pegboard with electric sensors to detect the moments of insertion and pull-out of individual pegs. The work also gives fundamental data obtained from a group of healthy young individuals who performed peg transfer tasks using the pegboard developed. Through trial and error in pilot tests, two 10-hole peg-board boxes, each installed with a small photo-reflector and a DC amplifier at the bottom of every hole, were designed and built by the present authors. The amplified electric analogue signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored in a PC. The boxes were set on a test table in parallel at different distances (25, 50, 75, and 125 mm) to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) served as subjects and performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data gathered showed clear-cut light interruption/continuation moments caused by the pegs, allowing the pull-out and insertion times of each peg to be determined accurately (no tester’s error involved) and precisely (to the order of milliseconds). This further permitted computation of individual peg movement duration (PMD: from peg lift-off to insertion) apart from hand reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long ( > mean + 3 SD) PMD, which was readily detected from an examination of the data distribution. The PMD data were commonly right-skewed, suggesting that the median can be a better estimate of individual PMD than the mean. Repeated measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed for accurate evaluation of PMD. The gender effect was non-significant. Performance consistency was also evaluated using quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. The measurement reliability was further examined using intraclass correlation coefficients obtained from 14 subjects who performed the 25 and 125 mm hole-distance tasks in two test sessions separated by 7-10 days. Intraclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We concluded that the sensor pegboard developed in the present study can provide accurate (excluding tester’s errors) and precise (at millisecond resolution) timing information for peg movement, separated from that for hand movement. It can also easily detect and automatically exclude a subject’s erroneous execution data from his/her standard data. These features would lead to a better evaluation of hand dexterity function compared to the widely used conventional peg boards.
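As an illustration of how insertion and pull-out moments can be extracted from the digitized photo-reflector signals, the sketch below thresholds a synthetic 500 Hz record and computes PMD from the resulting transition times. The signal polarity (high while a hole is empty, low while a peg is seated) and the threshold value are assumptions made for the example, not details of the actual apparatus.

```python
import numpy as np

FS = 500  # Hz, sampling rate per channel

def transition_times(signal, threshold):
    """Return (pull_out_times, insertion_times) in seconds, taken as the moments the
    signal crosses the threshold, assuming high = hole empty, low = peg seated."""
    above = signal > threshold
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1    # peg pulled out
    falling = np.flatnonzero(above[:-1] & ~above[1:]) + 1   # peg inserted
    return rising / FS, falling / FS

def peg_movement_durations(pull_out_source, insertion_target):
    """PMD: from lift-off at the source hole to insertion into the target hole."""
    n = min(len(pull_out_source), len(insertion_target))
    return insertion_target[:n] - pull_out_source[:n]

# Synthetic two-channel example: one source hole and one target hole
t = np.arange(0, 4.0, 1 / FS)
src = np.where(t > 1.0, 1.0, 0.1)   # peg leaves the source hole at t = 1.0 s
dst = np.where(t > 1.6, 0.1, 1.0)   # peg enters the target hole at t = 1.6 s
lift, _ = transition_times(src, 0.5)
_, ins = transition_times(dst, 0.5)
pmd = peg_movement_durations(lift, ins)
print(f"median PMD = {np.median(pmd) * 1000:.0f} ms")   # ~600 ms for this synthetic record
```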

Keywords: hand, dexterity test, peg movement time, performance consistency

Procedia PDF Downloads 102
35 South African Breast Cancer Mutation Spectrum: Pitfalls to Copy Number Variation Detection Using Internationally Designed Multiplex Ligation-Dependent Probe Amplification and Next Generation Sequencing Panels

Authors: Jaco Oosthuizen, Nerina C. Van Der Merwe

Abstract:

The National Health Laboratory Services in Bloemfontein has been the diagnostic testing facility for familial breast cancer for 1830 patients since 1997. From this cohort, 540 patients were comprehensively screened using High-Resolution Melting Analysis or Next Generation Sequencing for the presence of point mutations and/or indels. Approximately 90% of these patients still remain undiagnosed, as they are BRCA1/2 negative. Multiplex ligation-dependent probe amplification was initially added to screen for copy number variation but, with the introduction of next generation sequencing in 2017, was substituted and is currently used as a confirmation assay. The aim was to investigate the viability of utilizing internationally designed copy number variation detection assays, based mostly on European/Caucasian genomic data, for use within a South African context. The multiplex ligation-dependent probe amplification technique is based on the hybridization and subsequent ligation of multiple probes to a targeted exon. The ligated probes are amplified using conventional polymerase chain reaction, followed by fragment analysis by means of capillary electrophoresis. The experimental design of the assay was performed according to the guidelines of MRC-Holland. For BRCA1 (P002-D1) and BRCA2 (P045-B3), both multiplex assays were validated, and results were confirmed using a secondary probe set for each gene. The next generation sequencing technique is based on target amplification via multiplex polymerase chain reaction, whereafter the amplicons are sequenced in parallel on a semiconductor chip. Amplified read counts are visualized as relative copy numbers to determine the median of the absolute values of all pairwise differences. Various experimental parameters such as DNA quality, quantity, and signal intensity or read depth were verified using positive and negative patients previously tested internationally. DNA quality and quantity proved to be the critical factors during the verification of both assays. The quantity influenced the relative copy number frequency directly, whereas the quality of the DNA and its salt concentration influenced denaturation consistency in both assays. Multiplex ligation-dependent probe amplification produced false positives due to ligation failure when ligation was inhibited by a variant present within the ligation site. Next generation sequencing produced false positives due to read dropout when primer sequences did not meet optimal multiplex binding kinetics owing to population variants in the primer binding site. The analytical sensitivity and specificity for the South African population have been proven. Verification resulted in repeatable reactions with regard to the detection of relative copy number differences. Both the multiplex ligation-dependent probe amplification and next generation sequencing multiplex panels need to be optimized to accommodate South African polymorphisms present within the genetically diverse ethnic groups, in order to reduce the false-positive copy number variation rate and increase performance efficiency.
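The read-count-based relative copy number and the median-of-absolute-pairwise-differences metric mentioned above can be sketched as follows. The rescaling to a diploid median of 2, the adjacent-amplicon form of the pairwise differences, and the synthetic counts are illustrative assumptions, not the pipeline used by the laboratory.

```python
import numpy as np

def relative_copy_numbers(sample_counts, reference_counts):
    """Per-amplicon relative copy number: sample/reference depth ratio, rescaled so
    the median amplicon sits at a diploid value of 2."""
    ratio = np.asarray(sample_counts, float) / np.asarray(reference_counts, float)
    return 2.0 * ratio / np.median(ratio)

def mapd(copy_numbers):
    """Median of the absolute differences between adjacent amplicons, a common
    noise metric for read-depth-based CNV calling."""
    return np.median(np.abs(np.diff(copy_numbers)))

# Synthetic counts: amplicons 3-4 simulate a heterozygous single-exon deletion
sample = [950, 1010, 510, 495, 990, 1005]
reference = [1000, 1000, 1000, 1000, 1000, 1000]
cn = relative_copy_numbers(sample, reference)
print(np.round(cn, 2), "MAPD =", round(mapd(cn), 2))
```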

Keywords: familial breast cancer, multiplex ligation-dependent probe amplification, next generation sequencing, South Africa

Procedia PDF Downloads 185