Search results for: passive non-prehensile manipulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1063

193 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna

Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov

Abstract:

This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from a wider diffraction-limited area of the laser waist that might contain another substance. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of light-grating coupling and a search for grating parameters that enhance the near-field light beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and of the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on the geometry and material of the grating. A phase-modulating grating on the probe is a sort of metasurface that enables manipulation of the spatial frequencies of the incident field. The spatial frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light.
During propagation toward the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using the overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
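The figure-of-merit calculation sketched in the abstract can be illustrated numerically: decompose the grating-modulated field into its angular spectrum with an FFT and take the intensity of the component phase-matched to the surface plasmon, relative to the unit-intensity incident wave. All parameter values below (wavelength, effective mode index, modulation depth, grating extent) are illustrative assumptions, not values from the paper; the grating period is set to the second overtone to echo the authors' point that overtone periods are easier to fabricate.

```python
import numpy as np

# Assumed illumination and surface-mode parameters (illustrative only)
wavelength = 632.8e-9                 # HeNe line, an assumption
k0 = 2 * np.pi / wavelength
k_spp = 1.05 * k0                     # assumed effective index 1.05 for the surface plasmon

# 1-D sampling along the probe surface
N = 4096
dx = wavelength / 40
x = np.arange(N) * dx

# Pure phase modulation by a finite grating whose period is the 2nd overtone:
# 2*pi/period = k_spp/2, so the grating's 2nd harmonic is phase-matched.
period = 2 * (2 * np.pi / k_spp)
m = 0.5                               # assumed phase-modulation depth
in_grating = (x > x[-1] / 4) & (x < 3 * x[-1] / 4)
field = np.where(in_grating, np.exp(1j * m * np.cos(2 * np.pi * x / period)), 1.0 + 0j)

# Angular spectrum decomposition and the phase-matched component
spectrum = np.fft.fft(field) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
idx = np.argmin(np.abs(k - k_spp))

# Figure of merit: surface-mode intensity over the (unit) incident intensity
fom = np.abs(spectrum[idx]) ** 2
print(f"figure of merit at k_spp: {fom:.2e}")
```

Sweeping `period` (or the grating offset) in such a sketch reproduces the qualitative search the abstract describes; the material losses and mode compression during propagation toward the apex would enter as separate exponential and quadratic factors.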

Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna

Procedia PDF Downloads 283
192 Development of Methotrexate Nanostructured Lipid Carriers for Topical Treatment of Psoriasis: Optimization, Evaluation, and in vitro Studies

Authors: Yogeeta O. Agrawal, Hitendra S. Mahajan, Sanjay J. Surana

Abstract:

Methotrexate (MTX) is effective in controlling recalcitrant psoriasis when administered long-term by the oral or parenteral route. However, systemic use of the drug may provoke any of a number of side effects, notably hepatotoxic effects. To reduce these effects, clinical studies have been done with topical MTX, which is useful in treating a number of cutaneous conditions, including psoriasis. A major problem with the topical MTX formulations currently available on the market is that the drug is hydrosoluble and is mostly in the dissociated form at physiological pH; its capacity for passive diffusion is thus limited. Localization of MTX in the affected layers of skin is likely to improve the role of a topical dosage form of the drug as a supplement to oral therapy for the treatment of psoriasis. One of the possibilities for increasing the penetration of drugs through the skin is the use of nanostructured lipid carriers (NLCs). The objective of the present study was to formulate and characterize methotrexate-loaded nanostructured lipid carriers (MTX-NLCs), to understand in vitro drug release, and to evaluate the role of the developed gel in the topical treatment of psoriasis. MTX-NLCs were prepared by the solvent diffusion technique using a 3² full factorial design. The mean diameter and surface morphology of the MTX-NLCs were evaluated. The MTX-NLCs were lyophilized, and the crystallinity of the NLCs was characterized by differential scanning calorimetry (DSC) and powder X-ray diffraction (XRD). The NLCs were incorporated into a 1% w/w Carbopol 934P gel base, and in vitro skin deposition studies in human cadaver skin were conducted. The optimized MTX-NLCs were spherical in shape, with an average particle size of 253 (±9.92) nm, a zeta potential of -30.4 (±0.86) mV, and an entrapment efficiency of 53.12 (±1.54)%. DSC and XRD data confirmed the formation of NLCs. Significantly higher deposition of methotrexate was found in human cadaver skin from the MTX-NLC gel (71.52 ±1.23%) than from the plain MTX gel (54.28 ±1.02%).
The findings suggest a significant improvement in the therapeutic index for the treatment of psoriasis with the MTX-NLC gel developed in this investigation over the plain drug gel currently available on the market.

Keywords: methotrexate, psoriasis, NLCs, hepatotoxic effects

Procedia PDF Downloads 430
191 Peer Bullying and Mentalization from the Perspective of Pupils

Authors: Anna Siegler

Abstract:

Bullying among peers is not uncommon; however, adults notice only a fragment of the cases of harassment in everyday life. Systemic approaches to bullying investigation put the whole school community in the focus of attention and propose that the solution should emerge from the culture of the school. Bystanders are essential in prevention and intervention processes as active agents rather than passive ones. To combat exclusion, stigmatization, and harassment, it is important that bystanders realize they have the power to take action. To prevent the escalation of violence, victims must believe that students and teachers will help them and that their environment is able to provide safety. The study is based on a scientific narrative psychological approach and focuses on examining the different perspectives of students: how peers mentalize with each other in cases of bullying. The data collection comprised responses of students (N = 138) from three schools in Hungary, in three different areas of the country (Budapest, Martfű, and Barcs). The test battery included the Bullying Prevalence Questionnaire, the Interpersonal Reactivity Index, and an instruction to elicit narratives about bullying, whose effectiveness was tested in a pilot study. The results obtained are in line with the findings of previous bullying research: victims mentalize less with their peers and experience greater personal distress when they are in identity-threatening situations, thus focusing on their own difficulties rather than on social signals. This isolation is an adaptive response in the short term, although it seems to lead to a deficit in social skills later in life and makes it difficult for students to become integrated into society. In addition, the results also show that students use more mental state attribution when they report verbal bullying than in cases of physical abuse.
Those who witness physical harassment also witness concrete responses to the problem from teachers; in contrast, verbal abuse often stays without consequences. According to the results, students mentalize more in these stories because they have less normative explanation for what happened. By expanding the bullying literature, this research helps find ways to reduce school violence through community development.

Keywords: bullying, mentalization, narrative, school culture

Procedia PDF Downloads 164
190 Broad Survey of Fine Root Traits to Investigate the Root Economic Spectrum Hypothesis and Plant-Fire Dynamics Worldwide

Authors: Jacob Lewis Watts, Adam F. A. Pellegrini

Abstract:

Prairies, grasslands, and forests cover an expansive portion of the world’s surface and contribute significantly to Earth’s carbon cycle. The largest driver of carbon dynamics in some of these ecosystems is fire. As the global climate changes, most fire-dominated ecosystems will experience increased fire frequency and intensity, leading to increased carbon flux into the atmosphere and soil nutrient depletion. The plant communities associated with different fire regimes are important for the reassimilation of carbon lost during fire and for soil recovery. More frequent fires promote conservative plant functional traits aboveground; however, belowground fine-root traits are poorly explored and, as the primary interface between the soil and the plant, arguably more important drivers of ecosystem function. The root economic spectrum (RES) hypothesis describes single-dimensional covariation between important fine-root traits along a range of plant strategies from acquisitive to conservative, parallel to the well-established leaf economic spectrum (LES). However, because of the paucity of root trait data, the complex nature of the rhizosphere, and the phylogenetic conservatism of root traits, it is unknown whether the RES hypothesis accurately describes plant nutrient and water acquisition strategies. This project utilizes plants grown in common garden conditions in the Cambridge University Botanic Garden and a meta-analysis of long-term fire manipulation experiments to examine the belowground physiological traits of fire-adapted and non-fire-adapted herbaceous species in order to 1) test the RES hypothesis and 2) describe the effect of fire regimes on fine-root functional traits, which in turn affect carbon and nutrient cycling. A suite of morphological, chemical, and biological root traits (e.g., root diameter, specific root length, percent N, percent mycorrhizal colonization) of 50 herbaceous species were measured and tested for phylogenetic conservatism and RES dimensionality.
The traits of fire-adapted and non-fire-adapted plants were compared using phylogenetic PCA techniques. Preliminary evidence suggests that phylogenetic conservatism may weaken the single-dimensionality of the RES, suggesting that there may not be a single way that plants optimize nutrient and water acquisition and storage in the complex rhizosphere. Additionally, fire-adapted species are expected to be more conservative than non-fire-adapted species, which may be indicative of slower carbon cycling with increasing fire frequency and intensity.
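The dimensionality test at the heart of the RES hypothesis can be sketched with an ordinary (non-phylogenetic) PCA on a standardized species-by-traits matrix; a true phylogenetic PCA would additionally weight by the phylogenetic covariance among species. The trait data below are synthetic, built around one latent acquisitive-conservative axis purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_species = 50
traits = ["root_diameter", "specific_root_length", "percent_N", "myc_colonization"]

# Synthetic traits: one latent acquisitive-conservative axis plus noise
latent = rng.normal(size=(n_species, 1))
loadings = np.array([[-0.8, 0.9, 0.7, -0.5]])   # assumed signs, e.g. thin roots <-> high SRL
X = latent @ loadings + 0.4 * rng.normal(size=(n_species, len(traits)))

# Standardize each trait, then PCA via SVD
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# A single-dimensional RES predicts PC1 dominating the trait covariation
print("variance explained per axis:", np.round(var_explained, 2))
```

If phylogenetic conservatism inflates trait covariance within clades, the apparent dominance of PC1 can weaken once relatedness is accounted for, which is the pattern the preliminary evidence above points to.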

Keywords: climate change, fire regimes, root economic spectrum, fine roots

Procedia PDF Downloads 123
189 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in industry, especially in the elaboration of animal source products. Incorrect manipulation of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or at least slow down the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out at low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; the release, production, and artificial incorporation of bacteriocins; and changes in the pH level of the medium. All three dimensions are constantly influenced by the temperature of the medium. Secondly, the model is adapted to an idealized situation of cross-contamination in animal source food processing, the study agents being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model was the finding that, although bacterial growth can be stopped at lower temperatures, even lower ones are needed to eradicate it.
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that acclimating a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical model provides a broadly applicable framework that can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
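A minimal sketch of such a temperature-influenced, three-dimensional model is given below: logistic bacterial growth, bacteriocin production and decay, and acidification of the medium. All functional forms and parameter values are invented for illustration; they are not the authors' equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, T):
    """Toy 3-D model: bacteria N (CFU), bacteriocin B (AU), medium pH."""
    N, B, pH = y
    mu = 0.5 * max(0.0, (T - 4.0) / 30.0)          # growth rate rises with T, halts near 4 C
    dN = mu * N * (1.0 - N / 1e9) - 1e-9 * B * N   # logistic growth minus bacteriocin kill
    dB = 1e-6 * N - 0.05 * B                       # production by the flora, first-order decay
    dpH = -1e-10 * N * (pH - 4.0)                  # lactic acid bacteria acidify toward pH ~4
    return [dN, dB, dpH]

y0 = [1e3, 0.0, 6.8]                               # initial contamination, no bacteriocin
cold = solve_ivp(rhs, (0.0, 48.0), y0, args=(4.0,), rtol=1e-8)
warm = solve_ivp(rhs, (0.0, 48.0), y0, args=(25.0,), rtol=1e-8)
print(f"N after 48 h: cold {cold.y[0, -1]:.2e} CFU, warm {warm.y[0, -1]:.2e} CFU")
```

Even this toy version reproduces two of the qualitative findings: at refrigeration temperature growth is merely halted, not eradicated, while at warm temperature the population saturates and the pH falls, so a low pH tracks bacterial growth.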

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 144
188 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision making amid the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was performed while maintaining a list of five random letters in working memory. In the second, cognitive load was based on the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy or financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and the level of financial literacy (but not numeracy) was found for the frequency of choice of the gambling option. Overall, in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less literate counterparts to choose the gambling option.
This outcome is interpreted as evidence that financial literacy prevents intuitive risk-averse reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories of intuitive decision making, suggesting that while some instances of expertise (such as numeracy) tend to support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, the applied implications of the present study are discussed, with a focus on how it informs financial regulators and on the importance and limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 182
187 Assessing the Material Determinants of Cavity Polariton Relaxation Using Angle-Resolved Photoluminescence Excitation Spectroscopy

Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury

Abstract:

Cavity polaritons form when molecular excitons couple strongly to photons in carefully constructed optical cavities. These polaritons, which are hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could potentially improve the functionality of many optoelectronic devices, such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospect of leveraging polariton formation for novel devices and device operation depends on a more complete connection between the properties of molecular chromophores and those of the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of polaritons relative to bare molecules. To leverage the spatial delocalization of cavity polaritons, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the energy absorbed by the polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence.
By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. These results show how intrinsic properties of molecules, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While it is true that the lifetime of the photon mediates the rate of relaxation in a cavity, the results from this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.

Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling

Procedia PDF Downloads 73
186 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time

Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla

Abstract:

Society demands more reliable manufacturing processes, capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, among which Fault-Tolerant Control (FTC) plays a significant role: it is suitable for detecting, isolating, and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is given, highlighting the properties a system must ensure to be considered fault-tolerant. In addition, the main FTC techniques are identified and classified, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages: one focused on the detection, isolation, and identification of the fault source, and the other in charge of re-designing the control algorithm via two approaches, fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analysing how the system responds when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been selected as the methodology the system follows in the fault recovery process. In the first instance, the fault is detected, isolated, and identified by means of a neural network. In the second instance, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
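The two-stage AFTC recovery described above (fault detection, isolation, and identification, then control re-configuration) can be sketched as a supervision loop. The threshold rule below is only a stub standing in for the paper's neural-network FDI block, and the gain re-tuning is a hypothetical example of fault accommodation; neither reflects the authors' actual algorithm.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Controller:
    kp: float            # proportional gain of the press position loop (illustrative)
    label: str

def detect_isolate_identify(residual: float) -> Optional[str]:
    """Stage 1 (FDI): stub replacing the neural network; returns a fault class or None."""
    return "actuator_leak" if abs(residual) > 0.2 else None   # hypothetical fault class

def reconfigure(ctrl: Controller, fault: str) -> Controller:
    """Stage 2: re-design the control algorithm (here, simple fault accommodation)."""
    if fault == "actuator_leak":
        return Controller(kp=ctrl.kp * 1.5, label=ctrl.label + "+accommodated")
    return ctrl

ctrl = Controller(kp=1.0, label="hydraulic_press_loop")
for residual in [0.05, 0.08, 0.35]:       # residuals streamed from the real-time model
    fault = detect_isolate_identify(residual)
    if fault is not None:
        ctrl = reconfigure(ctrl, fault)   # recovery proceeds without human interaction
print(ctrl)
```

A PFTC design, by contrast, would have no such supervision loop: the nominal controller itself would be made robust enough to tolerate the fault unmodified.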

Keywords: fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time

Procedia PDF Downloads 177
185 Small-Group Case-Based Teaching: Effects on Student Achievement, Critical Thinking, and Attitude toward Chemistry

Authors: Reynante E. Autida, Maria Ana T. Quimbo

Abstract:

The chemistry education curriculum provides an excellent avenue where students learn the principles and concepts of chemistry and, at the same time, because chemistry is a central science, better understand related fields. However, the teaching approach used by teachers affects student learning. Case-based teaching (CBT) is one of the various forms of the inductive method: the teacher starts with specifics and then proceeds to general principles. The students’ role in inductive learning shifts from being passive, as in the traditional approach, to being active. In this paper, the effects of Small-Group Case-Based Teaching (SGCBT) on college chemistry students’ achievement, critical thinking, and attitude toward chemistry, including the relationships between each of these variables, were determined. A quasi-experimental counterbalanced design with a pre-post control group was used to determine the effects of SGCBT on engineering students in four intact classes (two treatment groups and two control groups) in one of the state universities in Mindanao. The independent variable is the type of teaching approach (SGCBT versus pure lecture-discussion teaching, or PLDT), while the dependent variables are chemistry achievement (exam scores) and scores in critical thinking and chemistry attitude. Both analysis of covariance (ANCOVA) and t-tests (within and between groups, and on gain scores) were used to compare the effects of SGCBT and PLDT on students’ chemistry achievement, critical thinking, and attitude toward chemistry, while Pearson product-moment correlation coefficients were calculated to determine the relationships between the variables. Results show that the use of SGCBT fosters a positive attitude toward chemistry and provides some indications of improved chemistry achievement compared with PLDT. Meanwhile, the effects of PLDT and SGCBT on critical thinking are comparable.
Furthermore, correlational analysis and focus group interviews indicate that the use of SGCBT not only supports the development of a positive attitude toward chemistry but also improves students' chemistry achievement. Implications are provided in view of these findings on SGCBT, and topics for further research are presented as well.
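The between-groups comparison and the correlational analysis named above can be sketched with standard statistical tools; the gain scores below are synthetic numbers invented to show the mechanics, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic gain scores (post minus pre) for the two teaching approaches
gain_sgcbt = rng.normal(loc=12.0, scale=4.0, size=40)
gain_pldt = rng.normal(loc=8.0, scale=4.0, size=40)

# Between-groups t-test on gain scores
t_stat, p_val = stats.ttest_ind(gain_sgcbt, gain_pldt)

# Pearson product-moment correlation: achievement gain vs. attitude score
attitude = 0.5 * gain_sgcbt + rng.normal(scale=2.0, size=40)
r, p_r = stats.pearsonr(gain_sgcbt, attitude)

print(f"t = {t_stat:.2f} (p = {p_val:.4f}); r = {r:.2f}")
```

An ANCOVA on post-test scores with the pre-test as covariate would additionally adjust for baseline differences between the intact classes, which is why the study reports both analyses.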

Keywords: case-based teaching, small-group learning, chemistry cases, chemistry achievement, critical thinking, chemistry attitude

Procedia PDF Downloads 297
184 The Fragility of Sense: The Twofold Temporality of Embodiment and Its Role for Depression

Authors: Laura Bickel

Abstract:

This paper aims to investigate to what extent Merleau-Ponty’s philosophy of body memory serves as a viable resource for the enactive approach to cognitive science and its first-person, experience-based research on ‘recurrent depressive disorder’, coded F33 in the ICD-10. In pursuit of this goal, the analysis begins by revisiting the neuroreductive paradigm, which serves biological psychiatry in explaining the condition of vital contact in terms of underlying neurophysiological mechanisms. It is demonstrated that the neuroreductive model cannot sufficiently account for the depressed person’s episodic withdrawal in causal terms; the analysis of the irregular loss of vital resonance requires integrating the body as the subject of experience and its phenomenological time. It is then shown that the enactive approach to depression as disordered sense-making is a promising alternative. The enactive model of perception implies that living beings do not register pre-existing meaning ‘out there’ but unfold ‘sense’ in their action-oriented response to the world. For the enactive approach, Husserl’s passive synthesis of inner time consciousness is fundamental for what becomes perceptually present for action. It seems intuitive to bring together the enactive approach to depression with the long-standing view in phenomenological psychopathology that explains the loss of vital contact by appealing to the disruption of the temporal structure of consciousness. However, this paper argues that the disruption of the temporal structure is not justified conceptually. Instead, one may integrate Merleau-Ponty’s concept of the past as the unconscious into the enactive approach to depression. From this perspective, the living being’s experiential and biological past inserts itself in the form of habits and bodily skills and ensures action-oriented responses to the environment.
Finally, it is concluded that the depressed person’s withdrawal indicates an impairment of this actualization process: the person suffering from F33 cannot actualize sedimented meaning to respond to the valences and tasks of a given situation.

Keywords: depression, enactivism, neuroreductionism, phenomenology, temporality

Procedia PDF Downloads 132
183 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces

Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang

Abstract:

Brain-machine interfaces (BMIs) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with myriad forms of external devices. Research and intensive development have evolved into various areas, from the medical field through the gaming and entertainment industry to the safety and security field. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson’s disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a brain-machine interface system that observes, records, and alters neural signals in real time will require a significant amount of effort to overcome the obstacles to improving such a system without delay in response. To date, the feature size of interface devices and the density of the electrode population remain limitations to achieving seamless BMI performance. Currently, BMI devices have electrode diameters ranging from 10 to 100 microns. Hence, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical systems (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and a bottom anti-reflective coating (BARC) etch. Metallization of the nanowire electrode tip is a prominent process for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, and yet these metal contacts outline a size scale that is larger than nanometer-scale building blocks, further limiting potential advantages.
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.

Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide

Procedia PDF Downloads 435
182 A Study on Conventional and Improved Tillage Practices for Sowing Paddy in Wheat Harvested Field

Authors: R. N. Pateriya, T. K. Bhattacharya

Abstract:

In India, rice-wheat cropping system occupies the major area and contributes about 40% of the country’s total food grain production. It is necessary that production of rice and wheat must keep pace with growing population. However, various factors such as degradation in natural resources, shift in cropping pattern, energy constraints etc. are causing reduction in the productivity of these crops. Seedbed for rice after wheat is difficult to prepare due to presence of straw and stubbles, and require excessive tillage operations to bring optimum tilth. In addition, delayed sowing and transplanting of rice is mainly due to poor crop residue management, multiplicity of tillage operations and non-availability of the power source. With increasing concern for fuel conservation and energy management, farmers might wish to estimate the best cultivation system for more productivity. The widest spread method of tilling land is ploughing with mould board plough. However, with the mould board plough upper layer of soil is neither always loosened at the desired extent nor proper mixing of different layers are achieved. Therefore, additional operations carried out to improve tilth. The farmers are becoming increasingly aware of the need for minimum tillage by minimizing the use of machines. Soil management can be achieved by using the combined active-passive tillage machines. A study was therefore, undertaken in wheat-harvested field to study the impact of conventional and modified tillage practices on paddy crop cultivation. Tillage treatments with tractor as a power source were selected during the experiment. The selected level of tillage treatments of tractor machinery management were (T1:- Direct Sowing of Rice), (T2:- 2 to 3 harrowing and no Puddling with manual transplanting), (T3:- 2 to 3 harrowing and Puddling with paddy harrow with manual transplanting), (T4:- 2 to 3 harrowing and Puddling with Rotavator with manual transplanting). 
The maximum output was obtained with treatment T1 (7.85 t/ha), followed by T4 (6.4 t/ha), T3 (6.25 t/ha) and T2 (6.0 t/ha).

Keywords: crop residues, cropping system, minimum tillage, yield

Procedia PDF Downloads 208
181 Enhancement of Cross-Linguistic Effect with the Increase in the Multilingual Proficiency during Early Childhood: A Case Study of English Language Acquisition by a Pre-School Child

Authors: Anupama Purohit

Abstract:

The paper is a study of the inevitable cross-linguistic effect found in early multilingual learners. Cross-linguistic behaviours such as code-mixing, code-switching, foreign accent, literal translation, redundancy and syntactic manipulation, effected by other languages on the English-language output of a non-native pre-school child, are discussed here. A case study method is adopted to support the claim of the title. The language behaviour of a simultaneously tetralingual pre-school child (from age 1;3 to 4;0) is analysed. The sample output data of the child were gathered from diary entries maintained by her family, regular observations, and video recordings made since her birth. She receives input in her mother tongue, Sambalpuri, from her grandparents only; in Hindi, the local language, from her play-school and the neighbourhood; in English only from her mother and occasional visits of family friends; and in Odia only during the reading of Odia story books. The child was exposed to code-mixing of all these languages throughout her childhood, but code-mixing, literal translation, redundancy and duplication were absent at her initial stage of multilingual acquisition. As the child was more proficient in English than in her other first languages and had never heard code-mixing in English, it was expected from her input pattern of English (one parent, English language) that she would maintain purity in her use of English while talking to an English-language interlocutor. But with the gradual increase in her proficiency in each language, her handling of the multiple codes became cross-linguistically deft. It can be deduced from the case study that after attaining a certain milestone proficiency in each language, the child’s linguistic faculty can operate at a metalinguistic level. 
The functional use of each morpheme, their arrangement in words and sentences, the suprasegmental features, lexical-semantic mapping, culture-specific use of a language and pragmatic skills converge to give a typically childlike multilingual output, intelligible to multilingual people with the same set of languages in combination. The result is appealing because, for the same ideas the child used to express (perhaps with grammatically wrong expressions) in one language, she gradually starts showing cross-linguistic effects in her expressions. The paper therefore pleads for the separatist view from the very beginning of the holophrastic phase (as the child expresses herself in addressee-specific language); but the development of a metalinguistic ability that helps the child communicate in a sophisticated way, according to the linguistic status of the addressee, is unique to the multilingual child. This metalinguistic ability is independent of the mode of input of a multilingual child.

Keywords: code-mixing, cross-linguistic effect, early multilingualism, literal translation

Procedia PDF Downloads 299
180 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack

Authors: Varun Agarwal

Abstract:

Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a full-scan, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation by adding the convolved outputs of the inception unit to the preceding input data. Moreover, to augment the performance of the transfer-learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder (primarily consisting of convolutional, leaky rectified linear unit, and batch normalization layers) and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise. 
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network enhanced with the CDAE stack yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
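The WSI partitioning and region-of-interest stages of the pipeline can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the tile size, the `tissue_mask` callback, and the tissue-fraction threshold are assumptions standing in for the saliency/ROI detector described in the abstract.

```python
def partition_into_tiles(width, height, tile=256, stride=256):
    """Yield (x, y, w, h) tile boundaries covering a whole-slide image.

    Edge tiles are clipped so every pixel is covered exactly once.
    """
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            yield (x, y, min(tile, width - x), min(tile, height - y))


def filter_roi(tiles, tissue_mask, min_tissue=0.1):
    """Keep only tiles whose tissue fraction exceeds a threshold.

    tissue_mask(x, y, w, h) -> fraction of tissue pixels in the tile;
    here a hypothetical stand-in for the saliency-detection step.
    """
    return [t for t in tiles if tissue_mask(*t) >= min_tissue]


# Toy usage: a 1000x600 slide where only the left half contains tissue.
tiles = list(partition_into_tiles(1000, 600, tile=256))
roi = filter_roi(tiles, lambda x, y, w, h: 1.0 if x < 512 else 0.0)
```

Only the retained ROI tiles would then be passed to the CNN classifier, which is what keeps per-slide inference tractable.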

Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images

Procedia PDF Downloads 130
179 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the field of sensors, electronics and computing results in ever more frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second system is a safety air cushion used for the evacuation of people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the system’s adaptation is completely different. The reason for this is the technical limitations of the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal change of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas of thousands of square centimeters. Because of this, the two shock-absorbing systems are controlled with completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled with a semi-passive technique, where adaptation is provided using a prediction of the entire impact mitigation process. Similarities of the two approaches, including the applied models, algorithms and equipment, are discussed. 
The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 90
178 Biodegradable Self-Supporting Nanofiber Membranes Prepared by Centrifugal Spinning

Authors: Milos Beran, Josef Drahorad, Ondrej Vltavsky, Martin Fronek, Jiri Sova

Abstract:

While most nanofibers are produced using electrospinning, this technique suffers from several drawbacks, such as the requirement for specialized equipment, high electrical potential, and electrically conductive targets. Consequently, recent years have seen the emergence of novel strategies for generating nanofibers at larger scale and higher throughput. Centrifugal spinning is a simple, cheap and highly productive technology for nanofiber production. In principle, the drawing of a solution filament into nanofibers using centrifugal spinning is achieved through the controlled manipulation of centrifugal force, viscoelasticity, and the mass transfer characteristics of the spinning solutions. Engineering efforts of researchers at the Food Research Institute Prague and the Czech Technical University in the field of centrifugal nozzleless spinning led to the introduction of a pilot plant demonstrator, NANOCENT. The main advantages of the demonstrator are lower investment cost (thanks to a simpler construction compared to widely used electrospinning equipment), higher production speed, new application possibilities, and easy maintenance. Centrifugal nozzleless spinning is especially suitable for producing submicron fibers from polymeric solutions in highly volatile solvents, such as chloroform, DCM, THF, or acetone. To date, submicron fibers have been prepared from PS, PUR and biodegradable polyesters, such as PHB, PLA, PCL, or PBS. The products take the form of 3D structures or nanofiber membranes. Unique self-supporting nanofiber membranes were prepared from the biodegradable polyesters in different mixtures. The nanofiber membranes have been tested for different applications, and filtration efficiencies for water solutions and for aerosols in air were evaluated. 
Different active inserts were added to the solutions before the spinning process, such as inorganic nanoparticles, organic precursors of metal oxides, antimicrobial and wound-healing compounds, or photocatalytic phthalocyanines. Sintering can subsequently be carried out to remove the polymeric material and convert the organic precursors to metal oxides, such as SiO2, or photocatalytic ZnO2 and TiO2, to obtain inorganic nanofibers. Electrospinning is a more suitable technology than centrifugal nozzleless spinning for producing filtration membranes, because it forms more homogeneous nanofiber layers and fibers with smaller diameters. The self-supporting nanofiber membranes prepared from the biodegradable polyesters are especially suitable for medical applications, such as wound or burn dressings and tissue engineering scaffolds. This work was supported by research grant TH03020466 of the Technology Agency of the Czech Republic.

Keywords: polymeric nanofibers, self-supporting nanofiber membranes, biodegradable polyesters, active inserts

Procedia PDF Downloads 165
177 Diminishing Constitutional Hyper-Rigidity by Means of Digital Technologies: A Case Study on E-Consultations in Canada

Authors: Amy Buckley

Abstract:

The purpose of this article is to assess the problem of constitutional hyper-rigidity to consider how it and the associated tensions with democratic constitutionalism can be diminished by means of using digital democratic technologies. In other words, this article examines how digital technologies can assist us in ensuring fidelity to the will of the constituent power without paying the price of hyper-rigidity. In doing so, it is impossible to ignore that digital strategies can also harm democracy through, for example, manipulation, hacking, ‘fake news,’ and the like. This article considers the tension between constitutional hyper-rigidity and democratic constitutionalism and the relevant strengths and weaknesses of digital democratic strategies before undertaking a case study on Canadian e-consultations and drawing its conclusions. This article observes democratic constitutionalism through the lens of the theory of deliberative democracy to suggest that the application of digital strategies can, notwithstanding their pitfalls, improve a constituency’s amendment culture and, thus, diminish constitutional hyper-rigidity. Constitutional hyper-rigidity is not a new or underexplored concept. At a high level, a constitution can be said to be ‘hyper-rigid’ when its formal amendment procedure is so difficult to enact that it does not take place or is limited in its application. This article claims that hyper-rigidity is one problem with ordinary constitutionalism that fails to satisfy the principled requirements of democratic constitutionalism. Given the rise and development of technology that has taken place since the Digital Revolution, there has been a significant expansion in the possibility for digital democratic strategies to overcome the democratic constitutionalism failures resulting from constitutional hyper-rigidity. 
Typically, these strategies have included, inter alia, e-consultations, e-voting systems, and online polling forums, all of which significantly improve the ability of politicians and judges to directly obtain the opinions of constituents on any number of matters. This article expands on the application of these strategies through its Canadian e-consultation case study and presents them as a solution to poor amendment culture and, consequently, constitutional hyper-rigidity. Hyper-rigidity is a common descriptor of many written and unwritten constitutions, including the United States, Australian, and Canadian constitutions, to name a few. This article undertakes a case study on Canada in particular, as it is a jurisdiction less commonly cited in the academic literature concerned with hyper-rigidity, and because Canada has, to some extent, championed the use of e-consultations. In Part I of this article, I identify the problem: the consequences of constitutional hyper-rigidity are in tension with the principles of democratic constitutionalism. In Part II, I identify and explore a potential solution, the implementation of digital democratic strategies as a means of reducing constitutional hyper-rigidity. In Part III, I explore Canada’s e-consultations as a case study for assessing whether digital democratic strategies do, in fact, improve a constituency’s amendment culture, thus reducing constitutional hyper-rigidity and the associated tension with the principles of democratic constitutionalism. The idea is to run a case study and then assess whether its conclusions can be generalised.

Keywords: constitutional hyper-rigidity, digital democracy, deliberative democracy, democratic constitutionalism

Procedia PDF Downloads 76
176 Application of Biomimetic Approach in Optimizing Buildings Heat Regulating System Using Parametric Design Tools to Achieve Thermal Comfort in Indoor Spaces in Hot Arid Regions

Authors: Aya M. H. Eissa, Ayman H. A. Mahmoud

Abstract:

When it comes to energy-efficient thermal regulation systems, natural systems offer not only an inspirational source of innovative strategies but also sustainable and even regenerative ones. Using biomimetic design, an energy-efficient thermal regulation system can be developed. Although conventional design process methods achieved fairly efficient systems, they still had limitations which can be overcome by using parametric design software. Accordingly, the main objective of this study is to apply, and assess the efficiency of, heat regulation strategies inspired by termite mounds in residential buildings’ thermal regulation systems. Parametric design software is used to pave the way for further and more complex biomimetic design studies and implementations. A hot arid region is selected due to the deficiency of research in this climatic region. First comes the analysis phase, in which the affecting stimuli and the parameters to be optimized are set, mimicking the natural system. Then, based on climatic data and using the parametric design software Grasshopper, the building form and the heights and areas of openings are altered until an optimized solution is reached. Finally, the efficiency of the optimized system, compared with a conventional system, is assessed: first, by indoor airflow and indoor temperature, simulated with Ansys Fluent (CFD); second, by the total solar radiation falling on the building envelope, calculated using Ladybug, a Grasshopper plugin. The results show an increase in the average indoor airflow speed from 0.5 m/s to 1.5 m/s. A slight decrease in temperature was also noticed, and the total radiation was decreased by 4%. In conclusion, although applying a single bio-inspired heat regulation strategy might not be enough to achieve an optimum system, the concluded system is more energy efficient than conventional ones, as it helps achieve indoor comfort through passive techniques. 
This demonstrates the potential of parametric design software in biomimetic design.
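The optimization loop described above (altering opening heights and areas until an optimized solution is reached) can be sketched as a brute-force parameter sweep. This is a heavily simplified, hypothetical stand-in for the Grasshopper/Ladybug workflow: the `score` function below is an invented proxy objective, not the CFD or radiation simulation used in the study.

```python
def score(opening_height, opening_area):
    """Hypothetical proxy objective: reward airflow, penalize solar gain.

    In the actual workflow this role is played by the Ansys Fluent and
    Ladybug simulations; the coefficients here are arbitrary.
    """
    airflow = opening_area * (1.0 + 0.1 * opening_height)  # proxy term
    solar_gain = 0.5 * opening_area                        # proxy term
    return airflow - solar_gain


def sweep(heights, areas):
    """Exhaustively evaluate every (height, area) pair, keep the best."""
    best = None
    for h in heights:
        for a in areas:
            s = score(h, a)
            if best is None or s > best[0]:
                best = (s, h, a)
    return best


best = sweep(heights=[1.0, 2.0, 3.0], areas=[0.5, 1.0, 2.0])
```

In practice each candidate evaluation is a full simulation, so parametric tools typically drive such sweeps with evolutionary solvers rather than exhaustive enumeration; the structure of the loop is the same.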

Keywords: biomimicry, heat regulation systems, hot arid regions, parametric design, thermal comfort

Procedia PDF Downloads 294
175 Advertising Disability Index: A Content Analysis of Disability in Television Commercial Advertising from 2018

Authors: Joshua Loebner

Abstract:

Tectonic shifts within the advertising industry regularly present a deluge of data to be interpreted across a spectrum of key performance indicators, where live campaigns are dissected in order to pivot toward coherence across a digital diaspora. But within this amalgam of analytics, validation, and creative campaign manipulation, where do diversity and disability inclusion fit in? In 2018, several major brands were able to answer this question definitively and directly by incorporating people with disabilities into advertisements. Disability inclusion, representation, and portrayals are documented annually across a number of different media, from film to primetime television, but ongoing studies centering on advertising have not been conducted. Symbols and semiotics in advertising often focus on a brand’s features and benefits, but this analysis of advertising and disability shows how, in 2018, creative campaigns and the disability community came together with the goal of continuing the momentum and sparking conversations. More brands are welcoming inclusion and sharing positive portrayals of intersectional diversity and disability. Within the analysis and surrounding scholarship, a multipoint analysis of each advertisement and a meta-interpretation of the research have been conducted to provide data, clarity, and contextualization of insights. This research presents an advertising disability index that can be monitored for trends and shifts in future studies and can provide further comparisons and contrasts of advertisements. An overview of the increasing buying power within the disability community, and of population changes among this group, anchors the significance and size of this minority in the US. Where possible, viewpoints from the creative teams and advertisers that developed the ads are brought into the research to further establish understanding, meaning, and individuals’ purposeful approaches toward disability inclusion. 
Finally, the conclusion and discussion present key takeaways to learn from the research and to build advocacy and action within both advertising scholarship and the profession. This study, developed into an advertising disability index, will answer questions of how people with disabilities are represented in each ad. In advertising that includes disability, there is a creative pendulum. At one extreme, among many other negative interpretations, people with disabilities are portrayed in a way that conveys pity, fosters ableism and discrimination, and suggests that people with disabilities are less than normal from a societal and cultural perspective. At the other extreme, people with disabilities are portrayed with a kind of undue inspiration (considered 'inspiration porn') or as superhuman (otherwise known as 'supercrip'), in ways that most people with disabilities could never achieve or do not wish to be seen as. While some ads reflect both extremes, others stood out for non-polarizing inclusion of people with disabilities. This content analysis explores television commercial advertisements to determine the presence of people with disabilities and any associated disability themes and/or concepts. Content analysis allows for measuring the presence and interpretation of disability portrayals in each ad.

Keywords: advertising, brand, disability, marketing

Procedia PDF Downloads 115
174 Naked Machismo: Uncovered Masculinity in an Israeli Home Design Campaign

Authors: Gilad Padva, Sigal Barak Brandes

Abstract:

This research centers on an unexpected Israeli advertising campaign for Elemento, a local furniture company, which eroticizes male nudity. The campaign comprises a series of printed ads depicting naked male models in effeminate positions, published in Haaretz, a small-circulation yet highly prestigious daily newspaper typically read by urban middle-upper-class left-wing Israelis. Apparently, this campaign embodies an alternative masculinity that challenges the prevalent machismo in Israeli society and advertising. Although some of the ads focus on young men in effeminate positions, they never expose their genitals and anuses, and their bodies are never permeable. The 2010s Elemento male models seemingly contrast with conventional representations of manhood in contemporary mainstream advertising. They display a somewhat inactive, passive and self-indulgent masculinity which involves 'conspicuous leisure'. In the process of commodity fetishism, the advertised furniture is emptied of the original meaning of its production, and then filled with new meanings in ways that both mystify the product and turn it into a fetish object. Yet our research critically reconsiders this sensational campaign as a sophisticated patriarchal parody that does not subvert but rather reconfirms and even fetishizes patriarchal premises; it parodies effeminacy rather than the prevalent (Israeli) machismo. Following Pierre Bourdieu's politics of cultural taste, our research reconsiders and criticizes the male models' domesticated masculinity in a fantasized, cosmopolitan, hedonistic habitus. Notwithstanding, we suggest that the Elemento campaign, despite its conformity, does question some Israeli and global axioms about gender roles, corporeal ideologies, idealized bodies, and domesticated phalluses and anuses. 
Although the naked truth is uncovered by this campaign, it does erect a vibrant discussion of contemporary masculinities and their exploitation in current mass consumption.

Keywords: male body, campaign, advertising, gender studies, men's studies, Israeli culture, masculinity, parody, effeminacy

Procedia PDF Downloads 211
173 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the walls of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. It has been shown experimentally in many papers that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are assumed to be rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. In order to improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. On the assumption that the behavior of the granular particles is the same in each divided area of the damper container, the contact force of the primary system with all particles can be taken to be the product of the number of divided areas and the contact force of the primary system with the granular materials in one divided area. This makes it possible to considerably reduce the calculation time. 
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and the particle material influence the damper performance.
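The speed-up rests on the symmetry assumption stated in the abstract: if particle behavior is identical in every divided area, the wall force only needs to be computed for one area and multiplied by the number of areas. A minimal sketch follows, using a standard linear spring-dashpot contact model; the stiffness and damping values are arbitrary illustrative choices, not the paper's parameters.

```python
def contact_force(overlap, rel_velocity, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force (zero when separated).

    overlap: penetration depth between particle and wall [m]
    rel_velocity: normal relative velocity at the contact [m/s]
    """
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_velocity


def total_wall_force(per_area_particles, n_divisions, overlap, rel_velocity):
    """Wall force under the symmetry assumption: compute the contact
    force for the particles in ONE divided area, then multiply by the
    number of divided areas instead of looping over all particles."""
    per_area = sum(contact_force(overlap, rel_velocity)
                   for _ in range(per_area_particles))
    return n_divisions * per_area


# Toy case: 3 particles per area, 4 divided areas, identical contacts.
f = total_wall_force(per_area_particles=3, n_divisions=4,
                     overlap=1e-3, rel_velocity=0.1)
```

The cost of the wall-force evaluation thus scales with the particle count in one area rather than the whole damper, which is the source of the reported reduction in calculation time.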

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 453
172 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant to the discourse around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisitional process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially its processing component, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core, in academic contexts. Participants were asked to rank the options in order of how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants’ syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and preference for formulaic expressions in the ranking task, while no significant correlation was found for PSTM or syntactic proficiency. The correlational analysis found that WM, PSTM, and the two proficiency test scores are significant covariates. 
However, WM and PSTM have different predictive values for participants’ preference for formulaic language. Both the storage and processing components of WM are significantly correlated with the preference for formulaic expressions, while PSTM is not. These findings favor a role for interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to, and processing of, the input beyond simple retention play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrase with the lexical core of the formula over the ones without the lexical core, attesting to learners’ awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by learners.
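The correlational step in such a design reduces to computing a Pearson coefficient between memory scores and preference measures. A minimal sketch follows; the score lists are fabricated placeholders for illustration only (the study's actual data are not reproduced here), and the regression with covariates described in the abstract would extend this with additional predictors.

```python
from math import sqrt


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical scores: WM processing span vs. mean preference rank for
# formulaic options. These illustrative values are perfectly linearly
# related, so r comes out at 1.0.
wm = [3, 5, 4, 6, 7, 5]
pref = [2, 4, 3, 5, 6, 4]
r = pearson(wm, pref)
```

A full analysis would also report significance (e.g. via a t-test on r) and partial out the covarying proficiency scores, as the abstract's regression does.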

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 62
171 Response Regimes and Vibration Mitigation in Equivalent Mechanical Model of Strongly Nonlinear Liquid Sloshing

Authors: Maor Farid, Oleg Gendelman

Abstract:

An equivalent mechanical model of liquid sloshing in a partially filled cylindrical vessel is treated in the cases of free oscillations and of horizontal base excitation. The model is designed to cover both the linear and the essentially nonlinear sloshing regimes. The latter fluid behaviour may involve hydraulic impacts interacting with the inner walls of the tank. These impulsive interactions are often modeled by high-power potential and dissipation functions. For the sake of analytical description, we use the traditional approach of modeling the impacts with a velocity-dependent restitution coefficient. This modelling is similar to the vibro-impact nonlinear energy sink (VI NES), which was recently explored for its vibration mitigation performance and nonlinear response regimes. Steady-state periodic regimes and chaotic strongly modulated responses (CSMR) are detected. These dynamical regimes are described by the system's slow motion on the slow invariant manifold (SIM), and there is good agreement between the analytical results and numerical simulations. Subsequently, the finite element (FE) method is used to determine and verify the model parameters and to identify the dominant dynamical regimes, natural modes and frequencies. The tank failure modes and critical locations are identified, and a mathematical relation is found between the degrees-of-freedom (DOF) motion and the mechanical stress in the tank's critical section. This is a first attempt to take large-amplitude nonlinear sloshing and tank structural elasticity into consideration for design, regulation definition and resistance analysis purposes. The contributions of both linear (tuned mass damper, TMD) and nonlinear (nonlinear energy sink, NES) passive energy absorbers to overall system mitigation are also examined, in terms of both stress reduction and time for vibration decay.
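The velocity-dependent restitution treatment of hydraulic impacts can be sketched as a simple impact map: at each wall collision, the sloshing-mass velocity reverses and is scaled by a coefficient that depends on the impact speed. The functional form and coefficients below are illustrative assumptions, not the law used in the paper.

```python
def restitution(v_impact, e0=0.8, alpha=0.1):
    """Velocity-dependent restitution coefficient (illustrative form):
    more energy is dissipated at higher impact speeds."""
    return e0 / (1.0 + alpha * abs(v_impact))


def impact(v_in):
    """Instantaneous impact map: the velocity normal to the wall is
    reversed and scaled by the restitution coefficient."""
    return -restitution(v_in) * v_in


# Successive wall impacts of the equivalent sloshing mass: the speed
# decays at each collision, modeling energy lost to the hydraulic impact.
v = 2.0
for _ in range(3):
    v = impact(v)
```

Between impacts the equivalent mass follows the smooth (e.g. pendulum-like) dynamics of the model; the map above supplies only the impulsive boundary interaction, exactly as in VI NES analyses.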

Keywords: nonlinear energy sink (NES), reduced-order modelling, liquid sloshing, vibration mitigation, vibro-impact dynamics

Procedia PDF Downloads 145
170 Successful Rehabilitation of Recalcitrant Knee Pain Due to Anterior Cruciate Ligament Injury Masked by Extensive Skin Graft: A Case Report

Authors: Geum Yeon Sim, Tyler Pigott, Julio Vasquez

Abstract:

A 38-year-old obese female with no apparent past medical history presented with left knee pain. Six months earlier, she had sustained a left knee dislocation in a motor vehicle accident, managed with a skin graft over the left lower extremity without any reconstructive surgery. She developed persistent pain and stiffness in her left knee that worsened with walking and stair climbing. Examination revealed a healed, extensive skin graft over the left lower extremity, including the left knee. Palpation showed moderate tenderness along the superior border of the patella, exquisite tenderness over the MCL, and mild tenderness over the tibial tuberosity. Sensation, reflexes, and strength in her lower extremities were normal. Active and passive range of motion of the left knee was limited in flexion, and instability was noted on valgus stress testing. Left knee magnetic resonance imaging showed a high-grade (grade 2-3) injury of the proximal superficial fibers of the MCL, diffuse thickening and signal abnormality of the cruciate ligaments, and edema-like subchondral marrow signal change in the anterolateral aspect of the weight-bearing surface of the lateral femoral condyle. There was also extensive scarring and edema of the skin, subcutaneous soft tissues, and musculature surrounding the knee. The patient was managed with left knee immobilization for five months, which was complicated by limited knee flexion. Physical therapy consisting of quadriceps, hamstring, and gastrocnemius stretching and strengthening, range of motion exercises, scar/soft tissue mobilization, and gait training produced marked improvement in pain and range of motion. The patient experienced a further reduction in pain and an improvement in function with home exercises consisting of continued strengthening and stretching.

Keywords: ligamentous injury, trauma, rehabilitation, knee pain

Procedia PDF Downloads 108
169 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical, distinct treatment strategies for each have been devised. The paper compares and contrasts the two conditions; their apparent similarities are found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others that serves as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identifies the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia.
Through presentations of clinical examples, the treatment of autists of the Asperger type is revealed to address the autist’s extreme social aversion, which also precludes the experience of empathy. Autists will be revealed as forming social attachments, but without the capacity to interact with mutual concern. Empathy will be shown to be teachable; as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids will be shown to revolve around joining empathically with the schizoid’s apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kind of clues to behavioral patterns that can be related to genetic, epigenetic, and neurobiological measures. But as these clinical examples will attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 114
168 Input and Interaction as Training for Cognitive Learning: Variation Sets Influence the Sudden Acquisition of Periphrastic estar 'to be' + verb + -ndo*

Authors: Mary Rosa Espinosa-Ochoa

Abstract:

Some constructions appear suddenly in children’s speech and are productive from the beginning. These constructions are supported by others, previously acquired, with which they share semantic and pragmatic features. Thus, for example, the acquisition of the passive voice in German is supported by other constructions with which it shares the lexical verb sein (“to be”). This also occurs in Spanish, in the acquisition of the progressive aspectual periphrasis estar (“to be”) + verb root + -ndo (present participle), supported by locative constructions acquired earlier with the same verb. The periphrasis shares with the locative constructions not only the lexical verb estar, but also pragmatic relations. Both constructions can be used to answer the question ¿Dónde está? (“Where is he/she/it?”), whose answer could be either Está aquí (“He/she/it is here”) or Se está bañando (“He/she/it is taking a bath”). This study is a corpus-based analysis of two children (1;08-2;08) and the input directed to them: it proposes that the pragmatic and semantic support from previously-acquired constructions comes from the input, during interaction with others. This hypothesis is based on analysis of constructions with estar, whose use to express temporal change (which differentiates it from its counterpart ser [“to be”]) occurs in variation sets, similar to those described by Küntay and Slobin (2002), that allow the child to perceive the change of place experienced by the nouns that function as its grammatical subject. For example, at different points during a bath, the mother says: El jabón está aquí (“The soap is here”; beginning of bath); five minutes later, the soap has moved, and the mother says el jabón está ahí (“the soap is there”); the soap moves again later on and she says: el jabón está abajo de ti (“the soap is under you”). “The soap” is the grammatical subject of all of these utterances.
The Spanish verb + -ndo is a progressive phase aspect encoder of a dynamic state that generates a token reading. The verb + -ndo is also combined with the verb estar to form the progressive periphrasis. It is proposed here that the phases experienced in interaction with the adult, in events related to the verb estar, allow a child to generate this dynamicity and token reading of the verb + -ndo. In this way, children begin to produce the periphrasis suddenly and productively, even though neither the periphrasis nor the verb + -ndo itself is frequent in adult speech.

Keywords: child language acquisition, input, variation sets, Spanish language

Procedia PDF Downloads 149
167 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre

Authors: L. Nathaniel-Wurie

Abstract:

The Sellick manoeuvre, also known as the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure against the cricoid cartilage, directing posterior force so that the oesophagus is compressed against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management inside and outside of the hospital. To the authors' knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, how can one surmise that appropriate, accurate, or precise amounts of force are being used routinely? Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery in trained professionals was tested, and outcomes were contrasted against a novice control and a novice study group. Twenty operating department practitioners were tested (with a mean of 5.3 years of experience performing CF) and contrasted with 40 novice students randomised into one of two arms. Arm A had the procedure explained and demonstrated, then performed CF with the corresponding force measured three times. Arm B followed the same process as Arm A, but before being tested had 10 and 30 Newtons applied to their hands to build an intuitive understanding of what the required force equated to; they were then asked to apply the equivalent force against a visible force meter and hold it for 20 seconds, allowing direct visualisation and correction of any over- or underestimation. Arm B were then asked to perform the manoeuvre, with the force generated measured three times.
This study shows that there is a wide distribution of force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre shows improved accuracy, precision, and homogeneity within the group when compared to novices, and even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may correlate with better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.

Keywords: airway, cricoid, medical education, sellick

Procedia PDF Downloads 79
166 Remote Sensing Reversion of Water Depths and Water Management for Waterbird Habitats: A Case Study on the Stopover Site of Siberian Cranes at Momoge, China

Authors: Chunyue Liu, Hongxing Jiang

Abstract:

Traditional water depth surveys of wetland habitats used by waterbirds demand intensive labor, time, and money. Optical remote sensing imagery based on passive multispectral scanner data has been widely employed to estimate water depth. This paper presents an innovative method for developing a water depth model based on the characteristics of the visible and thermal infrared spectra of a Landsat ETM+ image, combined with 441 field water depth measurements at the Etoupao shallow wetland. The wetland is located in the Momoge National Nature Reserve of Northeast China, which holds the largest stopover habitat of the globally, critically endangered Siberian Crane along its eastern flyway. The cranes mainly feed on the tubers of emergent aquatic plants such as Scirpus planiculmis and S. nipponicus. Effective water control is a critical step for maintaining tuber production and food availability for this crane. The model, employing a multi-band approach, can effectively simulate water depth for this shallow wetland. The model parameters NDVI and GREEN indicated that vegetation growth and coverage, which affect the reflectance from the water column, are uneven. Combining the modeled depths with the field-observed water level on the date of image acquisition, a digital elevation model (DEM) of the underwater terrain was generated. The wetland area and water volume at different water levels were then calculated from the DEM using the Area and Volume Statistics function under the 3D Analyst of ArcGIS 10.0. The findings provide good references for effectively monitoring changes in water level and water demand, developing a practical plan for water level regulation and water management, and creating the best foraging habitats for the cranes. The methods here can be adopted for bottom topography simulation and water management in waterbird habitats, especially in shallow wetlands.
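The multi-band depth reversion and the DEM-based volume statistics described above can be sketched as follows. The linear model form, the coefficients, and the band handling are assumptions for illustration only, not the authors' actual regression; the paper's model also uses the thermal infrared band, which is omitted here:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def fit_depth_model(green, ndvi_vals, depths):
    """Least-squares fit of depth = a*GREEN + b*NDVI + c against field samples."""
    X = np.column_stack([green, ndvi_vals, np.ones_like(green)])
    coeffs, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coeffs

def predict_depth(coeffs, green, ndvi_vals):
    """Apply the fitted multi-band model to per-pixel band values."""
    a, b, c = coeffs
    return a * green + b * ndvi_vals + c

def water_volume(dem, water_level, cell_area):
    """Water volume over a bed-elevation DEM at a given water level,
    analogous to the Area and Volume Statistics tool in ArcGIS 3D Analyst."""
    depth = np.clip(water_level - dem, 0.0, None)
    return float(depth.sum() * cell_area)
```

Subtracting the predicted per-pixel depth from the water level observed on the acquisition date yields a bed-elevation DEM, which `water_volume` can then evaluate at any candidate regulation level.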

Keywords: remote sensing, water depth reversion, shallow wetland habitat management, siberian crane

Procedia PDF Downloads 252
165 Effect of Using PCMs and Transparency Ratios on Energy Efficiency and Thermal Performance of Buildings in Hot Climatic Regions: A Simulation-Based Evaluation

Authors: Eda K. Murathan, Gulten Manioglu

Abstract:

In the building design process, reducing heating and cooling energy consumption according to the climatic conditions of the building's region is an important consideration for providing thermal comfort in the indoor environment. Applying a phase-change material (PCM) on the surface of a building envelope is a new approach to controlling heat transfer through the envelope over the year. The transparency ratio of the window also determines the amount of solar radiation gained in the space, and thus thermal comfort and energy expenditure. In this study, a simulation-based evaluation was carried out using EnergyPlus to determine the effect of coupling PCM and transparency ratio in the building envelope. A three-storey building with a 30 m x 30 m floor area and a 10 m x 10 m courtyard is taken as an example of the courtyard building model frequently seen in the traditional architecture of hot climatic regions. Eight zones (10 m x 10 m), each with two exterior façades oriented in different directions, were obtained on each floor. The percentage of transparent components on the PCM-applied surface was increased at each step (30%, 40%, 50%). For every differently oriented zone, annual heating and cooling energy consumption and thermal comfort based on the Fanger method were calculated. All calculations were made for the zones of the intermediate floor of the building. The study was carried out for Diyarbakır province, representing the hot-dry climate region, and Antalya, representing the hot-humid climate region. The increase in the transparency ratio led to a decrease in heating energy consumption but an increase in cooling energy consumption for both provinces. When PCM was applied to all developed options, heating and cooling energy consumption decreased in both Antalya (by 6.06%-19.78% and 1%-3.74%, respectively) and Diyarbakır (by 2.79%-3.43% and 2.32%-4.64%, respectively).
When the considered building is evaluated under passive conditions for 21 July, which represents the hottest day of the year, the user feels comfortable between 11 pm and 10 am with the effect of night ventilation in both provinces.
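The Fanger comfort assessment used above maps air temperature, radiant temperature, air speed, humidity, metabolic rate, and clothing insulation to a Predicted Mean Vote (PMV). A minimal sketch of the standard ISO 7730 PMV calculation follows; this illustrates the underlying method only, and is not the authors' EnergyPlus model or its exact comfort implementation:

```python
import math

def pmv(ta, tr, vel, rh, met=1.2, clo=0.5, wme=0.0):
    """Fanger's Predicted Mean Vote (ISO 7730).
    ta/tr: air / mean radiant temperature (C), vel: air speed (m/s),
    rh: relative humidity (%), met/clo: metabolic rate (met), clothing (clo)."""
    pa = rh * 10.0 * math.exp(16.6536 - 4030.183 / (ta + 235.0))  # vapour pressure, Pa
    icl = 0.155 * clo              # clothing insulation, m2K/W
    m = met * 58.15                # metabolic rate, W/m2
    mw = m - wme * 58.15           # internal heat production
    fcl = 1.05 + 0.645 * icl if icl > 0.078 else 1.0 + 1.29 * icl
    hcf = 12.1 * math.sqrt(vel)    # forced convection coefficient
    taa, tra = ta + 273.0, tr + 273.0
    tcla = taa + (35.5 - ta) / (3.5 * icl + 0.1)  # initial clothing temp guess

    p1 = icl * fcl
    p2 = p1 * 3.96
    p3 = p1 * 100.0
    p4 = p1 * taa
    p5 = 308.7 - 0.028 * mw + p2 * (tra / 100.0) ** 4
    xn, xf = tcla / 100.0, tcla / 50.0
    hc = hcf
    for _ in range(150):           # iterate clothing surface temperature
        xf = (xf + xn) / 2.0
        hcn = 2.38 * abs(100.0 * xf - taa) ** 0.25  # natural convection
        hc = max(hcf, hcn)
        xn = (p5 + p4 * hc - p2 * xf ** 4) / (100.0 + p3 * hc)
        if abs(xn - xf) <= 0.00015:
            break
    tcl = 100.0 * xn - 273.0

    hl1 = 3.05e-3 * (5733.0 - 6.99 * mw - pa)          # skin diffusion loss
    hl2 = 0.42 * (mw - 58.15) if mw > 58.15 else 0.0   # sweat evaporation
    hl3 = 1.7e-5 * m * (5867.0 - pa)                   # latent respiration
    hl4 = 0.0014 * m * (34.0 - ta)                     # dry respiration
    hl5 = 3.96 * fcl * (xn ** 4 - (tra / 100.0) ** 4)  # radiative loss
    hl6 = fcl * hc * (tcl - ta)                        # convective loss
    ts = 0.303 * math.exp(-0.036 * m) + 0.028
    return ts * (mw - hl1 - hl2 - hl3 - hl4 - hl5 - hl6)
```

A PMV near zero indicates thermal neutrality; values toward -3 or +3 indicate increasingly cold or hot sensations, which is how a "comfortable" night-ventilation window such as 11 pm to 10 am can be identified from hourly simulation output.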

Keywords: building envelope, heating and cooling energy consumptions, phase change material, transparency ratio

Procedia PDF Downloads 176
164 Lifelong Learning in Applied Fields (LLAF) Tempus Funded Project: A Case Study of Problem-Based Learning

Authors: Nirit Raichel, Dorit Alt

Abstract:

Although university teaching is claimed to have a special task of supporting students in adopting ways of thinking and producing new knowledge anchored in scientific inquiry practices, it is argued that students' habits of learning are still overwhelmingly skewed toward passive acquisition of knowledge from authority sources rather than from collaborative inquiry activities. In order to overcome this critical inadequacy between current educational goals and instructional methods, the LLAF consortium aims to develop updated instructional practices that put a premium on adaptability to the emerging requirements of present society. LLAF has created a practical guide for teachers containing updated pedagogical strategies based on the constructivist approach to learning, arranged along Delors’ four theoretical ‘pillars’ of education: learning to know, learning to do, learning to live together, and learning to be. This presentation will be limited to problem-based learning (PBL), a strategy introduced in the second pillar. PBL leads not only to the acquisition of technical skills but also allows the development of skills like problem analysis and solving, critical thinking, cooperation and teamwork, decision-making, and self-regulation that can be transferred to other contexts. This educational strategy will be exemplified by a case study conducted in the pre-piloting stage of the project. The case describes a three-fold process implemented in a postgraduate course for in-service teachers, including: (1) learning about PBL, (2) implementing PBL in the participants' classes, and (3) qualitatively assessing the contributions of PBL to students' outcomes. An example will be given regarding the ways in which PBL was applied and assessed in civic education for high-school students. Two 9th-grade classes participated in the study; both included several students with learning disabilities.
PBL was applied in only one class, whereas traditional instruction was used in the other. Results showed a robust contribution of PBL to students' affective and cognitive outcomes, as reflected in their motivation to engage in learning activities and to further explore the subject. However, students with learning disabilities were less favorably disposed toward this "active" and "annoying" environment. Implications of these findings for the LLAF project will be discussed.

Keywords: problem-based learning, higher education, pedagogical strategies

Procedia PDF Downloads 334