Search results for: maximum force
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6051

411 Antimicrobial Activity of 2-Nitro-1-Propanol and Lauric Acid against Gram-Positive Bacteria

Authors: Robin Anderson, Elizabeth Latham, David Nisbet

Abstract:

Propagation and dissemination of antimicrobial-resistant and pathogenic microbes from spoiled silages and composts represent a serious public health threat to humans and animals. In the present study, the antimicrobial activity of the short-chain nitro-compound 2-nitro-1-propanol (9 mM), as well as of the medium-chain fatty acid lauric acid and its glycerol monoester monolaurin (at 25 and 17 µmol/mL, respectively), was investigated against select pathogenic and multidrug-resistant Gram-positive bacteria common to spoiled silages and composts. In an initial study, we found that growth rates of a multiresistant Enterococcus faecalis (expressing resistance against erythromycin, quinupristin/dalfopristin and tetracycline) and Staphylococcus aureus strain 12600 (expressing resistance against erythromycin, linezolid, penicillin, quinupristin/dalfopristin and vancomycin) were slowed by more than 78% (P < 0.05) by 2-nitro-1-propanol treatment during culture (n = 3/treatment) in anaerobically prepared ½-strength Brain Heart Infusion broth at 37 °C when compared to untreated controls (0.332 ± 0.04 and 0.108 ± 0.03 h⁻¹, respectively). The growth rate of 2-nitro-1-propanol-treated Listeria monocytogenes was likewise decreased, by 96% (P < 0.05), when compared to untreated controls cultured similarly (0.171 ± 0.01 h⁻¹). Maximum optical densities measured at 600 nm were lower (P < 0.05) in 2-nitro-1-propanol-treated cultures (0.053 ± 0.01, 0.205 ± 0.02 and 0.041 ± 0.01) than in untreated controls (0.483 ± 0.02, 0.523 ± 0.01 and 0.427 ± 0.01) for E. faecalis, S. aureus and L. monocytogenes, respectively. When tested against mixed microbial populations during anaerobic 24 h incubation of spoiled silage, no significant effects of treatment with 1 mg 2-nitro-1-propanol/g (approximately 9.5 µmol/g) or 5 mg lauric acid/g (approximately 25 µmol/g) on populations of wildtype Enterococcus and Listeria were observed.
Mixed populations treated with 5 mg monolaurin/g (approximately 17 µmol/g) had lower (P < 0.05) viable cell counts of wildtype enterococci than untreated controls after 6 h incubation (2.87 ± 1.03 versus 5.20 ± 0.25 log10 colony forming units/g, respectively), but otherwise no significant effects of monolaurin were observed. These results reveal differential susceptibility of multidrug-resistant enterococci and staphylococci, as well as L. monocytogenes, to the inhibitory activity of 2-nitro-1-propanol and of the medium-chain fatty acid lauric acid and its glycerol monoester monolaurin. Ultimately, these results may lead to improved treatment technologies to preserve the microbiological safety of silages and composts.
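The specific growth rates quoted above (in h⁻¹) are conventionally derived from optical-density readings taken during exponential growth. A minimal sketch of that calculation (illustrative only; the function names are ours, not the authors' analysis code):

```python
import math

def specific_growth_rate(od_start, od_end, hours):
    """Specific growth rate mu (h^-1) over an exponential-phase interval:
    mu = ln(OD_end / OD_start) / elapsed time."""
    return math.log(od_end / od_start) / hours

def percent_inhibition(mu_control, mu_treated):
    """Percent reduction in growth rate relative to the untreated control."""
    return 100.0 * (1.0 - mu_treated / mu_control)

# A culture doubling its OD600 in one hour grows at ln(2) ~ 0.693 h^-1
mu = specific_growth_rate(0.10, 0.20, 1.0)
```

For example, against the control rate of 0.332 h⁻¹ reported for E. faecalis, a treated rate of about 0.073 h⁻¹ would correspond to roughly 78% inhibition.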

Keywords: 2-nitro-1-propanol, lauric acid, monolaurin, Gram-positive bacteria

Procedia PDF Downloads 87
410 Assessment of Physical Activity Patterns in Patients with Cardiopulmonary Diseases

Authors: Ledi Neçaj

Abstract:

Objectives: The aims of this paper are (1) to objectively describe physical activity patterns across three chronic cardiopulmonary conditions, and (2) to study the association of physical activity dimensions with disease severity, self-reported physical and emotional functioning, and exercise performance. Material and Methods: This is a cross-sectional study of patients in their home environment. Patients with cardiopulmonary diseases comprised: chronic obstructive pulmonary disease (COPD) (n=63), heart failure (n=60), and patients with an implantable cardioverter defibrillator (n=60). Main outcome measures: Seven ambulatory physical activity dimensions (total steps, percentage of time active, percentage of time ambulating at low, medium, and high intensity, maximum cadence over 30 continuous minutes, and peak performance) were measured with an accelerometer. Results: Subjects with COPD had the lowest amount of ambulatory physical activity compared with subjects with heart failure and cardiac dysrhythmias (all 7 activity dimensions, P<.05); total step counts were 5319 versus 7464 versus 9570, respectively. Six-minute walk distance was correlated (r=.44-.65, P<.01) with all physical activity dimensions in the COPD sample, the strongest correlations being with total steps and peak performance. In subjects with cardiac impairment, maximal oxygen uptake had only small to moderate correlations with five of the physical activity dimensions (r=.22-.40, P<.05). In contrast, correlations between 6-minute walk distance and physical activity were higher (r=.48-.61, P<.01), albeit in a smaller sample comprising only patients with heart failure. For all three samples, self-reported physical and mental health functioning, age, body mass index, airflow obstruction, and ejection fraction had either very small or no significant correlations with physical activity.
Conclusions: Findings from this study provide a useful benchmark of physical activity patterns in individuals with cardiopulmonary diseases for comparison with future studies. All seven dimensions of ambulatory physical activity differed between subjects with COPD, heart failure, and cardiac dysrhythmias. Depending on the research or clinical goal, the use of one dimension, such as total steps, may be sufficient. Although physical activity correlated more highly with six-minute walk performance than with other variables, accelerometer-based physical activity monitoring provides unique, important information about real-world behavior in patients with cardiopulmonary disease not already captured by existing measures.
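The correlation figures reported above are plain Pearson coefficients between exercise-test scores and accelerometer dimensions. A small self-contained sketch of that computation (the subject data here are invented for illustration, not values from the study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical subjects: 6-minute walk distance (m) vs. total daily steps
walk_m = [280, 310, 350, 400, 455, 500]
steps = [3900, 5200, 5000, 6800, 8100, 9400]
r = pearson_r(walk_m, steps)
```

Values near 1 indicate that subjects who walk farther in six minutes also accumulate more daily steps, the kind of association the study reports for the COPD sample.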

Keywords: ambulatory physical activity, walking, monitoring, COPD, heart failure, implantable defibrillator, exercise performance

Procedia PDF Downloads 69
409 Bioefficacy of Ocimum sanctum on Reproductive Performance of Red Cotton Bug, Dysdercus koenigii (Heteroptera: Pyrrhocoridae)

Authors: Kamal Kumar Gupta, Sunil Kayesth

Abstract:

Dysdercus koenigii is a serious pest of cotton and other malvaceous crops. The present research aimed at an ecofriendly approach to pest management using plant extracts. The impact of Ocimum sanctum was studied on the reproductive performance of Dysdercus koenigii. The hexane extract of Ocimum leaves was prepared by the cold extraction method. Newly emerged fifth-instar nymphs were exposed to the extract at concentrations ranging from 0.1% to 0.00625% by the thin-film residual method for a period of 24 h. Reproductive fitness of the adults emerging from the treated nymphs was evaluated by assessing their courtship behaviour, oviposition behaviour, and fertility. The studies indicated that treatment of Dysdercus with the hexane extract of Ocimum altered their courtship behaviour. Consequently, the treated males exhibited less sexual activity, made fewer mounting attempts, took longer to mate, and showed a decreased percentage of successful matings. The females often rejected courting treated males by shaking the abdomen. Similarly, the treated females in many cases remained non-receptive to courting males. Premature termination of mating prior to insemination further decreased the mating success of the treated adults. The greatest abbreviation of courtship behaviour was observed in the experimental setup in which both the males and the females were treated. Only females that mated successfully were observed for the study of oviposition behaviour. The treated females laid fewer egg batches and fewer eggs over their life span. The eggs laid by these females were fertile, indicating insemination of the female; however, percent hatchability was lower than in controls. The effects of the hexane extract were dose dependent. Treatment with 0.1% and 0.05% extract altered courtship behaviour. Concentrations below 0.05% did not affect courtship behaviour but altered oviposition behaviour and fertility.
A significant reduction in fecundity and fertility was observed at concentrations as low as 0.00625%. GC-MS analysis of the extract revealed a plethora of phytochemicals, including juvenile hormone mimics and intermediates of juvenile hormone biosynthesis. Some of these compounds may therefore, individually or synergistically, impair the reproductive behaviour of Dysdercus. Alteration of courtship behaviour and suppression of fecundity and fertility with the help of plant extracts has wide potential for suppressing pest populations and for integrated pest management.

Keywords: courtship behaviour, Dysdercus koenigii, Ocimum sanctum, oviposition behaviour

Procedia PDF Downloads 238
408 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates

Authors: Pedro Llanos, Diego García

Abstract:

Spaceflight is considered the last frontier in terms of science, technology, and engineering. But it is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth's gravity and atmospheric conditions, humans face environmental stresses in spaceflight for which their physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures in order to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during an aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratio of the cardiovascular responses (pre-flight/in-flight/post-flight) was calculated for comparison of inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs.
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% premature ventricular contraction-type); some were considered triggered physiological responses to high-G/unusual-attitude training, and some were considered instrument artifacts. Variation events were observed in subjects during the +Gz and -Gz maneuvers, and these may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the breathing rate of the subjects, due in part to the varying levels of energy expenditure from the increased muscle work during these aerobatic maneuvers. Noteworthy was the high heterogeneity of physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture pathology underlying dangerous dysrhythmias relevant to suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
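The pre-/in-/post-flight comparison described above amounts to normalizing each subject's in-flight and recovery values to their own baseline. A minimal sketch of that normalization (our naming and example values, not the study's code):

```python
def response_ratios(pre, inflight, post):
    """Normalize in-flight and post-flight cardiovascular values (e.g. heart
    rate) to the pre-flight baseline so responses are comparable across
    subjects with different resting values."""
    return {"inflight/pre": inflight / pre, "post/pre": post / pre}

# Hypothetical subject: resting HR 60 bpm, 95 bpm during a +Gz maneuver,
# 66 bpm ten minutes after landing
ratios = response_ratios(60.0, 95.0, 66.0)
```

Ratios near 1.0 post-flight suggest recovery to baseline, while large in-flight ratios flag strong responses to a given maneuver regardless of a subject's resting rate.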

Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts

Procedia PDF Downloads 104
407 Assessment of Agricultural Intervention on Ecosystem Services in the Central-South Zone of Chile

Authors: Steven Hidalgo, Patricio Neumann

Abstract:

The growth of societies has increased the consumption of raw materials and food obtained from nature. This has affected the services that ecosystems offer to humans, mainly provisioning and regulating services. One of the indicators used to evaluate these services is Net Primary Productivity (NPP), understood as the energy stored in the form of biomass by primary organisms through photosynthesis and respiration. Variation of NPP over a defined area produces changes in the properties of terrestrial and aquatic ecosystems, altering factors such as biodiversity, nutrient cycling, carbon storage, and water quality. The analysis of NPP to evaluate variations in ecosystem services includes harvested NPP (understood as a provisioning service), the raw material from agricultural systems used by humans as a source of energy and food, and remaining NPP (expressed as a regulating service), the biomass that stays in ecosystems after harvesting, which is mainly related to factors such as biodiversity. Given that agriculture is a fundamental pillar of Chile's integral development, the purpose of this study is to evaluate provisioning and regulating ecosystem services in the agricultural sector, specifically in cereal production, in the communes of Chile's central-southern regions, through a conceptual framework based on quantifying the fraction of Human Appropriation of Net Primary Productivity (HANPP) and the fraction remaining in the ecosystems (remaining NPP). A total of 161 communes were analyzed in the regions of O'Higgins, Maule, Ñuble, Bio-Bío, La Araucanía, and Los Lagos, which are characterized by having the largest areas planted with cereals. The La Araucanía region was observed to produce the greatest amount of dry matter, understood as a provisioning service, with Victoria being the commune with the highest cereal production in the country.
In addition, the maximum HANPP value was found in the O'Higgins region, notably in the communes of Coltauco, Quinta de Tilcoco, Placilla, and Rengo. On the other hand, the communes of Futrono, Pinto, Lago Ranco, and Pemuco, whose cereal production was also substantial during the study, had the highest values of remaining NPP as a regulating service. Finally, an inverse correlation was observed between provisioning and regulating ecosystem services, i.e., the higher the cereal or dry-matter production in a defined area, the lower the net primary productivity remaining in the ecosystems. Building on this study, future research will focus on the evaluation of ecosystem services associated with other crops, such as forestry plantations, an activity that is an important part of the country's productive sector.
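The HANPP framework used above reduces to simple per-commune bookkeeping: the harvested share of total NPP proxies provisioning services, and what is left proxies regulation. A minimal sketch (function names and units are illustrative assumptions, not the study's methodology code):

```python
def hanpp_fraction(npp_harvested, npp_total):
    """Fraction of net primary productivity appropriated by humans (HANPP)."""
    return npp_harvested / npp_total

def npp_remaining(npp_harvested, npp_total):
    """NPP left in the ecosystem after harvest (regulating-service proxy)."""
    return npp_total - npp_harvested

# Hypothetical commune: 40 of 100 units of dry matter are harvested
frac = hanpp_fraction(40.0, 100.0)   # provisioning share
left = npp_remaining(40.0, 100.0)    # regulating-service proxy
```

The inverse relation the study reports falls directly out of this accounting: for a fixed total NPP, every unit harvested is a unit removed from the remaining stock.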

Keywords: provisioning services, regulating services, net primary productivity, agriculture

Procedia PDF Downloads 77
406 Widely Diversified Macroeconomies in the Super-Long Run Casts a Doubt on Path-Independent Equilibrium Growth Model

Authors: Ichiro Takahashi

Abstract:

One of the major assumptions of mainstream macroeconomics is the path independence of the capital stock. This paper challenges this assumption by employing an agent-based approach. The simulation results showed the existence of multiple "quasi-steady-state" equilibria of the capital stock, which casts serious doubt on the validity of the assumption. The finding would give a better understanding of many phenomena that involve hysteresis, including the causes of poverty. The "market-clearing view" has been widely shared among major schools of macroeconomics. On this view, the capital stock, the labor force, and technology determine the "full-employment" equilibrium growth path, and demand/supply shocks can move the economy away from the path only temporarily: the dichotomy between short-run business cycles and the long-run equilibrium path. The view thus implicitly assumes the long-run capital stock to be independent of how the economy has evolved. In contrast, "Old Keynesians" have recognized fluctuations in output as arising largely from fluctuations in real aggregate demand. It is then an interesting question whether an agent-based macroeconomic model, which is known to exhibit path dependence, can generate multiple full-employment equilibrium trajectories of the capital stock in the super-long run. If the answer is yes, the equilibrium level of the capital stock, an important supply-side factor, would no longer be independent of business cycle phenomena. This paper attempts to answer the above question by using the agent-based macroeconomic model developed by Takahashi and Okada (2010). The model serves this purpose well because it has neither population growth nor technological progress. The objective of the paper is twofold: (1) to explore the causes of long-term business cycles, and (2) to examine the super-long-run behavior of the capital stock of full-employment economies.
(1) The simulated behaviors of the key macroeconomic variables, such as output, employment, and real wages, showed widely diversified macroeconomies. They were often remarkably stable but exhibited both short-term and long-term fluctuations. The long-term fluctuations occur through two adjustments: the quantity adjustment and the relative-cost adjustment of the capital stock. The first is obvious and is assumed by many business cycle theorists. In the second, reduced aggregate demand lowers prices, which raises real wages, thereby decreasing the relative cost of capital with respect to labor. (2) The long-term business cycles/fluctuations were accompanied by hysteresis in real wages, interest rates, and investment. In particular, a sequence of simulation runs with a super-long simulation period generated a wide range of perfectly stable paths, many of which achieved full employment: all the macroeconomic trajectories, including capital stock, output, and employment, were perfectly horizontal over 100,000 periods. Moreover, the full-employment level of the capital stock was influenced by the history of unemployment, which was itself path-dependent. Thus, an experience of severe unemployment in the past kept the real wage low, which discouraged relatively costly investment in capital stock. Meanwhile, a history of good performance sometimes brought about a low capital stock due to a high interest rate that was consistent with strong investment.
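The hysteresis mechanism sketched above, where a past slump keeps real wages low and a low wage discourages relatively costly capital investment, can be caricatured in a few lines. This is a deliberately minimal toy illustration of path dependence, not the Takahashi and Okada (2010) agent-based model; all parameters and functional forms are arbitrary assumptions:

```python
def simulate(shock_periods, periods=500):
    """Toy path-dependent capital accumulation: a transient demand slump
    depresses the real wage quickly, the wage recovers only sluggishly, and
    a low wage discourages investment, so the terminal capital stock
    depends on the economy's history (hysteresis)."""
    capital, wage = 100.0, 1.0
    for t in range(periods):
        demand = 0.5 if t in shock_periods else 1.0
        # wages fall fast in a slump but recover slowly afterwards
        rate = 0.05 if demand < wage else 0.02
        wage += rate * (demand - wage)
        investment = 0.10 * wage * capital      # investment scales with the wage
        capital += investment - 0.10 * capital  # 10% depreciation per period
    return capital

calm = simulate(set())                    # no slump: capital holds steady
scarred = simulate(set(range(50, 100)))   # transient slump in periods 50-99
```

Two economies with identical parameters but different histories end at different capital stocks, which is the path-dependence property the paper argues against assuming away.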

Keywords: agent-based macroeconomic model, business cycle, hysteresis, stability

Procedia PDF Downloads 189
405 The Language of Science in Higher Education: Related Topics and Discussions

Authors: Gurjeet Singh, Harinder Singh

Abstract:

In this paper, we present "The Language of Science in Higher Education: Related Questions and Discussions". Linguists have researched and written in depth on the role of language in science. On this basis, it is clear that language is not just a medium or vehicle for communicating knowledge and ideas, nor merely a system of signs for encoding knowledge and ideas. In the process of reading and writing, everyone thinks deeply and struggles to understand concepts and make sense of them; language plays an important role in grasping concepts. In the context of such linguistic diversity, there is no straightforward and simple answer to the question of which language should be the language of advanced science and technology. Many important topics are related to this issue: engagement with practical versus deep theoretical questions; the languages used for the study of science and other subjects; whether the language issues of science can be considered separately from the development of science, capitalism, colonial history, and the worldview of the common person; and whether instruction should be multilingual rather than monolingual. The democratization of science and technology education in India is possible only by providing the maximum amount of reading and resource material in regional languages, which improves the chances of understanding the subject. As far as deepening understanding of the subject is concerned, we can shed light on it on the basis of two or three experiences. An attempt was made almost three decades ago to publish the famous sociological journal Economic and Political Weekly in Hindi. There were many obstacles in this work: original articles written in Hindi could not be found, so papers and articles from the English journal were translated into Hindi and brought out as a journal called Sancha. Equally important are the democratization of knowledge and the deepening of understanding of the subject.
However, the question remains that if higher education in science is conducted in Hindi or other languages, graduates may find it difficult to get jobs. In fact, since independence, English has been dominant in almost every field except literature. There are historical reasons for this, which cannot be reversed. As mentioned above, owing to colonial rule, even before independence English was established as the language of communication, the language of power and status, the language of higher education, the language of administration, and the language of scholarly discourse. After independence, attempts to make Hindi or Hindustani the national language of India were unsuccessful. Given this history and current reality, higher education should be multilingual or at least bilingual. The scope of translation should also be increased for those who select material for translation. Writing about science in regional languages and making knowledge from various international languages available in Indian languages are just as important as ensuring that everyone has opportunities to learn English.

Keywords: language, linguistics, literature, culture, ethnography, punjabi, gurmukhi, higher education

Procedia PDF Downloads 68
404 Diamond-Like Carbon-Based Structures as Functional Layers on Shape-Memory Alloy for Orthopedic Applications

Authors: Piotr Jablonski, Krzysztof Mars, Wiktor Niemiec, Agnieszka Kyziol, Marek Hebda, Halina Krawiec, Karol Kyziol

Abstract:

NiTi alloys, possessing unique mechanical properties such as pseudoelasticity and the shape memory effect (SME), are suitable for many applications, including implantology and biomedical devices. Additionally, these alloys have elastic moduli similar to those of human bone, which is very important in orthopedics. Unfortunately, the environment of physiological fluids in vivo causes unfavorable release of Ni ions, which in turn may lead to metallosis as well as allergic reactions and toxic effects in the body. For these reasons, the surface properties of NiTi alloys should be improved to increase corrosion resistance while preserving excellent biocompatibility. Prospective in this respect are layers based on DLC (Diamond-Like Carbon) structures, which are an attractive solution for many applications in implantology. These DLC coatings, usually obtained by PVD (Physical Vapour Deposition) and PA CVD (Plasma Activated Chemical Vapour Deposition) methods, can also be modified by doping with other elements such as silicon, nitrogen, oxygen, fluorine, titanium, and silver. These methods, in combination with a suitably designed layer structure, make it possible to co-determine the physicochemical and biological properties of the modified surfaces, and they provide specific physicochemical surface properties in a single technological process. In this work, layers based on DLC structures (incl. Si-DLC or Si/N-DLC) were proposed as a prospective and attractive approach to the surface functionalization of a shape memory alloy. Nitinol substrates were modified under plasma conditions using RF CVD (Radio Frequency Chemical Vapour Deposition). The influence of the plasma treatment on the useful properties of the modified substrates was determined after deposition of DLC layers doped with silicon and/or nitrogen atoms, as well as after pre-treatment alone in an O2/NH3 plasma atmosphere in the RF reactor.
The microstructure and topography of the modified surfaces were characterized using scanning electron microscopy (SEM) and atomic force microscopy (AFM). Furthermore, the atomic structure of the coatings was characterized by IR and Raman spectroscopy. The research also included the evaluation of surface wettability and surface energy, as well as the characterization of selected mechanical and biological properties of the layers. In addition, the corrosion properties of the alloys before and after modification in physiological saline were investigated. In order to determine the corrosion resistance of NiTi in Ringer's solution, potentiodynamic polarization curves (LSV, Linear Sweep Voltammetry) were recorded. Furthermore, the evolution of the corrosion potential versus immersion time of the NiTi alloy in Ringer's solution was measured. Based on all the research carried out, the usefulness of the proposed modifications of nitinol for medical applications was assessed. It was shown, inter alia, that the obtained Si-DLC layers on the surface of the NiTi alloy exhibit a characteristically complex microstructure and increased surface development, an important aspect in improving the osseointegration of an implant. Furthermore, the modified alloy exhibits biocompatibility, and the transfer of metal (Ni, Ti) into Ringer's solution is clearly limited.

Keywords: bioactive coatings, corrosion resistance, doped DLC structure, NiTi alloy, RF CVD

Procedia PDF Downloads 204
403 Lessons from Patients Expired due to Severe Head Injuries Treated in Intensive Care Unit of Lady Reading Hospital Peshawar

Authors: Mumtaz Ali, Hamzullah Khan, Khalid Khanzada, Shahid Ayub, Aurangzeb Wazir

Abstract:

Objective: To analyze, from different perspectives, the deaths of patients treated in the neurosurgical ICU for severe head injuries, and to use the resulting data to help improve health care delivery to this group of ICU patients. Study Design: A descriptive study based on retrospective analysis of patients presenting to the neurosurgical ICU of Lady Reading Hospital, Peshawar. Study Duration: 1st January 2009 to 31st December 2009. Material and Methods: The clinical records of all patients who presented with clinical, radiological, and surgical features of severe head injury and who expired in the neurosurgical ICU were collected. A separate proforma recorded age, sex, time of arrival and death, cause of head injury, radiological features, clinical parameters, and the surgical or non-surgical treatment given. The average duration of stay and the demographic and domiciliary representation of these patients were noted. The records were analyzed accordingly for discussion and recommendations. Results: Of the total 112 (n=112) patients who expired in one year in the neurosurgical ICU, young adults made up the majority, 64 (57.14%), followed by children, 34 (30.35%), and then the elderly, 10 (8.92%). Road traffic accidents were the major cause of presentation, 75 (66.96%), followed by history of fall, 23 (20.53%), and then firearm injuries, 13 (11.60%). The predominant CT scan features on presentation were cerebral edema and midline shift (diffuse neuronal injury), 46 (41.07%), followed by cerebral contusions, 28 (25%). Correctable surgical causes were present in only 18 patients (16.07%), and the majority, 94 (83.92%), were given conservative management. Of the 69 (n=69) patients in whom the CT scan was repeated, 62 (89.85%) showed worsening of the initial abnormalities, while in 7 cases (10.14%) the features were static.
Among the non-surgically managed cases, both ventilatory therapy in 7 (6.25%) and tracheostomy in 39 (34.82%) failed to change the outcome. The maximum stay in the neuro ICU leading up to death was 48 hours in 35 (31.25%) cases, followed by 24 hours in 31 (27.67%) cases, one week in 24 (21.42%), and 72 hours in 16 (14.28%). Only 6 (5.35%) patients survived more than a week. Patients were received from almost all the districts of NWFP except the Hazara division; there were some Afghan refugees as well. Conclusion: Mortality following head injuries is alarmingly high despite repeated claims of professional and administrative improvement. Even the ICU could not change the outcome in line with the desired aims and objectives in the present setup. A rethinking is needed at both the individual and institutional levels among the concerned quarters, with a clear aim and on more scientific grounds. Only then can the desired results be achieved.

Keywords: Glasgow Coma Scale, pediatrics, geriatrics, Peshawar

Procedia PDF Downloads 327
402 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because the heterogeneity of individual living cells increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and monitoring of their variation mainly fall into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example, the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. This work introduces a new technique based on a 20-nm-thick nanopillar array that supports cells and keeps them at the ideal recognition distance from redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false-positive signal arising from the downward pressure exerted by all (including non-target) cells pushing on the aptamers, but also to stabilize the aptamers in the ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 µL of cell suspension) was estimated.
The nanosupported cell technology using redox-labeled aptasensors has since been pushed further and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling to millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with a LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at a large scale. Here, the introduced nanopillar array technology combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM) is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 80
401 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro

Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich

Abstract:

Tissue-engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcome the known limitations of mechanical or biological prostheses. But the mechanical stress in the high-pressure system of the human circulation is a severe challenge for the delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. Therefore, this study investigates whether long-term testing in full blood provides more meaningful results regarding the degradation behavior than conventional tests in simulated body fluid (SBF) or phosphate-buffered saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximally physiological degradation environment was established, utilizing a test set-up with porcine full blood. The set-up consists of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO2, pCO2, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare conventional degradation media with the novel full-blood setting. The polymer's degradation is quantified by SEM image analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded more slowly in full blood than in SBF and PBS. The uptake of water is more pronounced in the full-blood group. Also, PCL preserved its mechanical integrity longer when degraded in full blood. Protein absorption increased during the degradation process. Red blood cells, platelets, and their aggregates adhered to the PCL. 
Presumably, the degradation led to a more hydrophilic polymeric surface, which promoted protein adsorption and blood cell adhesion. Testing degradable implants in full blood allows for the development of more reliable scaffold materials in the future. Material tests in small and large animal trials can thereby be focused on candidates that have proven to function well in an in-vivo-like setting.

Keywords: electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue-engineered aortic heart valve

Procedia PDF Downloads 128
400 Crisis In/Out, Emergent, and Adaptive Urban Organisms

Authors: Alessandra Swiny, Michalis Georgiou, Yiorgos Hadjichristou

Abstract:

This paper focuses on the questions raised through the work of Unit 5: ‘In/Out of Crisis, Emergent and Adaptive’, an architectural research-based studio at the University of Nicosia. It focuses on sustainable architectural and urban explorations tackling the ever-growing crises in their various types, phases, and locations. ‘Great crisis situations’ are seen as ‘great chances’ that trigger investigations for further development and evolution of the built environment in an ultimately sustainable approach. The crisis is taken as an opportunity to rethink urban and architectural directions as new forces for invention, leading to emergent and adaptive built environments. Unit 5’s identity and environment encourage the students to respond optimistically, alternatively, and creatively towards the current global crisis. Mark Wigley’s notion that “crises are ultimately productive” and that “they force invention” intrigued and defined the premises of the Unit. ‘Weather and nature are coauthors of the built environment,’ Jonathan Hill states in his ‘weather architecture’ discourse. The weather is constantly changing, and new environments, the ‘subnatures’ derived from human activities, are created, as David Gissen explains. This set of premises triggered innovative responses by the Unit’s students. They thoroughly investigated the various kinds of crisis and their causes in relation to their various types of terrains. The tools used for research and investigation were chosen in contradictory pairs to generate further crisis situations: the re-used/salvaged competing with the new, the handmade rivalling fabrication, the analogue juxtaposed with the digital. Students were asked to delve into state-of-the-art technologies in order to propose sustainable, emergent, and adaptive architectures and urbanities, always keeping in mind that the human and social aspects of the community should be the core of the investigation. 
The resulting unprecedented spatial conditions and atmospheres of the emergent new ways of living are deemed to be the ultimate aim of the investigation. Students explored a variety of sites and crisis conditions, such as the vague terrain of the Green Line in Nicosia, the lost footprints of the sinking Venice, the endangered Australian coral reefs, the earthquake-torn town of Crevalcore, and the decaying concrete urbanscape of Athens. Among other projects, the ‘plume project’ proposes a cloud-like, floating, almost dream-like living environment with unprecedented spatial conditions for the inhabitants of the coal-mine town of Centralia, USA, enabling them not just to survive but even to prosper in this unbearable environment by processing the captured plumes of smoke and heat. Existing water wells inspire inverted vertical structures creating a new underground living network that protects nomads from catastrophic sandstorms in Araouane, Mali. “Inverted Utopia: Lost Things in the Sand” weaves a series of tea-houses and a library holding lost artifacts and transcripts into a complex underground labyrinth through the use of sand-solidification technology. Within this methodology, crisis is seen as a mechanism allowing the emergence of new and fascinating, ultimately sustainable future cultures and cities.

Keywords: adaptive built environments, crisis as opportunity, emergent urbanities, forces for inventions

Procedia PDF Downloads 413
399 Upper Jurassic Foraminiferal Assemblages and Palaeoceanographical Changes in the Central Part of the East European Platform

Authors: Clementine Colpaert, Boris L. Nikitenko

Abstract:

The Upper Jurassic foraminiferal assemblages of the East European Platform were investigated extensively through the 20th century, chiefly for biostratigraphical and, to a lesser degree, palaeoecological and palaeobiogeographical purposes. During the Late Jurassic, the platform was a shallow epicontinental sea that extended from the Tethys to the Arctic through the Pechora Sea and further toward the northeast into the West Siberian Sea. Foraminiferal assemblages of the Russian Sea were strongly affected by sea-level changes and were controlled by alternating Boreal to Peritethyan influences. The central part of the East European Platform displays very rich and diverse foraminiferal assemblages. Two sections have been analyzed: the Makar'yev Section in the Moscow Depression and the Gorodishi Section in the Ul'yanovsk Depression. Based on the evolution of foraminiferal assemblages, palaeoenvironments have been reconstructed and sea-level changes refined. The aim of this study is to understand palaeoceanographical changes throughout the Oxfordian – Kimmeridgian of the central part of the Russian Sea. The Oxfordian was characterized by a general transgressive event interrupted by small regressive phases. The platform was connected toward the south with the Tethys and Peritethys. During the Middle Oxfordian, the opening of a pathway of warmer water from the North Tethys region to the Boreal Realm favoured the migration of planktonic foraminifera and the appearance of new benthic taxa; it is associated with increased temperature and primary production. During the Late Oxfordian, colder-water inputs associated with the microbenthic community crisis may be a response to the closure of this warm-water corridor and the disappearance of planktonic foraminifera. 
The microbenthic community crisis is probably due to the increased sedimentation rate during the transition from the maximum flooding surface to a second-order regressive event, which increased productivity and inputs of organic matter along with a sharp decrease of oxygen in the sediment. It was followed during the Early Kimmeridgian by a replacement of foraminiferal assemblages. Almost the entire Kimmeridgian is characterized by the abundance of many taxa in common with the Boreal and Subboreal Realms. Connections toward the south became dominant again after a small regressive event recorded during the Late Kimmeridgian, associated with the abundance of many taxa in common with the Subboreal Realm and Peritethys, such as Crimean and Caucasus taxa. Foraminiferal assemblages of the East European Platform are strongly affected by palaeoecological changes and may provide a very good model for biofacies typification under Boreal and Subboreal environments. The East European Platform appears to be a key area for understanding large-scale Upper Jurassic palaeoceanographical changes, being connected with both Boreal and Peritethyan basins.

Keywords: foraminifera, palaeoceanography, palaeoecology, upper jurassic

Procedia PDF Downloads 223
398 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes

Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil

Abstract:

The Raman Tweezers technique has become prevalent in single-cell studies. It combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman Tweezers thus enables researchers to analyze single cells and explore different applications, including studying blood cells, monitoring blood-related disorders, silver nanoparticle-induced stress, etc. Interest in the toxic effects of nanoparticles has increased alongside their expanding applications, and the interaction of these nanoparticles with cells may vary with their size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil-immersion objective with a high numerical aperture (NA 1.3) is used to focus the laser beam into a sample cell. The back-scattered light is collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. Since the stock concentration of silver nanoparticles is 0.02 mg/ml, 0.03 mg of nanoparticles is present in the 0.1 ml obtained. 25 µl of RBCs was diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO2 incubator. 
Raman spectroscopic measurements were made after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time, and 2 accumulations. Major changes were observed in the peaks at 565 cm-1, 1211 cm-1, 1224 cm-1, 1371 cm-1, and 1638 cm-1. A decrease in the intensity of 565 cm-1, an increase in 1211 cm-1 with a reduction in 1224 cm-1, an increase in the intensity of 1371 cm-1, and a disappearing peak at 1638 cm-1 indicate deoxygenation of hemoglobin. Larger nanoparticles showed the greatest spectral changes, while the smallest changes were observed in the spectra of erythrocytes treated with 10 nm nanoparticles.

Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles

Procedia PDF Downloads 272
397 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Humans are thus exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. In contrast, estimates differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the GLEs that have occurred since 1942 show that for the worst of them, the dose level is on the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate is on the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing the extensive-air-shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the GCR is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD can determine the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients. 
The objective of this paper is to analyze the GCR and SCR impacts on the ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on the ambient dose equivalent level and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris – New York flight is around 1.6 mSv, which is consistent with previous work. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
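The last step of the dose chain, folding a secondary-particle fluence spectrum with fluence-to-ambient-dose-equivalent coefficients, can be sketched as a simple energy-grid quadrature. All numbers below (energy grid, fluence rates, coefficients) are illustrative placeholders, not ATMORAD output or ICRP coefficient values.

```python
import numpy as np

# Hypothetical secondary-neutron spectrum at flight altitude and placeholder
# fluence-to-ambient-dose-equivalent coefficients (illustrative numbers only).
E = np.array([1.0, 10.0, 100.0, 1000.0])           # energy grid, MeV
phi = np.array([2.0e-2, 1.0e-2, 3.0e-3, 4.0e-4])   # spectral fluence rate, cm^-2 s^-1 MeV^-1
h = np.array([40.0, 400.0, 500.0, 300.0])          # conversion coefficients, pSv cm^2

# Trapezoidal quadrature of phi(E) * h(E) over energy gives the H*(10) rate.
integrand = phi * h
dose_rate_pSv_s = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
dose_uSv_hr = dose_rate_pSv_s * 3600.0 * 1e-6      # pSv/s -> uSv/hr
print(f"ambient dose equivalent rate ~ {dose_uSv_hr:.2f} uSv/hr")
```

With these placeholder values the rate comes out at a few µSv/hr, the right order of magnitude for quiet-time cruise altitude; GLE conditions would scale the spectrum up by orders of magnitude.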

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 191
396 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care. A better understanding of increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained; Approximate Maximum Likelihood Estimation (AMLE) can then be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. 
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, both the existing tests and our proposed test imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant, whereas if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model depending on whether measurement error is considered. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account yield statistically more reliable and precise information.
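For readers unfamiliar with score tests for zero inflation, the sketch below implements the classical error-free version (the van den Broek 1995 statistic) on simulated data; the AMLE-based test with measurement error described above requires the full approximate likelihood and is not reproduced here.

```python
import numpy as np

def zip_score_test(y):
    """van den Broek (1995) score test for zero inflation vs. a Poisson model.
    Returns a chi-square(1) statistic; large values favour the ZIP model."""
    y = np.asarray(y)
    n = y.size
    lam = y.mean()                      # Poisson MLE of the mean
    p0 = np.exp(-lam)                   # Poisson probability of a zero
    n0 = np.sum(y == 0)                 # observed number of zeros
    num = (n0 / p0 - n) ** 2
    den = n * (1.0 - p0) / p0 - n * lam
    return num / den

rng = np.random.default_rng(0)
pure = rng.poisson(1.33, 500)                         # no zero inflation
extra = rng.random(500) < 0.3                         # 30% structural zeros
inflated = np.where(extra, 0, rng.poisson(1.33, 500))

stat_pure = zip_score_test(pure)
stat_inflated = zip_score_test(inflated)
print(f"pure Poisson: {stat_pure:.2f}, zero-inflated: {stat_inflated:.2f}")
```

The zero-inflated sample produces a statistic far in the upper tail of the chi-square(1) reference distribution, mirroring the strong rejection of H0 reported in the abstract.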

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 264
395 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program

Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun

Abstract:

A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. In order to prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of major accident variables on an accident. This study aims to analyze the contribution of individual accident variables to accident results, based on the accurate reconstruction of traffic accidents using the Multi-Body (MB) system of PC-Crash, an accident reconstruction program, and simulation of each scenario. The MB system of PC-Crash is used for multi-body accident reconstruction, showing motions in diverse directions that could not be approached previously; it designs and reproduces a body form exhibiting realistic motions using several linked bodies. Targeting the freight-truck cargo-drop accident around the Changwon Tunnel that happened in November 2017, this study conducted a simulation of the accident and analyzed the contribution of the individual major accident variables. On the basis of the driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h) right before the accident, carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with anchoring equipment that could prevent a drop of cargo. The vehicle speed, cargo load, and cargo anchoring equipment were major accident variables, and the accident contribution analysis results of the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%. 
When the freight truck obeyed only the cargo load limit, the scattering distance of the oil containers decreased by 5%, and the number of dropped oil containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance of the oil containers fell by 38%, and the number of dropped oil containers fell by 64%. The analysis of each scenario revealed that the overspeed and excessive cargo load of the freight truck contributed to the dispersion of accident damage; in the case of a truck equipped to prevent a fall of cargo, a different type of accident occurred when it was driven too fast with an excessive cargo load; and when the freight truck obeyed both the speed limit and the cargo load limit, the possibility of causing an accident was lowest.

Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system

Procedia PDF Downloads 181
394 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have been targeted towards protein-coding regions alone. Therefore, there are challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions, whereas alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient, sequence-alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was assessed in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. 
The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over the first three iterations. The cost (the difference between the predicted and actual outcomes) also decreased, from 1.446 to 0.842 and then to 0.718 over the first, second, and third iterations. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC area of 0.97, indicating an improved predictive ability. The PNRI identified both protein-coding and non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identified protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
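The core estimation loop described above, a six-feature sigmoid model fitted by gradient ascent on the log-likelihood and then thresholded, can be sketched on synthetic data as follows. The feature matrix, the "true" weights, and the fixed 0.5 threshold are illustrative simplifications (PNRI uses its own transcriptome features and dynamic thresholding).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=400):
    """Gradient ascent on the logistic log-likelihood (no intercept, for brevity)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)   # gradient of the mean log-likelihood
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))              # six features per transcript region
true_w = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])    # hypothetical weights
y = (rng.random(500) < sigmoid(X @ true_w)).astype(float)  # 1 = protein-coding

w = fit_logistic(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)  # fixed 0.5 threshold in this sketch
print(f"training accuracy: {acc:.2f}")
```

Replacing the fixed 0.5 cut with a threshold tuned per dataset is what the abstract calls dynamic thresholding.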

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 43
393 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. But the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome these disadvantages. For the development of such robust hybrid dryers, optimization of the process parameters becomes extremely critical to ensure the quality and shelf life of the paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model. Furthermore, optimization of the process parameters (power level, air velocity, and moisture content) was done using response surface methodology in Design-Expert software. The 3D FEM for predicting moisture migration in a single kernel at every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results. 
Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (w.b.) moisture content, with an optimum temperature, milling yield, and drying time of 42 °C, 62%, and 86 min, respectively, having a desirability of 0.905. The above optimized conditions can be used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of product. PV-integrated hybrid solar dryers can thus be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
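As a minimal illustration of the moisture-migration modelling, the sketch below solves 1-D Fickian diffusion across a kernel with an explicit finite-difference scheme. The diffusivity, geometry, and moisture values are assumed round numbers, and the actual study uses a full 3-D FEM in COMSOL; this is only the simplest version of the same physics.

```python
import numpy as np

# Assumed round-number properties for a paddy kernel (illustrative only).
D = 1.0e-10                    # effective moisture diffusivity, m^2/s
L = 1.0e-3                     # half-thickness of the kernel, m
N = 51                         # grid points across the half-thickness
dx = L / (N - 1)
dt = 0.4 * dx**2 / D           # explicit scheme stable for r = D*dt/dx^2 < 0.5

M = np.full(N, 0.25)           # initial moisture content (dry basis)
M_eq = 0.13                    # equilibrium moisture content at the drying air

t, t_end = 0.0, 3600.0         # simulate one hour of drying
while t < t_end:
    M[-1] = M_eq                                   # surface in equilibrium
    M[0] = M[1]                                    # symmetry at the centre
    M[1:-1] += D * dt / dx**2 * (M[2:] - 2.0 * M[1:-1] + M[:-2])
    t += dt

print(f"centre moisture after 1 h: {M[0]:.4f} (d.b.)")
```

The resulting profile, wet at the centre and near equilibrium at the surface, is exactly the gradient one must track to avoid over-drying or under-drying.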

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 126
392 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its earliest times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics and at which the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an almost equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. 
This research series aims at addressing the singularity issue by introducing a state of energy called the "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper in the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism that led to the emergence of the neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 70
391 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool

Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad

Abstract:

In present-day aero engines, fan blades, turboprop propellers, and gas or steam turbine low-pressure blades are getting bigger and lighter and thus become more flexible. Therefore, flutter, forced blade response, and vibration-related failure of such high-aspect-ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most numerical methods used for aeroelastic analysis are field-based methods such as the finite difference method, finite element method, finite volume method, or a coupling of these. These numerical schemes solve the coupled fluid-structure equations based on the full Navier-Stokes (NS) equations together with the equations of structural mechanics. Such schemes provide very accurate results if modeled properly; however, they are computationally very expensive and require large computing resources along with considerable expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase, where a reduced order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand. Similar ROAMs are being used by other researchers for aeroelastic and forced response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as in flexible wings. A hybrid flow solver is developed, based on a viscous-inviscid coupling of a 3D panel method (PM) and a 3D discrete vortex particle method (DVM), with viscous parameters estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD. 
In the second phase of the research work, the flow solver (PM) will be coupled with a reduced-order, nonlinear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform complete aeroelastic simulations of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for the fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, the experimental data are obtained from in-house laboratory experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers, and fan blades will provide researchers and engineers with a fast, cost-effective, and efficient tool for aeroelastic (classical flutter) analysis at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
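A partitioned FSI coupling alternates between the flow and structural solvers until the interface state converges. The following is a minimal sketch of such a fixed-point coupling loop with under-relaxation, using toy linear stand-ins for the panel-method and beam-element solvers; the function names, constants, and relaxation factor are illustrative and are not taken from the ROAM tool:

```python
def fluid_load(displacement, q=5.0):
    # Toy surrogate for the flow solver (PM/DVM): aerodynamic load
    # proportional to the interface displacement.
    return q * displacement

def structure_update(load, k=20.0):
    # Toy surrogate for the BEM structural solver: static spring response.
    return load / k

def partitioned_fsi(u0=1.0, omega=0.5, tol=1e-10, max_iter=100):
    """Fixed-point (Gauss-Seidel) coupling iteration with under-relaxation,
    the usual scheme in partitioned FSI."""
    u = u0
    for i in range(max_iter):
        f = fluid_load(u)                 # fluid solve on current interface state
        u_new = structure_update(f)       # structural solve on resulting loads
        if abs(u_new - u) < tol:          # interface residual check
            return u_new, i
        u = (1 - omega) * u + omega * u_new  # under-relaxed update
    return u, max_iter

u, iters = partitioned_fsi()
```

With the toy coefficients chosen here the coupled system is stable and the iteration contracts to the equilibrium displacement; a stiffer fluid (larger q) or softer structure (smaller k) would require stronger under-relaxation, which mirrors the stability trade-offs of real partitioned schemes.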

Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduce order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling

Procedia PDF Downloads 244
390 Experimental Study of Impregnated Diamond Bit Wear During Sharpening

Authors: Rui Huang, Thomas Richard, Masood Mostofi

Abstract:

The lifetime of impregnated diamond bits and their drilling efficiency are in part governed by the bit wear conditions: not only the extent of the diamonds' wear but also their exposure, or protrusion, out of the bonding matrix. As the individual diamonds wear, the bonding matrix also wears, through two-body abrasion (direct matrix-rock contact) and three-body erosion (cuttings trapped in the space between rock and matrix). Although there is some work dedicated to the study of diamond bit wear, there is still a lack of understanding of how matrix erosion and diamond exposure relate to the bit's drilling response and drilling efficiency, and there is no literature on the process that governs bit sharpening, a procedure commonly implemented by drillers when the extent of diamond polishing yields an extremely low rate of penetration. The aim of this research is (i) to derive a correlation between the wear state of the bit and the drilling performance and (ii) to gain a better understanding of the process associated with tool sharpening. The research effort combines specific drilling experiments with precise mapping of the tool cutting face (impregnated diamond bits and segments). Bit wear is produced by drilling through a rock sample at a fixed rate of penetration for a given period of time. Before and after each wear test, the bit drilling response, and thus its efficiency, is mapped out using a tailored experimental protocol. After each drilling test, the bit or segment cutting face is scanned with an optical microscope. The test results show that, under a fixed rate of penetration, diamond exposure increases with drilling distance but at a decreasing rate, up to a threshold exposure that corresponds to the optimum drilling condition for that feed rate. The data further show that the threshold exposure scales with the rate of penetration, up to a point where exposure reaches a maximum beyond which no more matrix can be eroded under normal drilling conditions. 
The second phase of this research focuses on the wear process referred to as bit sharpening. Drillers rely on different approaches (increasing the feed rate or decreasing the flow rate) with the aim of tearing worn diamonds away from the bit matrix, wearing out some of the matrix, and thus exposing fresh, sharp diamonds and recovering a higher rate of penetration. Although it is a common procedure, there is no rigorous methodology for sharpening the bit while avoiding excessive wear or bit damage. This paper aims to gain insight into the mechanisms that accompany bit sharpening by carefully tracking diamond fracturing, matrix wear, and erosion, and relating them to the drilling parameters recorded while sharpening the tool. The results show that there exist optimal conditions (operating parameters and duration of the procedure) for sharpening that minimize overall bit wear, and that the extent of bit sharpening can be monitored in real time.
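The observation that exposure grows with drilling distance at a decreasing rate toward a threshold is the signature of a saturating process. A sketch of one such model, with hypothetical parameter values that are not fitted to the authors' data:

```python
import numpy as np

def diamond_exposure(distance, e_max=120.0, d0=15.0):
    """Illustrative saturating exposure model: exposure approaches a
    threshold e_max (hypothetical, in micrometres) with a characteristic
    drilling distance d0 (hypothetical, in metres)."""
    return e_max * (1.0 - np.exp(-distance / d0))

d = np.linspace(0.0, 100.0, 50)      # drilling distance samples
e = diamond_exposure(d)              # exposure at each distance
increments = np.diff(e)              # per-step exposure gain
```

The increments are positive but shrink monotonically, reproducing qualitatively the reported "increasing at a decreasing rate" behaviour; the threshold e_max would then scale with the imposed rate of penetration up to the matrix-erosion limit described in the abstract.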

Keywords: bit sharpening, diamond exposure, drilling response, impregnated diamond bit, matrix erosion, wear rate

Procedia PDF Downloads 78
389 Effect of Human Use, Season and Habitat on Ungulate Densities in Kanha Tiger Reserve

Authors: Neha Awasthi, Ujjwal Kumar

Abstract:

The density of large carnivores is primarily dictated by the density of their prey. Optimal management of ungulate populations therefore permits protected areas to harbour viable large carnivore populations. Ungulate density is likely to respond to regimes of protection and vegetation types, which has generated the need among conservation practitioners to obtain stratum-specific seasonal species densities for habitat management. Kanha Tiger Reserve (KTR), with an area of 2074 km2, comprises two distinct management strata: the core (940 km2), devoid of human settlements, and the buffer (1134 km2), which is a multiple-use area. Four habitat strata are present in the reserve: grassland, sal forest, bamboo-mixed forest, and miscellaneous forest. A stratified sampling approach was used to assess (a) the impact of human use and (b) the effect of habitat and season on ungulate densities. From 2013 to 2016, ungulates were surveyed in the winter and summer of each year, with an effort of 1200 km walked along 200 spatial transects distributed throughout Kanha Tiger Reserve. We used a single detection function for each species within each habitat stratum and season to estimate species-specific seasonal densities, using the program DISTANCE. Our key results show that the core area had 4.8 times higher wild ungulate biomass than the buffer zone, highlighting the importance of undisturbed areas. Chital was the most abundant species, with a density of 30.1 (SE 4.34)/km2, contributing 33% of the biomass, and showed a habitat preference for grassland. Unlike the other ungulates, the gaur, being a mega-herbivore, showed a major seasonal shift in density from bamboo-mixed and sal forest in summer to miscellaneous forest in winter. Maximum diversity and ungulate biomass were supported by grassland, followed by bamboo-mixed habitat. Our study stresses the importance of inviolate core areas for achieving high wild ungulate densities and for maintaining populations of endangered and rare species. 
Grassland accounts for 9% of the core area of KTR and is maintained in an arrested stage of succession; enhancing this habitat would maintain ungulate diversity and density and cater to the needs of the only surviving population of the endangered barasingha and the grassland specialist, the blackbuck. We show the relevance of different habitat types for differential seasonal use by ungulates and attempt to interpret this in the context of the nutrition and cover needs of wild ungulates. Management for an optimal habitat mosaic that maintains ungulate diversity and maximizes ungulate biomass is recommended.
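The density estimation step follows conventional line-transect distance sampling, as implemented in program DISTANCE. A minimal sketch of the estimator for a half-normal detection function, using hypothetical numbers rather than the study's fitted values:

```python
import numpy as np

def halfnormal_esw(sigma):
    # Effective strip half-width for a half-normal detection function
    # g(x) = exp(-x^2 / (2 sigma^2)); the integral of g from 0 to infinity.
    return np.sqrt(np.pi / 2.0) * sigma

def line_transect_density(n_detections, total_length_km, sigma_km):
    # Conventional line-transect estimator: D = n / (2 * L * ESW),
    # where L is total transect length and ESW the effective strip half-width.
    esw = halfnormal_esw(sigma_km)
    return n_detections / (2.0 * total_length_km * esw)

# Hypothetical inputs (not from the study): 300 detections over 1200 km of
# transects, with a fitted half-normal scale of 0.05 km.
density = line_transect_density(300, 1200.0, 0.05)  # animals per km^2
```

In practice DISTANCE fits sigma from the perpendicular detection distances per stratum and season, which is why the authors used a separate detection function for each species-habitat-season combination.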

Keywords: distance sampling, habitat management, ungulate biomass, diversity

Procedia PDF Downloads 283
388 Intersection of Racial and Gender Microaggressions: Social Support as a Coping Strategy among Indigenous LGBTQ People in Taiwan

Authors: Ciwang Teyra, A. H. Y. Lai

Abstract:

Introduction: Indigenous LGBTQ individuals face significant life stress, such as racial and gender discrimination and microaggressions, which may negatively affect their mental health. Although studies of Taiwanese indigenous LGBTQ people are gradually increasing, most are primarily conceptual or qualitative in nature. This research aims to fill the gap by offering empirical quantitative evidence, investigating the impact of racial and gender microaggressions on mental health among Taiwanese indigenous LGBTQ individuals from an intersectional perspective, and examining whether social support can help them cope with microaggressions. Methods: Participants were indigenous LGBTQ individuals (n=200; mean age=29.51; female=31%, male=61%, others=8%). A cross-sectional quantitative design was implemented using data collected in 2020. Standardised measurements were used, including the Racial Microaggression Scale (10 items), the Gender Microaggression Scale (9 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and perceived economic hardship. Structural equation modelling (SEM) was employed using Mplus 8.0, with the latent variables of depression and anxiety as outcomes. A main-effect SEM model was first established (Model 1). To test the moderation effects of perceived social support, an interaction-effect model (Model 2) was created by entering interaction terms into Model 1. Numerical integration with maximum likelihood estimation was used to estimate the interaction model. Results: Model fit statistics of Model 1: X2(df)=1308.1 (795), p<.05; CFI/TLI=0.92/0.91; RMSEA=0.06; SRMR=0.06. The AIC and BIC values of Model 2 changed only slightly compared to Model 1 (AIC=15631 (Model 1) vs. 15629 (Model 2); BIC=16098 (Model 1) vs. 16103 (Model 2)). Model 2 was adopted as the final model. 
In the main-effect Model 1, racial microaggression and perceived social support were associated with depression and anxiety, but sexual orientation microaggression was not (indigenous microaggression: b=0.27 for depression, b=0.38 for anxiety; social support: b=-0.37 for depression, b=-0.34 for anxiety). Thus, an interaction term between social support and indigenous microaggression was added in Model 2. In the final Model 2, indigenous microaggression and perceived social support continued to be statistically significant predictors of both depression and anxiety. Social support moderated the effect of indigenous microaggression on depression (b=-0.22), but not on anxiety. None of the covariates was statistically significant. Implications: The results indicate that racial microaggressions have a significant impact on indigenous LGBTQ people's mental health, and that social support plays a crucial role in buffering the negative impact of racial microaggression. To promote indigenous LGBTQ people's wellbeing, it is important to consider how to support them in developing social support networks.
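The moderation test in Model 2 amounts to adding a product term between the predictor and the moderator. A simplified sketch of that idea using ordinary least squares on simulated data; the study itself used latent-variable SEM in Mplus, and the coefficients below merely echo the reported effect sizes for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
microaggression = rng.normal(size=n)   # simulated predictor (standardised)
support = rng.normal(size=n)           # simulated moderator (standardised)

# Simulated depression scores with a buffering interaction; the coefficients
# mimic the reported b-values but the data are entirely synthetic.
depression = (0.27 * microaggression - 0.37 * support
              - 0.22 * microaggression * support
              + rng.normal(scale=0.1, size=n))

# Design matrix: intercept, main effects, and the interaction term whose
# coefficient carries the moderation effect.
X = np.column_stack([np.ones(n), microaggression, support,
                     microaggression * support])
beta, *_ = np.linalg.lstsq(X, depression, rcond=None)
```

A negative coefficient on the product term (beta[3]) indicates that higher social support weakens the microaggression-depression association, which is the buffering pattern reported in the abstract.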

Keywords: microaggressions, intersectionality, indigenous population, mental health, social support

Procedia PDF Downloads 124
387 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective

Authors: Junqi Zou

Abstract:

As one of the world’s most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, and to make the economy more productive. The key evolutions of digital technology transformation in healthcare, and the increasing deployment of the Internet of Things (IoT), Big Data, AI/cognitive computing, Robotic Process Automation (RPA), Electronic Health Record systems (EHR), Electronic Medical Record systems (EMR), and Warehouse Management Systems (WMS) over the most recent decade, have significantly stepped up the move towards an information-driven healthcare ecosystem. These advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore’s healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducts an exploratory case study of a newly launched Singapore public hospital, which has been recognized as among the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations’ management accounting from the four perspectives of the Balanced Scorecard approach: 1) the financial perspective, 2) the customer (patient) perspective, 3) the internal processes perspective, and 4) the learning and growth perspective. 
Based on a thorough review of archival records from the government and the public, and on interviews with the hospital’s CIO, this study finds improvements from all four perspectives of the Balanced Scorecard framework, as follows. 1) Learning and growth perspective: the government (Ministry of Health) works with the hospital to open up multiple training pathways that upgrade and develop new IT skills among the healthcare workforce to support the transformation of healthcare services. 2) Internal process perspective: the hospital achieved digital transformation through Project OneCare, integrating clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) to enable the seamless flow of data, and implementing a JIT system to help the hospital operate more effectively and efficiently. 3) Customer perspective: the fully integrated EMR suite enhances the patient experience by achieving the 5 Rights (right patient, right data, right device, right entry, and right time). 4) Financial perspective: cost savings are achieved through improved inventory management and effective supply chain management, while the use of process automation also reduces manpower and logistics costs. In summary, these improvements confirm the success of integrating advanced ICT to enhance a healthcare organization’s customer service, productivity, and cost savings. Moreover, the Big Data generated by the integrated EMR system can be particularly useful in aiding the management control system to optimize decision making and strategic planning. In conclusion, the new digital technology transformation has raised the usefulness of management accounting, in both its financial and non-financial dimensions, to new heights in the area of healthcare management.

Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system

Procedia PDF Downloads 132
386 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes, so there is no signal in areas without any intensity change. This makes it more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed. On the other hand, the data are difficult to handle because the format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp, with no intensity information such as RGB values. Existing algorithms therefore cannot be used straightforwardly, and a new processing algorithm must be designed to cope with DVS data. To overcome these data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even with frame data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as a substitute for intensity, polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning. 
Concretely, we first build frame data divided by a certain time period and then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expect this data representation to capture the features of moving objects in particular, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset with a DVS fixed on a parked car, to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in non-dynamic situations. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
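The timestamp-based frame construction described above can be sketched as follows: events inside a time window are accumulated into a frame whose pixel values encode recency, so newer events are brighter. This is a minimal illustration in the spirit of the proposal; the actual decay scheme used by the authors may differ:

```python
import numpy as np

def events_to_time_surface(events, height, width, window):
    """Convert asynchronous DVS events (x, y, polarity, timestamp) into a
    frame where pixel intensity encodes the recency of the last event at
    that pixel: the newest events approach 1.0, the oldest approach 0.0.
    Polarity is ignored in this simple recency map."""
    frame = np.zeros((height, width), dtype=np.float32)
    t_start, t_end = window
    span = t_end - t_start
    for x, y, _polarity, t in events:
        if t_start <= t <= t_end:
            # Linear decay with event age within the window.
            frame[y, x] = 1.0 - (t_end - t) / span
    return frame

# Three hypothetical events (x, y, polarity, timestamp in seconds) within a
# 10 ms accumulation window; the later event at (2, 3) overwrites the earlier.
events = [(2, 3, +1, 0.002), (2, 3, -1, 0.009), (5, 1, +1, 0.005)]
frame = events_to_time_surface(events, height=8, width=8, window=(0.0, 0.010))
```

Because newer events dominate older ones at the same pixel, moving objects leave a gradient of intensities along their direction of travel, which is the motion cue the abstract argues polarity-only frames lack.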

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 73
385 The Legal and Regulatory Gaps of Blockchain-Enabled Energy Prosumerism

Authors: Karisma Karisma, Pardis Moslemzadeh Tehrani

Abstract:

This study aims to conduct a high-level strategic dialogue on the lack of consensus, consistency, and legal certainty regarding blockchain-based energy prosumerism so that appropriate institutional and governance structures can be put in place to address the inadequacies and gaps in the legal and regulatory framework. The drive to achieve national and global decarbonization targets is a driving force behind climate goals and policies under the Paris Agreement. In recent years, efforts to ‘demonopolize’ and ‘decentralize’ energy generation and distribution have driven the energy transition toward decentralized systems, invoking concepts such as ownership, sovereignty, and autonomy of RE sources. The emergence of individual and collective forms of prosumerism and the rapid diffusion of blockchain is expected to play a critical role in the decarbonization and democratization of energy systems. However, there is a ‘regulatory void’ relating to individual and collective forms of prosumerism that could prevent the rapid deployment of blockchain systems and potentially stagnate the operationalization of blockchain-enabled energy sharing and trading activities. The application of broad and facile regulatory fixes may be insufficient to address the major regulatory gaps. First, to the authors’ best knowledge, the concepts and elements circumjacent to individual and collective forms of prosumerism have not been adequately described in the legal frameworks of many countries. Second, there is a lack of legal certainty regarding the creation and adaptation of business models in a highly regulated and centralized energy system, which inhibits the emergence of prosumer-driven niche markets. There are also current and prospective challenges relating to the legal status of blockchain-based platforms for facilitating energy transactions, anticipated with the diffusion of blockchain technology. 
With the rise of prosumerism in the energy sector, the areas of (a) network charges, (b) energy market access, (c) incentive schemes, (d) taxes and levies, and (e) licensing requirements are still uncharted territories in many countries. The uncertainties emanating from this area pose a significant hurdle to the widespread adoption of blockchain technology, a complementary technology that offers added value and competitive advantages for energy systems. The authors undertake a conceptual and theoretical investigation to elucidate the lack of consensus, consistency, and legal certainty in the study of blockchain-based prosumerism. In addition, the authors set an exploratory tone to the discussion by taking an analytically eclectic approach that builds on multiple sources and theories to delve deeper into this topic. As an interdisciplinary study, this research accounts for the convergence of regulation, technology, and the energy sector. The study primarily adopts desk research, which examines regulatory frameworks and conceptual models for crucial policies at the international level to foster an all-inclusive discussion. With their reflections and insights into the interaction of blockchain and prosumerism in the energy sector, the authors do not aim to develop definitive regulatory models or instrument designs, but to contribute to the theoretical dialogue to navigate seminal issues and explore different nuances and pathways. Given the emergence of blockchain-based energy prosumerism, identifying the challenges, gaps and fragmentation of governance regimes is key to facilitating global regulatory transitions.

Keywords: blockchain technology, energy sector, prosumer, legal and regulatory

Procedia PDF Downloads 163
384 Aerobic Training Combined with Nutritional Guidance as an Effective Strategy for Improving Aerobic Fitness and Reducing BMI in Inactive Adults

Authors: Leif Inge Tjelta, Gerd Lise Nordbotten, Cathrine Nyhus Hagum, Merete Hagen Helland

Abstract:

Overweight and obesity can lead to numerous health problems, and inactive people are more often overweight or obese than physically active people. Even a moderate weight loss can improve cardiovascular and endocrine disease risk factors. The aim of the study was to examine to what extent overweight and obese adults taking up two weekly intensive running sessions increased their aerobic capacity, reduced their BMI and waist circumference, and changed their body composition after 33 weeks of training. An additional aim was to see whether there were differences between participants who, in addition to training, also received lifestyle modification education including practical cooking (nutritional guidance and training group, NTG=32) and those who were not given any nutritional guidance (training group, TG=40). 72 participants (49 women), with a mean age of 46.1 (±10.4), were included. Inclusion criteria: previously untrained and inactive adults of all ages with BMI ≥ 25 and a desire to become fitter and reduce their BMI. The two weekly supervised training sessions consisted of a 10-minute warm-up followed by 20 to 21 minutes of effective interval running, during which the participants' heart rates were between 82 and 92% of maximal heart rate. The sessions were completed with ten minutes of whole-body strength training. Measurements of BMI, waist circumference (WC), and 3000 m running time were taken at the start of the project (T1), after 15 weeks (T2), and at the end of the project (T3). Measurements of fat percentage, muscle mass, and visceral fat were taken at T1 and T3. Twelve participants (9 women) from both groups, all of whom scored around average on the 3000 m pre-test, were chosen to take a VO₂max test at T1 and T3. The NTG were given ten theoretical sessions (80 minutes each) and eight practical cooking sessions (140 minutes each). There was a significant reduction in both groups for WC and BMI from T1 to T2, with no further reduction from T2 to T3. 
Although the difference was not significant, the NTG reduced their WC more than the TG. For both groups, the percentage reduction in WC was similar to the reduction in BMI. There was a decrease in fat percentage in both groups from pre-test to post-test, whereas for muscle mass a small but insignificant increase was observed in both groups. There was a decrease in 3000 m running time for both groups from T1 to T2 as well as from T2 to T3, although the difference between T2 and T3 was not statistically significant. The 12 participants who took the VO₂max test increased their VO₂max by 2.86 (±3.84) ml·kg⁻¹·min⁻¹ and reduced their 3000 m running time by 3:02 min (±2:01 min) from T1 to T3, with a strong negative correlation between the two variables. The study shows that two intensive running sessions per week for 33 weeks can increase aerobic fitness and reduce BMI, WC, and fat percentage in inactive adults, and that nutritional guidance in addition to training gives an additional effect.
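The 82-92% heart-rate window used for the interval sessions can be computed per participant. A small sketch, assuming the common 220 minus age estimate of maximal heart rate; the study supervised intensity directly, so this formula is only an illustration:

```python
def interval_hr_zone(age, lower=0.82, upper=0.92):
    """Target heart-rate window (beats per minute) for the intensive interval
    sessions, 82-92% of maximal heart rate as in the study. HRmax here is
    estimated with the common 220 - age rule, an assumption not stated by
    the authors."""
    hr_max = 220 - age
    return round(hr_max * lower), round(hr_max * upper)

# The mean participant age in the study was about 46 years.
low, high = interval_hr_zone(46)
```

For a 46-year-old this gives a window of roughly 143-160 bpm, which a coach or heart-rate monitor alarm could use to keep the 20-21 minute interval block in the prescribed intensity band.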

Keywords: interval training, nutritional guidance, fitness, BMI

Procedia PDF Downloads 126
383 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized, the nucleation site of the 2SSF is discussed, and a model for the 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate in a home-built hot-wall CVD reactor. Defect-selected etching (DSE) is conducted with melted KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage, and low-temperature photoluminescence (LTPL) at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed, and two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are measured to verify the IGSF's type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV, and 2.410 eV, respectively, and the exciton gap Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx- lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF, in which the steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height. 
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, at one intersection, the upper two bilayers of the four-bilayer step grow down and block the lower two, generating an IGSF. Second, the step-flow growth successively proceeds over the IGSF, forming an AC/ABCABC/BA/BC stacking sequence; a 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD, and a model for 2SSF nucleation at the intersection of off-cut- and TSD-introduced steps is proposed.
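Since each phonon replica sits one phonon energy below the exciton gap, the phonon energies implied by the reported LTPL peak positions can be recovered directly from the numbers in the abstract (a quick consistency check, not additional data):

```python
# Each phonon replica appears at Egx minus one phonon energy, so the phonon
# energies follow by subtraction. Values are taken from the abstract.
e_gx = 2.512                              # eV, estimated exciton gap of the 2SSF
replicas = [2.468, 2.438, 2.420, 2.410]   # eV, measured replica positions

phonon_energies_mev = [round((e_gx - e) * 1000) for e in replicas]
```

The subtraction yields phonon energies of roughly 44, 74, 92, and 102 meV, in the range expected for acoustic and optical phonons in SiC, which supports the Egx estimate used to identify the fault as a 2SSF.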

Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 291
382 Performance Analysis of Double Gate FinFET at Sub-10 nm Node

Authors: Suruchi Saini, Hitender Kumar Tyagi

Abstract:

With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. When scaling a device, several short-channel effects occur, and to minimize these scaling limitations, several device architectures have been developed in the semiconductor industry. The FinFET is one of the most promising structures, and the double-gate 2D Fin field effect transistor in particular has the benefit of suppressing short-channel effects (SCEs) and functioning well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for the device simulation. The transfer and output characteristics of the double-gate 2D Fin field effect transistor are determined at the 10 nm technology node, and the performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, the device performance is analyzed for different structure parameters. The Id-Vg curve is a robust technique of significant importance for transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance, and through this analysis the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. 
For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor, so it is crucial to minimize it in order to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized with a channel width of 3 nm at a gate length of 10 nm, but there is no significant effect of source and drain length on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. In this research, it is also concluded that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and of 2380 S/m for a source and drain extension length of 5 nm.
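The transconductance and on-off figures quoted above are extracted from the transfer characteristic: gm is the slope dId/dVg, and the on-off ratio compares the on-state and off-state currents. A sketch of the extraction on a synthetic square-law Id-Vg sweep; the curve, threshold voltage, and current scale are illustrative, not MuGFET output:

```python
import numpy as np

# Synthetic transfer characteristic: a simple square-law model above an
# assumed threshold voltage of 0.3 V (illustrative values only).
vg = np.linspace(0.0, 1.0, 101)                      # gate voltage sweep, V
vth = 0.3
i_d = np.where(vg > vth, 1e-3 * (vg - vth) ** 2, 0.0)  # drain current, A

# Transconductance gm = dId/dVg, evaluated numerically.
gm = np.gradient(i_d, vg)

# Crude on/off indicator: on-state current at Vg = 1 V over the current
# just above Vg = 0 (floored to avoid division by zero in the ideal model).
ion_ioff = i_d[-1] / max(float(i_d[1]), 1e-15)
```

Peak gm occurs at the top of the sweep for this square-law curve; on real simulated data the same np.gradient call applied to the exported Id-Vg table gives the gm-Vg characteristic from which the maximum transconductance is read off.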

Keywords: current on-off ratio, FinFET, short-channel effects, transconductance

Procedia PDF Downloads 45