Search results for: radiative transfer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2875

235 Urban and Building Information Modeling’s Applications for Environmental Education: Case Study of Educational Campuses

Authors: Samar Alarif

Abstract:

Smart sustainable educational campuses are the latest paradigm of innovation in the education domain. Campuses have become hubs for sustainable environmental innovation. Universities have a vital role in paving the road for digital transformation in the infrastructure domain by preparing skilled engineers and specialists. An open digital platform enables smart campuses to simulate the real education experience by managing their infrastructure within the curriculums. Moreover, it allows engagement between governments, businesses, and citizens to push for innovation and sustainable services. Urban and building information modeling (UIM/BIM) platforms have recently attained widespread attention in smart campuses due to their applications and benefits for creating the campus's digital twin in the form of an open digital platform. Qualitative and quantitative strategies were used in directing this research to develop and validate the UIM/BIM platform's benefits for smart campus facility management and its impact on the institution's sustainable vision. The research findings are based on literature reviews and a case study of the TU Berlin El Gouna campus. Textual data were collected using semi-structured interviews with actors, along with secondary data such as BIM course student projects, documents, and publications related to the campus actors. The study results indicate that UIM/BIM has several benefits for the smart campus. Universities can achieve better capacity-building by integrating all the actors in the UIM/BIM process. Universities would achieve their community outreach vision by launching an online outreach UIM/BIM course for the academic and professional community. The UIM/BIM training courses would integrate students from different disciplines and alumni, as well as engineers, planners, and technicians. Open platforms enable universities to build partnerships with industry; companies should be involved in the development of BIM technology courses.
The collaboration between academia and industry would bridge the gap, align academic courses with professional requirements, and transfer academic innovations to industry. In addition, collaboration between academia, industry, government, vocational and training centers, and civil society should be promoted through co-creation workshops, series of seminars, and conferences. These co-creation activities target capacity building and help shape governmental strategies and policies that support the expansion of sustainable innovations and establish agreement on the expected role of all stakeholders in supporting the transformation.

Keywords: smart city, smart educational campus, UIM, urban platforms, sustainable campus

Procedia PDF Downloads 123
234 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security. With the increase in terrorist attacks, not just in India, the security of nuclear materials remains a priority. India has therefore made continued efforts to tighten security of its nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal. These include indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor; Light Water Reactors; and indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts to take non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security.
India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. According to the NTI Report, ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund’; in the future, ‘India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear material security and the steps ahead for India to further strengthen it.

Keywords: India, nuclear security, nuclear materials, non-proliferation

Procedia PDF Downloads 352
233 Institutional Cooperation to Foster Economic Development: Universities and Social Enterprises

Authors: Khrystyna Pavlyk

Abstract:

In the OECD countries, the percentage of adults with higher education degrees increased by 10% between 2000 and 2010. Continuously increasing demand for higher education gives universities a chance to become key players in the socio-economic development of a territory (region or city) via knowledge creation, knowledge transfer, and knowledge spillovers. During the previous decade, universities have tried to support spin-offs and start-ups and have introduced courses on sustainability and corporate social responsibility. While much has been done, new trends are emerging in search of better approaches. Recently, a number of universities created centers that conduct research in the field of social entrepreneurship, which in turn underpins educational programs run at these universities. The list includes, but is not limited to, the Centre for Social Economy at the University of Liège, the Institute for Social Innovation at ESADE, the Skoll Centre for Social Entrepreneurship at Oxford, the Centre for Social Entrepreneurship at Roskilde, and the Social Entrepreneurship Initiative at INSEAD. Existing literature has already examined social entrepreneurship centers in terms of their position in the institutional structure, initial and additional funding, teaching initiatives, research achievements, and outreach activities. At the same time, universities can become social enterprises themselves. Previous research revealed that universities use both business and social entrepreneurship models. Universities that are mainly driven by a social mission are more likely to transform into social entrepreneurial institutions. However, there is currently no clear understanding of what social entrepreneurship in higher education is about, and thus it needs to be studied and promoted at the same time.
The main roles a socially oriented university can play in city development include: buyer (implementation of socially focused local procurement programs creates partnerships focused on local sustainable growth); seller (centers created by universities can sell socially oriented goods and services, e.g. consultancy); employer (universities can employ socially vulnerable groups); and business incubator (helping current students to start their own social enterprises). In the paper, we analyze these in more detail. We also examine a number of indicators that can be used to assess the impact, both direct and indirect, that universities can have on a city's economy. The originality of this paper lies not mainly in the methodological approaches used but in the countries evaluated. Social entrepreneurship is still treated as a relatively new phenomenon in post-transitional countries, where social services were provided only by the state for many decades. The paper provides data and examples both from developed countries (the US and EU) and from those located in the CIS and CEE regions.

Keywords: social enterprise, university, regional economic development, comparative study

Procedia PDF Downloads 254
232 Corpus Linguistics as a Tool for Translation Studies Analysis: A Bilingual Parallel Corpus of Students’ Translations

Authors: Juan-Pedro Rica-Peromingo

Abstract:

Nowadays, corpus linguistics has become a key research methodology for Translation Studies, broadening the scope of cross-linguistic studies. In the study presented here, the approach focuses on learners with little or no experience in order to study, at an early stage, general mistakes and errors and the correct or incorrect use of translation strategies, and to improve the translational competence of the students. Led by Sylviane Granger and Marie-Aude Lefer of the Centre for English Corpus Linguistics of the University of Louvain, the MUST corpus (MUltilingual Student Translation Corpus) is an international project which brings together partners from universities in Europe and worldwide and connects Learner Corpus Research (LCR) and Translation Studies (TS). It aims to build a corpus of translations carried out by students, including both direct (L2 > L1) and indirect (L1 > L2) translations, from a great variety of text types, genres, and registers in a wide variety of languages: audiovisual translation (including dubbing and subtitling for hearing and deaf audiences), scientific, humanistic, literary, economic, and legal translation texts. This paper focuses on the work carried out by the Spanish team from the Complutense University (UCMA), which is part of the MUST project, and describes the specific features of the corpus built by its members. All the texts used by UCMA are either direct or indirect translations between English and Spanish. Students’ profiles comprise translation trainees, foreign language students with a major in English, engineers studying EFL, and MA students, all of them with different English levels (from B1 to C1); for some of the students, this is their first experience with translation. The MUST corpus is searchable via Hypal4MUST, a web-based interface developed by Adam Obrusnik from Masaryk University (Czech Republic), which includes a translation-oriented annotation system (TAS).
A distinctive feature of the interface is that it aligns source texts and target texts, so that we can observe and compare both language structures in detail and study the translation strategies used by students. The initial data point out the kinds of difficulties encountered by the students and reveal the most frequent strategies implemented by the learners according to their level of English, their translation experience, and the text genres. We have also found common errors in the graduate and postgraduate university students’ translations: transfer errors, lexical errors, grammatical errors, text-specific translation errors, and culture-related errors have been identified. Analyzing all these parameters will provide more material for better solutions to improve the quality of teaching and of the translations produced by the students.
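The per-category error counts described above amount to a simple frequency tally over annotated translations. A minimal sketch of that tally, using the paper's error taxonomy as labels but entirely invented annotation data:

```python
from collections import Counter

# Toy tally over a set of error annotations extracted from student
# translations; the labels follow the categories named in the abstract,
# but the data below is illustrative, not the MUST corpus.
annotations = [
    "transfer", "lexical", "lexical", "grammatical",
    "text-specific", "cultural", "lexical", "transfer",
]
counts = Counter(annotations)
print(counts.most_common(2))
```

In practice such counts would be derived from the TAS annotation layer rather than a flat list, but the aggregation step is the same.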

Keywords: corpus studies, students’ corpus, the MUST corpus, translation studies

Procedia PDF Downloads 147
231 Developing and Shake Table Testing of Semi-Active Hydraulic Damper as Active Interaction Control Device

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

Semi-active control systems for structures under earthquake excitation are adaptable and require little energy. The DSHD (Displacement Semi-Active Hydraulic Damper) was developed by our research team. Shake table test results for this DSHD installed in a full-scale test structure demonstrated that the device brought its energy-dissipating performance into full play under earthquake excitation. The objective of this research is to develop a new AIC (Active Interaction Control device) and verify its energy-dissipation capability through shake table tests. The proposed AIC converts an improved DSHD into an AIC by adding an accumulator. The main concept of this energy-dissipating AIC is to exploit the interaction between an affiliated structure (sub-structure) and the protected structure (main structure) to transfer the input seismic force into the sub-structure and thereby reduce the structural deformation of the main structure. This concept was tested using a full-scale multi-degree-of-freedom test structure with the proposed AIC installed, subjected to external forces of various magnitudes, in order to examine the shock-absorption influence of predictive control, sub-structure stiffness, synchronous control, non-synchronous control, and insufficient control position. The test results confirm that: (1) the developed device effectively diminishes the structural displacement and acceleration response; (2) even with low-precision semi-active control, the shock absorption achieved twice the seismic-proofing efficacy of the passive control method; (3) the control method does not adversely amplify the acceleration response of the structure; (4) this AIC exhibits a time-delay problem, the same problem as the ordinary active control method.
The proposed predictive control method can overcome this defect; (5) condition switching is an important characteristic of the control type. The test results show that synchronous control is easy to implement and avoids exciting high-frequency response. These laboratory results confirm that the device developed in this research can apply the mutual interaction between the subordinate structure and the main structure to be protected so as to transfer the quake energy applied to the main structure into the subordinate structure, achieving the objective of minimizing the deformation of the main structure.

Keywords: DSHD (Displacement Semi-Active Hydraulic Damper), AIC (Active Interaction Control Device), shake table test, full scale structure test, sub-structure, main-structure

Procedia PDF Downloads 519
230 Innovative Fabric Integrated Thermal Storage Systems and Applications

Authors: Ahmed Elsayed, Andrew Shea, Nicolas Kelly, John Allison

Abstract:

In northern European climates, domestic space heating and hot water represent a significant proportion of total primary energy use, and meeting these demands from a national electricity grid supplied by renewable energy sources provides an opportunity for a significant reduction in EU CO2 emissions. However, in order to adapt to the intermittent nature of renewable energy generation and to avoid coincident peak electricity usage from consumers that may exceed current capacity, the demand for heat must be decoupled from its generation. Storage of heat within the fabric of dwellings, for use some hours or days later, provides a route to complete decoupling of demand from supply and facilitates greatly increased integration of renewable energy generation into a local or national electricity network. The integration of thermal energy storage into the building fabric for retrieval at a later time requires evaluation of many competing thermal, physical, and practical considerations, such as the profile and magnitude of heat demand, the duration of storage, charging and discharging rates, storage media, space allocation, etc. In this paper, the authors report investigations of thermal storage in building fabric using concrete and present an evaluation of several factors that impact performance, including heating pipe layout, heating fluid flow velocity, storage geometry, and thermo-physical material properties; they also present an investigation of alternative storage materials and alternative heat transfer fluids. Reducing the heating pipe spacing from 200 mm to 100 mm enhances the stored energy by 25%, and high-performance vacuum insulation results in a heat loss flux of less than 3 W/m2, compared to 22 W/m2 for the more conventional EPS insulation. Dense concrete achieved the greatest storage capacity, relative to medium- and light-weight alternatives, although a material thickness of 100 mm required more than 5 hours to charge fully.
Layers of 25 mm and 50 mm thickness can be charged in 2 hours or less, facilitating a fast response that, aggregated across multiple dwellings, could provide a significant and valuable reduction in demand for grid-generated electricity during expected periods of high demand and potentially eliminate the need for additional new generating capacity from conventional sources such as gas, coal, or nuclear.
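As a rough sanity check on storage figures of this kind, the sensible heat held in a concrete fabric layer can be estimated as Q = ρ·V·cp·ΔT. The sketch below uses typical handbook properties for dense concrete; the density, specific heat, geometry, and temperature swing are assumptions for illustration, not the paper's measured values:

```python
# Back-of-envelope sensible-heat estimate for a concrete storage layer.
# Material properties are typical handbook values for dense concrete
# (assumed), not figures reported in the paper.
def stored_energy_kwh(thickness_m, area_m2, delta_t_k,
                      density=2400.0,       # kg/m^3, dense concrete (assumed)
                      specific_heat=880.0): # J/(kg*K), dense concrete (assumed)
    """Sensible heat Q = rho * V * cp * dT, converted to kWh."""
    volume = thickness_m * area_m2
    joules = density * volume * specific_heat * delta_t_k
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# A 50 mm layer over 10 m^2 charged through a 20 K temperature swing:
print(round(stored_energy_kwh(0.05, 10.0, 20.0), 2))  # -> 5.87 kWh
```

Halving the pipe spacing or the layer thickness changes the charging dynamics, not this static capacity; the formula only bounds what a fully charged layer can hold.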

Keywords: fabric integrated thermal storage, FITS, demand side management, energy storage, load shifting, renewable energy integration

Procedia PDF Downloads 166
229 Development and Characterization of Novel Topical Formulation Containing Niacinamide

Authors: Sevdenur Onger, Ali Asram Sagiroglu

Abstract:

Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and cause melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity to reduce melanosomes has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural chemical found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and supports the skin's barrier function, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin, and the nicotinic acid present in NA can cause irritation. As a result, we have concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behavior by controlling release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability into living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed in this study.
Different formulations were prepared by varying the amounts of phospholipid and cholesterol and were examined in terms of particle size, polydispersity index (PDI), and pH. The pH values of the produced formulations were found to be compatible with the pH of the skin. Particle sizes were smaller than 250 nm, and the particles were of homogeneous size within the formulation (PDI < 0.30). Despite the important advantages of liposomal systems, their low viscosity and limited stability hamper topical use. For these reasons, liposomal cream formulations were prepared in this study for easy topical application of the liposomal systems. As a result, liposomal cream formulations containing NA were successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted in the continuation of the study, it is planned to test the formulation giving the best results on volunteers after obtaining ethics committee approval.
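The characterisation criteria above reduce to a simple acceptance screen per batch: size below 250 nm, PDI below 0.30, and pH compatible with skin. A minimal sketch, where the skin-pH window (4.5 to 6.0) and the batch data are assumptions for illustration, not the authors' exact limits or measurements:

```python
# Acceptance screen mirroring the characterisation criteria in the abstract.
# The pH window is an assumed typical skin-pH range; batch data is invented.
def passes_screen(size_nm, pdi, ph, ph_window=(4.5, 6.0)):
    return size_nm < 250 and pdi < 0.30 and ph_window[0] <= ph <= ph_window[1]

batches = [
    {"id": "F1", "size_nm": 182, "pdi": 0.21, "ph": 5.5},  # illustrative
    {"id": "F2", "size_nm": 265, "pdi": 0.18, "ph": 5.2},  # fails on size
]
accepted = [b["id"] for b in batches
            if passes_screen(b["size_nm"], b["pdi"], b["ph"])]
print(accepted)  # -> ['F1']
```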

Keywords: delivery systems, hyperpigmentation, liposome, niacinamide

Procedia PDF Downloads 112
228 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer

Authors: Aprajeeta Jha, Punyadarshini P. Tripathy

Abstract:

Drying technologies for various food processing operations share an inevitable linkage with energy, cost, and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to combat the dual challenges of meeting the high energy demand for drying and addressing the climate change scenario. But the performance and reliability of solar dryers depend heavily on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome the disadvantages of solar dryers. For the development of such robust hybrid dryers, optimization of process parameters becomes extremely critical to ensure the quality and shelf-life of paddy grains. Investigation of the moisture distribution profile within the grains is necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool to provide better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model. Furthermore, optimization of process parameters (power level, air velocity, and moisture content) was done using response surface methodology in Design-Expert software. A 3D finite element model predicting moisture migration in a single kernel for every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE), and standard error (SE) were found to be 0.003, 0.0531, and 0.0007, respectively, indicating close agreement of the model with experimental results.
Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (w.b.) moisture, with an optimum temperature, milling yield, and drying time of 42 °C, 62%, and 86 min, respectively, at a desirability of 0.905. The above optimized conditions can be used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality, and yield of product. PV-integrated hybrid solar dryers can be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
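The validation metrics quoted above (MAE and MRE) are straightforward averages over a predicted-vs-measured moisture series. A small sketch of their computation; the moisture data here is invented for illustration, not the paper's experimental series:

```python
# Validation metrics for a predicted vs. measured moisture series.
# The data below is illustrative, not the paper's experimental values.
def mae(pred, obs):
    """Mean absolute error: average of |prediction - observation|."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def mre(pred, obs):
    """Mean relative error: average of |prediction - observation| / observation."""
    return sum(abs(p - o) / o for p, o in zip(pred, obs)) / len(obs)

measured  = [0.25, 0.21, 0.18, 0.15, 0.13]      # moisture content, decimal w.b.
predicted = [0.248, 0.213, 0.178, 0.152, 0.129]  # model output at same times

print(round(mae(predicted, measured), 4))
print(round(mre(predicted, measured), 4))
```

Low values of both metrics, as reported by the authors, indicate that the FEM predictions track the drying curve closely.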

Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer

Procedia PDF Downloads 150
227 The Validation of RadCalc for Clinical Use: An Independent Monitor Unit Verification Software

Authors: Junior Akunzi

Abstract:

For patient treatment planning quality assurance in 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT or RapidArc), independent monitor unit verification calculation (MUVC) is an indispensable part of the process. For 3D-CRT treatment planning, the MUVC can be performed manually by applying the standard ESTRO formalism. However, due to the complex shapes and the number of beams in advanced treatment planning techniques such as RapidArc, manual independent MUVC is inadequate. Therefore, commercially available software such as RadCalc can be used to perform the MUVC for complex treatment plans. Indeed, RadCalc (version 6.3, LifeLine Inc.) uses a simplified Clarkson algorithm to compute the dose contribution of individual RapidArc fields to the isocenter. The purpose of this project is the validation of RadCalc in 3D-CRT and RapidArc for treatment planning dosimetry quality assurance at the Centre Antoine Lacassagne (Nice, France). Firstly, the interfaces between RadCalc and our treatment planning systems (TPSs), Isogray (version 4.2) and Eclipse (version 13.6), were checked for data transfer accuracy. Secondly, we created test plans in both Isogray and Eclipse featuring open fields, wedged fields, and irregular MLC fields. These test plans were transferred from the TPSs via the DICOM RT protocol to RadCalc and to the linac via Mosaiq (version 2.5). Measurements were performed in a water phantom using a PTW cylindrical Semiflex ionisation chamber (0.3 cm³, type 31010) and compared with the TPS and RadCalc calculations. Finally, 30 3D-CRT plans and 40 RapidArc plans created from patient CT scans were recalculated using the CT scan of a solid PMMA water-equivalent phantom for 3D-CRT and the Octavius II phantom (PTW) CT scan for RapidArc.
We then measured the doses delivered in these phantoms for each plan with a 0.3 cm³ PTW 31010 cylindrical Semiflex ionisation chamber (3D-CRT) and a 0.015 cm³ PTW PinPoint ionisation chamber (RapidArc). For our test plans, good agreement was found between calculation (RadCalc and TPSs) and measurement (mean: 1.3%; standard deviation: ± 0.8%). Regarding the patient plans, the measured doses were compared to the calculations in RadCalc and in our TPSs; RadCalc calculations were also compared to the Isogray and Eclipse ones. Agreement better than (2.8%; ± 1.2%) was found between RadCalc and the TPSs. As for the comparison between calculation and measurement, the agreement for all of our plans was better than (2.3%; ± 1.1%). The independent MU verification calculation software RadCalc has thus been validated for clinical use for both the 3D-CRT and RapidArc techniques. The perspectives of this project include the validation of RadCalc for the TomoTherapy machine installed at the Centre Antoine Lacassagne.
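The agreement figures above come down to a percent-difference comparison between calculated and measured dose, judged against an action level. A minimal sketch of that check; the dose values and the 3% tolerance are illustrative assumptions, not the centre's protocol:

```python
# Percent-difference agreement check between a calculated dose (RadCalc or
# TPS) and a chamber measurement. The 3% action level and the dose values
# are illustrative, not the centre's clinical protocol.
def percent_diff(calculated, measured):
    return 100.0 * (calculated - measured) / measured

def within_tolerance(calculated, measured, tolerance_pct=3.0):
    return abs(percent_diff(calculated, measured)) <= tolerance_pct

print(round(percent_diff(2.05, 2.00), 2))  # Gy, illustrative -> 2.5
print(within_tolerance(2.05, 2.00))        # -> True
```

In routine QA, a plan failing the tolerance would trigger investigation of the transfer chain (TPS export, RadCalc model, chamber setup) before treatment.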

Keywords: 3D conformal radiotherapy, intensity modulated radiotherapy, monitor unit calculation, dosimetry quality assurance

Procedia PDF Downloads 216
226 Efficacy of Preimplantation Genetic Screening in Women with a Spontaneous Abortion History with Euploid or Aneuploid Abortus

Authors: Jayeon Kim, Eunjung Yu, Taeki Yoon

Abstract:

Most spontaneous miscarriages are believed to be a consequence of embryonic aneuploidy. Transferring euploid embryos selected by PGS is expected to decrease the miscarriage rate. Current PGS indications include advanced maternal age, recurrent pregnancy loss, and repeated implantation failure. Recently, the use of PGS for healthy women without the above indications, for the purpose of improving in vitro fertilization (IVF) outcomes, is on the rise. However, the beneficial effect of PGS in this population remains controversial, especially in women with a history of no more than two miscarriages or miscarriage of a euploid abortus. This study aimed to investigate whether the karyotyping result of the abortus is a good indicator for preimplantation genetic screening (PGS) in a subsequent IVF cycle in women with a history of spontaneous abortion. A single-center retrospective cohort study was performed. Women who had spontaneous abortion(s) (fewer than three) with dilatation and evacuation, and subsequent IVF from January 2016 to November 2016, were included. Their medical information was extracted from the charts. Clinical pregnancy was defined as the presence of a gestational sac with fetal heartbeat detected on ultrasound in week 7. Statistical analysis was performed using SPSS software. In total, 234 women were included: 121 of 234 (51.7%) underwent karyotyping of the abortus, and 113 did not have the abortus karyotyped. Embryo biopsy was performed 3 or 5 days after oocyte retrieval, followed by embryo transfer (ET) in a fresh or frozen cycle. The biopsied materials were subjected to microarray comparative genomic hybridization. The clinical pregnancy rate per ET was compared between the PGS and non-PGS groups in each study group.
Patients were grouped by two criteria: the karyotype of the abortus from the previous miscarriage (unknown fetal karyotype (n=89, Group 1), euploid abortus (n=36, Group 2), or aneuploid abortus (n=67, Group 3)), and pursuit of PGS in the subsequent IVF cycle (PGS group, n=105; non-PGS group, n=87). The PGS group was significantly older and had a higher number of retrieved oocytes and prior miscarriages compared to the non-PGS group. There were no differences in BMI and AMH level between the two groups. In the PGS group, the mean number of transferable (euploid) embryos was 1.3 ± 0.7, 1.5 ± 0.5, and 1.4 ± 0.5, respectively (p = 0.049). In 42 cases, ET was cancelled because all biopsied embryos turned out to be abnormal. In all three groups, clinical pregnancy rates were not statistically different between the PGS and non-PGS groups (Group 1: 48.8% vs. 52.2% (p=0.858); Group 2: 70% vs. 73.1% (p=0.730); Group 3: 42.3% vs. 46.7% (p=0.640), in the PGS and non-PGS groups, respectively). In both groups that had miscarried a euploid or an aneuploid abortus, the clinical pregnancy rate did not differ between IVF cycles with and without PGS. When we compared miscarriage and ongoing pregnancy rates, there were no significant differences between the PGS and non-PGS groups in all three groups. Our results show that the routine application of PGS in women who have had fewer than three miscarriages would not be beneficial, even in cases where the previous miscarriage was caused by fetal aneuploidy.
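Comparisons of clinical pregnancy rates between two groups, as reported above, are typically made with a two-proportion test. A minimal pure-Python sketch of a two-proportion z-test with a normal-approximation p-value; the counts below are illustrative (chosen to roughly match the 48.8% vs. 52.2% rates), not the study's actual group sizes:

```python
import math

# Two-proportion z-test of the kind used to compare clinical pregnancy rates
# between PGS and non-PGS groups. Counts are illustrative, not study data.
def two_prop_z(success1, n1, success2, n2):
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_prop_z(21, 43, 24, 46)  # ~48.8% vs ~52.2% pregnancy rates
print(p > 0.05)  # difference not significant at the 5% level
```

With groups of this size, rate differences of a few percentage points are far from significance, which is consistent with the non-significant p-values the authors report.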

Keywords: preimplantation genetic diagnosis, miscarriage, karyotyping, in vitro fertilization

Procedia PDF Downloads 181
225 Development of Social Competence in the Preparation and Continuing Training of Adult Educators

Authors: Genute Gedviliene, Vidmantas Tutlys

Abstract:

The aim of this paper is to reveal the deployment and development of social competence in higher education programmes of adult education and in the continuing training and competence development of andragogues. We compare how the issues of cooperation and communication in the learning and teaching processes are treated in the study programmes and in the continuing training courses for andragogues. Theoretical and empirical research methods were combined in the analysis. The following methods were applied: 1) Literature and document analysis helped to highlight communication and cooperation as fundamental phenomena of social competence and their importance for adult education in the context of digitalization and globalization. Research studies on the development of social competence in the field of andragogy, as well as on the place and weight of social competence in the overall competence profile of the andragogue, were also analyzed. 2) The empirical study is based on a questionnaire survey. The survey population consists of 240 bachelor and master students of andragogy in Lithuania and 320 representatives of the different bodies and institutions involved in the continuing training and professional development of adult educators in Lithuania. The themes of the survey questionnaire were defined on the basis of the findings of the literature review and included the following: 1) the respondents' opinions on the role and place of social competence in the work of the andragogue; 2) the respondents' opinions on the role and place of the development of social competence in the curricula of higher education studies and continuing training courses; 3) judgements on the implications of higher education studies and continuing training courses for the development of social competence and its deployment in the work of the andragogue.
Data analysis disclosed a wide range of ways and modalities for deploying and developing social competence in the preparation and continuing training of adult educators. Social competence matters to students and adult education providers not only as an auxiliary capability for communication and the transfer of information, but also as an outcome of collective learning that leads to new capabilities which learners apply in the learning process, in their professional field of adult education, and in their social life. Equally, social competence is necessary for effective adult education not only as an auxiliary capacity applied in the teaching process, but also as a potential for improving, developing, and sustaining didactic competence and know-how in this field. Students of higher education programmes in adult education treat social competence as a generic capacity important for the work of the adult educator, whereas adult education providers discern concrete issues in applying social competence across the different processes of adult education, from curriculum design through to the assessment of learning outcomes.

Keywords: adult education, andragogues, social competence, curriculum

Procedia PDF Downloads 142
224 Operation Cycle Model of ASz62IR Radial Aircraft Engine

Authors: M. Duk, L. Grabowski, P. Magryta

Abstract:

Environmental impact has become a very important issue in air transport. There are currently no emissions standards for the turbine and piston engines used in air transport; nevertheless, the environmental effect of exhaust gases from aircraft engines should be kept as small as possible. For this purpose, R&D centers often use special software to simulate and estimate the negative effects of the engine working process. Within the cooperation between the Lublin University of Technology and the Polish aviation company WSK "PZL-KALISZ" S.A., aimed at more effective operation of the ASz62IR engine, one such tool has been used. The AVL Boost software makes it possible to perform 1D simulations of the combustion process of piston engines. The ASz62IR is a nine-cylinder aircraft engine in a radial configuration. In order to analyze the impact of its working process on the environment, a mathematical model has been built in the AVL Boost software. This model contains, among others, a model of the operating cycle of the cylinders, based on the change of combustion-chamber volume resulting from the reciprocating movement of a piston. The simplifying assumption that all pistons move identically was made. The changes in cylinder volume during an operating cycle were specified; these changes are essential for determining the energy balance of a cylinder in an internal combustion engine, which is fundamental to a model of the operating cycle. The calculations of the cylinder thermodynamic state were based on the first law of thermodynamics. The change of mass in the cylinder was calculated from the sum of the inflowing and outflowing masses, while the energy balance accounted for the cylinder internal energy, heat released from the fuel, heat losses, cylinder pressure and volume, blowdown enthalpy, evaporation heat, etc. The model assumed that the amount of heat released in the combustion process is determined by the pace of combustion, described using the Vibe model.
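The two submodels mentioned above, cylinder volume as a function of crank angle and the Vibe heat-release curve, can be sketched as follows. The geometry values used in the test are illustrative placeholders, not the actual ASz62IR dimensions, and the Vibe shape parameters a and m are commonly quoted defaults rather than values calibrated in this work.

```python
import math

def cylinder_volume(theta_deg, bore, stroke, conrod, compression_ratio):
    """Instantaneous cylinder volume [m^3] from slider-crank kinematics.

    theta_deg is the crank angle measured from top dead centre (TDC).
    """
    crank_radius = stroke / 2.0
    swept = math.pi / 4.0 * bore ** 2 * stroke        # displaced (swept) volume
    clearance = swept / (compression_ratio - 1.0)     # volume remaining at TDC
    th = math.radians(theta_deg)
    # piston travel from TDC for a centred slider-crank mechanism
    s = (crank_radius * (1.0 - math.cos(th))
         + conrod - math.sqrt(conrod ** 2 - (crank_radius * math.sin(th)) ** 2))
    return clearance + math.pi / 4.0 * bore ** 2 * s

def vibe_burn_fraction(theta_deg, start_deg, duration_deg, a=6.9, m=2.0):
    """Vibe (Wiebe) mass fraction burned; a and m are shape parameters."""
    x = (theta_deg - start_deg) / duration_deg
    x = min(max(x, 0.0), 1.0)                         # clamp to the burn window
    return 1.0 - math.exp(-a * x ** (m + 1))
```

At TDC the function returns the clearance volume and at bottom dead centre the clearance plus swept volume, which is the consistency check any operating-cycle model relies on.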
For gas exchange, it was also important to consider heat transfer in the inlet and outlet channels, where values are much higher than for flow in a straight pipe. This results from the high heat-exchange and temperature coefficients near the valves and valve seats; a modified Zapf model of heat exchange was used. To use the model with flight scenarios, the impact of flight altitude on engine performance has been analyzed. It was assumed that the pressure and temperature at the inlet and outlet correspond to the values given by the International Standard Atmosphere (ISA) model. Combining this operating-cycle model with the other submodels of the ASz62IR engine, a full analysis of engine performance under ISA conditions can be made. This work has been financed by the Polish National Centre for Research and Development under the INNOLOT programme.
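For reference, the ISA inlet conditions assumed in the altitude study can be reproduced with the standard troposphere relations, valid up to 11 km:

```python
def isa_troposphere(altitude_m):
    """ISA temperature [K] and pressure [Pa] for altitudes up to 11 km."""
    T0, p0 = 288.15, 101325.0              # sea-level standard temperature and pressure
    L, g, R = 0.0065, 9.80665, 287.053     # lapse rate [K/m], gravity, air gas constant
    T = T0 - L * altitude_m
    p = p0 * (T / T0) ** (g / (L * R))     # hydrostatic balance with the ideal gas law
    return T, p
```

At 11,000 m this gives roughly 216.65 K and 22.6 kPa, matching the standard tropopause values.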

Keywords: aviation propulsion, AVL Boost, engine model, operation cycle, aircraft engine

Procedia PDF Downloads 292
223 Performance Analysis of Double Gate FinFET at Sub-10 nm Node

Authors: Suruchi Saini, Hitender Kumar Tyagi

Abstract:

With the rapid progress of the nanotechnology industry, compact semiconductor devices that perform well at various technology nodes are becoming increasingly important. As devices are scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry, and FinFET is one of the most promising. The double-gate 2D Fin field-effect transistor in particular suppresses short-channel effects (SCE) and functions well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node, and the performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, the device performance is analyzed for different structure parameters. The Id-Vg curve is a robust tool of central importance for transistor modeling, circuit design, performance optimization, and quality control in electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance, and the impact of different channel widths and source and drain lengths on the Id-Vg curve and transconductance is examined. Device performance was affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes.
For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor, so it is crucial to minimize it to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, while source and drain length have no significant effect on the ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and of 2380 S/m for a source and drain extension length of 5 nm.
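The extraction of transconductance and the on-off ratio from an Id-Vg sweep can be sketched as below. The synthetic current data here merely stand in for the simulator output; they are not the device's actual characteristics.

```python
import numpy as np

def extract_metrics(vg, drain_current):
    """Transconductance g_m = dId/dVg and on/off current ratio from an Id-Vg sweep."""
    gm = np.gradient(drain_current, vg)               # numerical derivative w.r.t. Vg
    on_off = drain_current.max() / drain_current.min()
    return gm, on_off

# Synthetic Id-Vg data: an exponential subthreshold rise (placeholder, not MuGFET output)
vg = np.linspace(0.0, 1.0, 101)
i_d = 1e-12 * np.exp(vg / 0.06)
gm, on_off = extract_metrics(vg, i_d)
```

Running the same extraction on sweeps for each fin width and extension length would reproduce the kind of comparison reported in the abstract.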

Keywords: current on-off ratio, FinFET, short-channel effects, transconductance

Procedia PDF Downloads 61
222 Lightweight Sheet Molding Compound Composites by Coating Glass Fiber with Cellulose Nanocrystals

Authors: Amir Asadi, Karim Habib, Robert J. Moon, Kyriaki Kalaitzidou

Abstract:

There has been considerable interest in cellulose nanomaterials (CN) as reinforcement for polymers and polymer composites due to their high specific modulus and strength, low density and toxicity, and accessible hydroxyl side groups that can be readily chemically modified. The focus of this study is making lightweight composites for better fuel efficiency and lower CO2 emissions in the auto industry, with no compromise on mechanical performance, using a scalable technique that can be easily integrated into sheet molding compound (SMC) manufacturing lines. Lightweighting will be achieved by replacing part of the heavier components, i.e., glass fibers (GF), with a small amount of cellulose nanocrystals (CNC) in short GF/epoxy composites made using SMC. CNC will be introduced as a coating on the GF rovings prior to their use in the SMC line. The coating method is similar to the fiber sizing technique commonly used, so it can be easily scaled and integrated into industrial SMC lines. This is an alternative to most existing approaches, which disperse CN in the polymer matrix and in which nanomaterial agglomeration limits scale-up to industrial production. We have demonstrated that incorporating CNC as a coating on the GF surface by immersing the GF in CNC aqueous suspensions, a simple and scalable technique, increases the interfacial shear strength (IFSS) by ~69% compared to composites produced with uncoated GF, suggesting enhanced stress transfer across the GF/matrix interface. As a result of this IFSS enhancement, incorporating 0.17 wt% CNC in the composite increases elastic modulus and tensile strength by ~10% each, and flexural modulus and strength by 40% and 43%, respectively. We have also determined that dispersing 1.4 and 2 wt% CNC in the epoxy matrix of short GF/epoxy SMC composites by sonication allows removing 10 wt% GF with no penalty on tensile and flexural properties, leading to 7.5% lighter composites.
Although sonication is a scalable technique, it is not as simple and inexpensive as coating the GF by passing them through an aqueous suspension of CNC. In this study, the above findings are integrated in order to 1) investigate the effect of CNC content on mechanical properties by passing the GF rovings through CNC aqueous suspensions of various concentrations (0-5%) and 2) determine the optimum ratio of added CNC to removed GF that achieves the maximum possible weight reduction with no penalty on the mechanical performance of the SMC composites. The results of this study are of industrial relevance, providing a path toward producing high-volume lightweight and mechanically enhanced SMC composites using cellulose nanomaterials.

Keywords: cellulose nanocrystals, light weight polymer-matrix composites, mechanical properties, sheet molding compound (SMC)

Procedia PDF Downloads 225
221 Solar Cell Packed and Insulator Fused Panels for Efficient Cooling in CubeSats and Satellites

Authors: Anand K. Vinu, Vaishnav Vimal, Sasi Gopalan

Abstract:

All spacecraft components have a range of allowable temperatures that must be maintained to meet survival and operational requirements during all mission phases. Due to heat absorption, transfer, and emission on one side, the satellite surface presents an asymmetric temperature distribution, causing a change in momentum that can manifest in spinning and non-spinning satellites in different ways. If not corrected, this can cause orbital decay, which interferes with the satellite's primary objective. The thermal analysis of any satellite requires data from the power budget for each of the components used, because each component has different power requirements and is used at specific times in an orbit. Three different cases are run: the worst operational hot case, the worst non-operational cold case, and the operational cold case. Sunlight is a major source of heating for the satellite, and the way it affects the spacecraft depends on the distance from the Sun. Any part of a spacecraft or satellite facing the Sun will absorb heat (a net gain), and any part facing away will radiate heat (a net loss). We can use a state-of-the-art foldable hybrid insulator/radiator panel: when a panel is opened, that side acts as a radiator for dissipating heat. Here the insulator, in our case aerogel, is sandwiched between solar cells and radiator fins (solar cells outside and radiator fins inside). Each insulated side panel can be opened and closed using actuators depending on the telemetry data of the CubeSat. The opening and closing of the panels are governed by code designed for this particular application, in which the computer calculates where the Sun is relative to the satellite. According to the data obtained from the sensors, the computer decides which panel to open and by how many degrees.
For example, if a panel opens 180 degrees, its solar cells will directly face the Sun, in turn increasing the current generated by that panel. Another example is when a corner of the CubeSat faces the Sun, or when more than one side has a considerable amount of sunlight incident on it; the code then determines the optimum opening angle for each panel and adjusts accordingly. Another option is passive cooling. It is the most suitable approach for a CubeSat because of its limited power budget, low mass requirements, and less complex design; it also has advantages in terms of reliability and cost. One passive means is to make the whole chassis act as a heat sink. For this, we can build the chassis out of heat pipes and connect the heat source to it with a thermal strap that transfers the heat into the chassis.
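The abstract does not give the control law itself, so the angle decision can only be sketched hypothetically. The version below assumes the Sun vector is already available in the body frame from the attitude sensors and that each panel has a single hinge with a known deployed normal:

```python
import math

def panel_open_angle(sun_vec, panel_normal, max_open_deg=180.0):
    """Hypothetical opening-angle command for one hinged panel.

    Returns the hinge angle [deg] that would rotate the panel's solar-cell
    face toward the Sun, clamped to the mechanical limit of the hinge.
    """
    dot = sum(s * n for s, n in zip(sun_vec, panel_normal))
    mag = (math.sqrt(sum(s * s for s in sun_vec))
           * math.sqrt(sum(n * n for n in panel_normal)))
    # angle between the current panel normal and the Sun direction,
    # clamping the cosine against floating-point drift outside [-1, 1]
    off_pointing = math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
    return min(off_pointing, max_open_deg)
```

A panel already pointing at the Sun gets a zero command, while an anti-Sun panel is driven to the full 180-degree deployment described in the text.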

Keywords: passive cooling, CubeSat, efficiency, satellite, stationary satellite

Procedia PDF Downloads 100
220 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to loss of human life and of drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. Current approaches assume an oversimplified fluid-flow model within the well, which leads to incorrect pressure-loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties, enabling realistic kick treatment; a corresponding numerical solution method is built with an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The STAR-CCM+ software was used for the computational studies, examining the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established, with the focus of the investigations on the near-drill-bit section.
This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. In addition, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has only a minor effect on the steady wellbore temperature. As choke pressure increases, less gas exists in the annulus in the form of free gas.
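The direction of the reported pressure effects can be illustrated with a deliberately simplified hydrostatic estimate, assuming a homogeneous mixture with a uniform gas fraction along the column; the densities below are illustrative, and the actual simulations resolve the full multiphase profile rather than this shortcut.

```python
def bottom_hole_pressure(depth_m, gas_fraction, rho_mud=1200.0, rho_gas=50.0, g=9.81):
    """Hydrostatic bottom-hole pressure [Pa] of a homogeneous two-phase column.

    gas_fraction is the in-situ gas volume fraction, assumed uniform along
    the well -- a strong simplification of the real wellbore profile.
    """
    rho_mix = gas_fraction * rho_gas + (1.0 - gas_fraction) * rho_mud
    return rho_mix * g * depth_m
```

More free gas in the annulus lowers the mixture density and hence the bottom-hole pressure, which is why treating dissolved gas as free gas (i.e., ignoring solubility) shifts the pressure estimate.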

Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 118
219 Teacher Professional Development in Saudi Arabia through the Implementation of Universal Design for Learning

Authors: Majed A. Alsalem

Abstract:

Universal Design for Learning (UDL) is a common theme in education across the US and an influential model and framework that enables students in general, and particularly students who are deaf and hard of hearing (DHH), to access the general education curriculum. UDL helps teachers determine how information will be presented to students and how to keep students engaged; moreover, it helps students express their understanding and knowledge to others. UDL relies on technology to promote students' interaction with content and their communication of knowledge. This study included 120 DHH students who received daily instruction based on UDL principles; it presents the results and discusses their implications for integrating UDL into day-to-day practice as well as into the country's education policy. UDL is a Western concept that began and grew in the US and has only recently begun to transfer to other countries such as Saudi Arabia, so it will be very important for researchers, practitioners, and educators to see how UDL is implemented in a new place with a different culture. UDL is a framework built to provide multiple means of engagement, representation, and action and expression that should be part of curricula and lessons for all students. The purpose of this study is to investigate the variables associated with the implementation of UDL in Saudi Arabian schools and to identify the barriers that could prevent its implementation. The study therefore used a mixed-methods design employing both quantitative and qualitative methods, since more insight is gained by combining the two than by using a single method; combining methods with different concepts and approaches enriches the resulting data. Data were collected in two stages to ensure they come from multiple sources, mitigating validity threats and establishing trustworthiness in the findings.
The rationale and significance of this study is that it is the first known research targeting UDL in Saudi Arabia, and it deals with UDL in depth to set the path for further studies in the Middle East. In terms of content, the study considers teachers' knowledge, skills, and concerns regarding implementation. It deals with effective instructional designs that have not been presented in any conferences, workshops, or teacher preparation and professional development programs in Saudi Arabia. Specifically, Saudi Arabian schools are challenged to design inclusive schools and practices and to support all students' academic skills development. A total of 336 teachers of DHH students participated in stage one. The results of the intervention indicated significant differences in teachers' understanding and level of concern before and after the training sessions. Teachers expressed interest in knowing more about UDL and adopting it into their practices; they reported that UDL has benefits that will enhance their performance in supporting student learning.

Keywords: deaf and hard of hearing, professional development, Saudi Arabia, universal design for learning

Procedia PDF Downloads 432
218 New Two-Dimensional Hardy Type Inequalities on Time Scales via Steklov Operator

Authors: Wedad Albalawi

Abstract:

Mathematical inequalities have been at the core of mathematical study and are used in almost all branches of mathematics, as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya formed the first significant systematic treatment of the subject; their work presented fundamental ideas, results, and techniques and has had much influence on research in various branches of analysis. Since 1934, numerous inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated through operators: in 1989, weighted Hardy inequalities were obtained for integration operators, followed by weighted estimates for Steklov operators that were used in the solution of the Cauchy problem for the wave equation. These were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved by means of differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators; these inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Versions involving the Copson and Hardy inequalities on time scales have appeared, yielding new special cases of them. A time scale is an arbitrary nonempty closed subset of the real numbers, and dynamic inequalities on time scales have received a lot of attention in the literature, becoming a major field in pure and applied mathematics.
There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on the Hardy and Copson inequalities, using the Steklov operator on time scales in double integrals to obtain special cases of the time-scale Hardy and Copson inequalities in higher dimensions. The advantage of this study is that it uses the one-dimensional classical Hardy inequality to obtain higher-dimensional time-scale versions that can be applied in the solution of the Cauchy problem for the wave equation. In addition, the obtained inequalities have various applications involving discontinuous domains, such as bug populations, phytoremediation of metals, wound healing, and maximization problems. The proofs are carried out by introducing restrictions on the operator in several cases, using time-scales calculus, which makes it possible to unify and extend many problems from the theories of differential and difference equations. The proofs also employ the chain rule, properties of multiple integrals on time scales, Fubini's theorem, and Hölder's inequality.
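For reference, the classical one-dimensional Hardy inequality that the study builds on states that, for $p > 1$ and a non-negative measurable function $f$ on $(0,\infty)$,

```latex
\int_{0}^{\infty}\left(\frac{1}{x}\int_{0}^{x} f(t)\,dt\right)^{p} dx
\;\le\; \left(\frac{p}{p-1}\right)^{p}\int_{0}^{\infty} f^{p}(x)\,dx,
```

where the constant $\left(\frac{p}{p-1}\right)^{p}$ is best possible. The time-scale versions replace the Lebesgue integrals with delta integrals over an arbitrary nonempty closed subset of the reals, so that the continuous and discrete (series) forms of the inequality become special cases of one statement.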

Keywords: time scales, inequality of hardy, inequality of copson, steklov operator

Procedia PDF Downloads 95
217 A Study on the Acquisition of Chinese Classifiers by Vietnamese Learners

Authors: Quoc Hung Le Pham

Abstract:

In the field of language study, the classifier is an interesting research feature. Among the world's languages, some have a classifier system and some do not. Mandarin Chinese and Vietnamese both have rich classifier systems; however, because of differences in language system, cognition, and culture, the syntactic structures of their classifiers also differ. Mandarin Chinese classifiers must collocate with nouns or verbs; as a lexical category they are unlike nouns or verbs, which belong to open classes. Some scholars, though, hold that Mandarin Chinese measure words are similar to those of English and other Indo-European languages: dependent on structure and word formation (as suffixes), and thus a closed class. Chinese, Vietnamese, Thai, and other Asian languages belong to the second type of classifier language, in which a classifier must in most cases be present, appearing together with deictic, anaphoric, or quantity elements and not separated from the noun it modifies; such languages are also known as numeral classifier languages. The main syntactic structures of Chinese classifiers are as follows: 'quantity+measure+noun', 'pronoun+measure+noun', 'pronoun+quantity+measure+noun', 'prefix+quantity+measure+noun', 'quantity+adjective+measure+noun', 'quantity (whole number above 10) + duo (多) + measure + noun', 'quantity (around 10) + measure + duo (多) + noun'. The main syntactic structures of Vietnamese classifiers are: 'quantity+measure+noun', 'measure+noun+pronoun', 'quantity+measure+noun+pronoun', 'measure+noun+prefix+quantity', 'quantity+measure+noun+adjective', 'duo (多)+quantity+measure+noun', 'quantity+measure+adjective+pronoun (the quantity word cannot be 1)', 'measure+adjective+pronoun', 'measure+pronoun'. In daily life, classifiers are commonly used, and if Chinese learners fail to use this category correctly, their verbal communication may be negatively affected.
The richness of the Chinese classifier system contributes to the difficulty foreign learners have in studying it, and this is evident in the interlanguage of Vietnamese learners. As mentioned above, Vietnamese also has a rich system of classifiers; the basic structural orders of the two languages are similar, but differences remain. These similarities and dissimilarities between the Chinese and Vietnamese classifier systems contribute significantly to the characteristic errors made by Vietnamese students while acquiring Chinese, which are distinct from the errors made by students from other language backgrounds. This article takes a comparative linguistic perspective on commonly used Chinese and Vietnamese classifiers in two respects: semantics and structural form. The comparative study aims to identify the negative transfer from the mother tongue that Vietnamese students may face while learning Chinese classifiers and, through the analysis of a classifier questionnaire, to find out the causes and patterns of the errors they make. The preliminary analysis shows that Vietnamese students learning Chinese classifiers made errors such as: overuse of the classifier 'ge' (个); misuse of other classifiers, e.g., '*yi zhang ri ji' (yi pian ri ji), '*yi zuo fang zi' (yi jian fang zi), '*si zhang jin pai' (si mei jin pai); and confusion among the near-synonymous classifiers 'dui, shuang, fu, tao' (对、双、副、套) and 'ke, li' (颗、粒).

Keywords: acquisition, classifiers, negative transfer, Vietnamese learners

Procedia PDF Downloads 452
216 Applying the View of Cognitive Linguistics to Teaching and Learning English at UFLS - UDN

Authors: Tran Thi Thuy Oanh, Nguyen Ngoc Bao Tran

Abstract:

In the view of Cognitive Linguistics (CL), human beings draw on their knowledge and experience of things and events when expressing concepts, especially in daily life. The human conceptual system is considered to be fundamentally metaphorical in nature, and the way we think, what we experience, and what we do every day is very much a matter of language. In fact, language is an integral factor of cognition, and CL is a family of broadly compatible theoretical approaches sharing this fundamental assumption. The relationship between language and thought has, of course, been addressed by many scholars, but CL strongly emphasizes specific features of this relation. Through experience we gain knowledge of life; familiar, concrete domains serve as source domains, and we make use of all aspects of such a domain in metaphorically understanding abstract targets. This paper reports on applying this theory in pragmatics lessons for English-major students at the University of Foreign Language Studies - The University of Da Nang, Vietnam. We conducted the study with two groups of third-year students taking English pragmatics lessons. Data from the two classes were collected to analyze linguistic perspectives under CL and under traditional concepts. Descriptive, analytic, synthetic, comparative, and contrastive methods were employed to analyze data from the 50 students undergoing the lessons. One group was taught how to transfer the meanings of everyday expressions through the CL view, while the other used the traditional view. The research indicated that both approaches had a significant influence on students' English translating and interpreting abilities; however, the traditional approach had little effect on students' understanding, whereas the CL view had a considerable impact. The study compares the CL and traditional teaching approaches to identify the benefits and challenges of incorporating CL into the curriculum.
It seeks to extend CL concepts by analyzing metaphorical expressions in daily conversations, offering insights into how CL can enhance language learning. The findings shed light on the effectiveness of applying CL in teaching and learning English pragmatics. They highlight the advantages of using metaphorical expressions from daily life to facilitate understanding, and explore how CL can enhance cognitive processes in language learning in general, and in teaching English pragmatics to third-year students at the UFLS - UDN, Vietnam in particular. The study contributes to the theoretical understanding of the relationship between language, cognition, and learning. By emphasizing the metaphorical nature of human conceptual systems, it offers insights into how CL can enrich language teaching practices and enhance students' comprehension of abstract concepts.

Keywords: cognitive linguistics, lakoff and johnson, pragmatics, UFLS

Procedia PDF Downloads 36
215 The Value of Computerized Corpora in EFL Textbook Design: The Case of Modal Verbs

Authors: Lexi Li

Abstract:

This study aims to contribute to the field of how computer technology can be exploited to enhance EFL textbook design. Specifically, it demonstrates how computerized native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks. The linguistic focus is will, would, can, could, may, might, shall, should, and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL), from which all essays under the "secondary school" section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions, to examine whether the textbooks reflect authentic English use. The second part compared the learner corpus with the textbook corpus in terms of the same three aspects of use, in order to examine the degree of influence of the textbooks on learners' use of modal verbs. Moreover, the learner corpus was analyzed for misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs, to uncover potential difficulties confronting learners. The results indicate discrepancies between the textbook presentation of modal verbs and authentic modal use in natural discourse in terms of frequency distributions, semantic functions, and co-occurring structures.
Furthermore, there are consistent patterns of use between the learner corpus and the textbook corpus with respect to the three above-mentioned aspects, except for could, will, and must, partially confirming the correlation between frequency effects and L2 grammar acquisition. Further analysis reveals that the exceptions are caused by both positive and negative L1 transfer, indicating that frequency effects can be intercepted by L1 interference. In addition, error analysis revealed that could, would, should, and must are the most difficult for Chinese learners due to both inter-linguistic and intra-linguistic interference. The discrepancies between the textbook corpus and the native corpus point to a need to adjust the presentation of modal verbs in the textbooks in terms of frequencies, different meanings, and verb-phrase structures. Along with adjusting modal verb treatment to reflect authentic use, it is important for textbook writers to take into consideration L1 interference as well as learners’ difficulties in their use of modal verbs. The present study is a methodological showcase of combining native and learner corpora to enhance the authenticity of EFL textbook language and its appropriateness for learners.

Keywords: EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 124
214 Retrofitting Insulation to Historic Masonry Buildings: Improving Thermal Performance and Maintaining Moisture Movement to Minimize Condensation Risk

Authors: Moses Jenkins

Abstract:

Much of the focus on improving energy efficiency in buildings falls on raising standards within new-build dwellings. However, as a significant proportion of the building stock across Europe is of historic or traditional construction, there is also a pressing need to improve the thermal performance of structures of this sort. On average, around twenty percent of buildings across Europe are of historic masonry construction. In order to meet carbon reduction targets, these buildings will need to be retrofitted with insulation to improve their thermal performance. At the same time, there is also a need to balance this with maintaining the ability of historic masonry construction to allow moisture movement through the building fabric. This moisture transfer, often referred to as 'breathable construction', is critical to the success, or otherwise, of retrofit projects. The significance of this paper is to demonstrate that substantial thermal improvements can be made to historic buildings whilst avoiding damage to building fabric through surface or interstitial condensation. The paper will analyze the results of a wide range of retrofit measures installed in twenty buildings as part of Historic Environment Scotland's technical research program. This program has been active for fourteen years and has seen interventions across a wide range of building types, using over thirty different methods and materials to improve the thermal performance of historic buildings. The first part of the paper will present the range of interventions which have been made. This includes insulating mass masonry walls both internally and externally, warm and cold roof insulation, and improvements to floors. The second part of the paper will present the results of monitoring work which has taken place in these buildings after retrofit. 
This will be in terms of both thermal improvement, expressed as a U-value as defined in BS EN ISO 7345:1987, and, crucially, moisture monitoring both on the surface of masonry walls following retrofit and within the masonry itself. The aim of this moisture monitoring is to establish whether there are any problems with interstitial condensation. This monitoring utilizes Interstitial Hygrothermal Gradient Monitoring (IHGM) and similar methods to establish relative humidity on the surface of and within the masonry. The results of the testing are clear and significant for retrofit projects across Europe. Where a building is of historic construction, the use of wall, roof, and floor insulation materials which are permeable to moisture vapor provides significant thermal improvements (achieving a U-value as low as 0.2 W/m²K) whilst avoiding problems of both surface and interstitial condensation. As the evidence which will be presented in the paper comes from monitoring work in buildings rather than theoretical modeling, there are many important lessons which can be learned and which can inform retrofit projects to historic buildings throughout Europe.
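The U-value figure quoted above can be illustrated with the standard layered-wall calculation: the U-value is the reciprocal of the summed thermal resistances of the surfaces and layers. The layer thicknesses and conductivities below are illustrative assumptions, not the study's measured data.

```python
# Illustrative U-value estimate for a solid masonry wall retrofitted with
# a vapour-permeable insulation board. All layer values are assumptions.
R_SI, R_SE = 0.13, 0.04  # standard internal/external surface resistances, m²K/W

layers = [
    # (thickness in m, thermal conductivity in W/mK)
    (0.600, 1.2),    # mass masonry wall
    (0.100, 0.038),  # wood-fibre insulation board (hypothetical product)
    (0.015, 0.5),    # lime plaster finish
]

# Total resistance = surface resistances + sum of each layer's d / lambda.
R_total = R_SI + R_SE + sum(d / lam for d, lam in layers)
U = 1 / R_total  # U-value in W/m²K
print(round(U, 2))  # 0.3
```

Thicker or better-performing insulation layers drive the computed U-value toward the 0.2 W/m²K figure reported for the best-performing retrofits.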

Keywords: insulation, condensation, masonry, historic

Procedia PDF Downloads 172
213 Modification of Magneto-Transport Properties of Ferrimagnetic Mn₄N Thin Films by Ni Substitution and Their Magnetic Compensation

Authors: Taro Komori, Toshiki Gushi, Akihito Anzai, Taku Hirose, Kaoru Toko, Shinji Isogami, Takashi Suemasu

Abstract:

Ferrimagnetic antiperovskite Mn₄₋ₓNiₓN thin films exhibit both small saturation magnetization and rather large perpendicular magnetic anisotropy (PMA) when x is small. Both are suitable features for application to current-induced domain wall motion devices using spin transfer torque (STT). In this work, we successfully grew antiperovskite 30-nm-thick Mn₄₋ₓNiₓN epitaxial thin films on MgO(001) and STO(001) substrates by MBE in order to investigate their crystalline qualities and their magnetic and magneto-transport properties. Crystalline quality was investigated by X-ray diffraction (XRD). Magnetic properties were measured by vibrating sample magnetometer (VSM), and the anomalous Hall effect was measured with a physical properties measurement system; both measurements were performed at room temperature. The temperature dependence of magnetization was measured by a superconducting quantum interference device (SQUID) VSM. XRD patterns indicate epitaxial growth of Mn₄₋ₓNiₓN thin films on both substrates, with those on STO(001) showing higher c-axis orientation thanks to better lattice matching. According to the VSM measurements, PMA was observed in Mn₄₋ₓNiₓN on MgO(001) when x ≤ 0.25 and on STO(001) when x ≤ 0.5, and MS decreased drastically with x. For example, the MS of Mn₃.₉Ni₀.₁N on STO(001) was 47.4 emu/cm³. From the anomalous Hall resistivity (ρAH) of Mn₄₋ₓNiₓN thin films on STO(001) with the magnetic field perpendicular to the plane, we found that Mr/MS was about 1 when x ≤ 0.25, which suggests large magnetic domains in the samples and features suitable for DW motion device application. In contrast, such square curves were not observed for Mn₄₋ₓNiₓN on MgO(001), which we attribute to the difference in lattice matching. Furthermore, it is notable that although the sign of ρAH was negative when x = 0 and 0.1, it reversed to positive when x = 0.25 and 0.5. A similar reversal occurred in the temperature dependence of magnetization. 
The magnetization of Mn₄₋ₓNiₓN on STO(001) increases with decreasing temperature when x = 0 and 0.1, while it decreases when x = 0.25. We consider that these reversals were caused by a magnetic compensation occurring in Mn₄₋ₓNiₓN between x = 0.1 and 0.25. We expect the Mn atoms in the Mn₄₋ₓNiₓN crystal to have larger magnetic moments than the Ni atoms. The temperature dependence stated above can be explained if we assume that Ni atoms preferentially occupy the corner sites and that their magnetic moments have a different temperature dependence from the Mn atoms at the face-centered sites. At the compensation point, Mn₄₋ₓNiₓN is expected to show very efficient STT and ultrafast DW motion at small current density. Moreover, if angular momentum compensation is found, the efficiency will be optimized further. In order to prove the magnetic compensation, X-ray magnetic circular dichroism measurements will be performed. Energy-dispersive X-ray spectrometry is a candidate method for analyzing the accurate composition ratio of the samples.
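The compensation mechanism invoked above, two antiparallel sublattices whose magnetizations decay differently with temperature so that the net moment crosses zero, can be illustrated with a toy two-sublattice model. The power-law forms, amplitudes, and Curie temperature below are illustrative assumptions, not fitted Mn₄₋ₓNiₓN parameters.

```python
import numpy as np

# Toy two-sublattice ferrimagnet: antiparallel sublattices with different
# temperature dependences (power-law forms are illustrative assumptions).
T_C = 740.0  # assumed Curie temperature, K

def m_sub(T, m0, beta):
    """Sublattice magnetization, vanishing above T_C."""
    t = np.clip(1 - T / T_C, 0, None)
    return m0 * t ** beta

T = np.linspace(1, 700, 2000)
# Net magnetization = difference of the two antiparallel sublattices.
m_net = m_sub(T, 100.0, 0.35) - m_sub(T, 90.0, 0.20)

# Compensation temperature: where the net magnetization changes sign.
crossings = np.where(np.diff(np.sign(m_net)) != 0)[0]
T_comp = T[crossings[0]] if crossings.size else None
print(T_comp)
```

Below T_comp one sublattice dominates; above it the other does, which flips the sign of magnetization-linked quantities such as the anomalous Hall resistivity, as observed in the experiment.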

Keywords: compensation, ferrimagnetism, Mn₄N, PMA

Procedia PDF Downloads 134
212 The Politics of Foreign Direct Investment for Socio-Economic Development in Nigeria: An Assessment of the Fourth Republic Strategies (1999 - 2014)

Authors: Muritala Babatunde Hassan

Abstract:

In the contemporary global political economy, foreign direct investment (FDI) is gaining currency on a daily basis. Notably, the end of the Cold War brought about the dominance of neoliberal ideology with its mantra of a private-sector-led economy. As such, nation-states now see FDI attraction as an important element in their approach to national development. Governments and policy makers are preoccupied with unraveling the best strategies not only to attract more FDI but also to attain the desired socio-economic development status. In Nigeria, the perceived development potential of FDI has brought about an aggressive hunt for foreign investors, most especially since the transition to civilian rule in May 1999. A series of liberal and market-oriented strategies has been adopted not only to attract foreign investors but largely to stimulate private sector participation in the economy. It is on this premise that this study interrogates the politics of FDI attraction for domestic development in Nigeria between 1999 and 2014, with the ultimate aim of examining the nexus between regime type and the ability of a state to attract and benefit from FDI. Building its analysis within the framework of institutional utilitarianism, the study posits that the essential FDI strategies for achieving the greatest happiness for the greatest number of Nigerians are political, not economic. Both content analysis and descriptive survey methodology were employed in carrying out the study. Content analysis involved a desk review of literature that culminated in the development of the study’s conceptual and theoretical framework of analysis. The study finds no significant relationship between the transition to democracy and FDI inflows in Nigeria, as most of the investment attracted during the period of the study was market- and resource-seeking, as was the case during the military regime, thereby contributing minimally to the socio-economic development of the country. 
It is also found that the country placed much emphasis on liberalization and incentives for FDI attraction while neglecting improvement of the domestic investment environment. Consequently, the poor state of infrastructure, weak institutional capability, and insecurity were identified as the major factors seriously hindering Nigeria's success in exploiting FDI for domestic development. Given the currency of FDI as a vector of economic globalization and the fact that Nigeria is following a private-sector-led approach to development, it is recommended that emphasis be placed on measures aimed at improving infrastructural facilities, building a solid institutional framework, enhancing skills and technology transfer, and coordinating FDI promotion activities across different agencies and levels of government.

Keywords: foreign capital, politics, socio-economic development, FDI attraction strategies

Procedia PDF Downloads 164
211 An Appraisal of Mitigation and Adaptation Measures under Paris Agreement 2015: Developing Nations' Pie

Authors: Olubisi Friday Oluduro

Abstract:

The Paris Agreement 2015, the result of negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) after the Kyoto Protocol's expiration, sets a long-term goal of limiting the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels, and of pursuing efforts to limit this increase to 1.5 degrees Celsius. An advancement on the erstwhile Kyoto Protocol, which set greenhouse gas (GHG) emission-reduction commitments for only a limited number of Parties, it includes the goals of increasing the ability to adapt to the adverse impacts of climate change and of making finance flows consistent with a pathway towards low GHG emissions. For it to achieve these goals, the Agreement requires all Parties to undertake efforts towards reaching global peaking of GHG emissions as soon as possible and towards achieving a balance between anthropogenic emissions by sources and removals by sinks in the second half of the twenty-first century. In addition to climate change mitigation, the Agreement aims at enhancing adaptive capacity, strengthening resilience, and reducing vulnerability to climate change in different parts of the world. It acknowledges the importance of addressing loss and damage associated with the adverse effects of climate change. The Agreement also contains comprehensive provisions on support to be provided to developing countries, including finance, technology transfer, and capacity building. To ensure that such support and actions are transparent, the Agreement contains a number of reporting provisions, requiring Parties to choose the efforts and measures that suit them best (Nationally Determined Contributions) and providing a mechanism for assessing progress and increasing global ambition over time through a regular global stocktake. 
Despite the global reach of the Agreement, it has been fraught with manifold limitations threatening its very capability to produce any meaningful result. Some of these limitations, such as the non-participation of the United States and the non-payment of funds into the various coffers established for strategic purposes, were the very causes of the failure of its predecessor, the Kyoto Protocol. They have left the developing countries, which are more vulnerable than the developed countries actually responsible for the climate change scourge, threatened even more. The paper seeks to examine the mitigation and adaptation measures under the Paris Agreement 2015, appraise the situation since the Agreement was concluded, ascertain whether the developing countries have been better or worse off since its conclusion, and examine why and how, while projecting a way forward in the present circumstances. It concludes with recommendations towards ameliorating the situation.

Keywords: mitigation, adaptation, climate change, Paris agreement 2015, framework

Procedia PDF Downloads 157
210 The Effect of Lead(II) Lone Electron Pair and Non-Covalent Interactions on the Supramolecular Assembly and Fluorescence Properties of Pb(II)-Pyrrole-2-Carboxylato Polymer

Authors: M. Kowalik, J. Masternak, K. Kazimierczuk, O. V. Khavryuchenko, B. Kupcewicz, B. Barszcz

Abstract:

Recently, the growing interest of chemists in metal-organic coordination polymers (MOCPs) has derived primarily from their intriguing structures and potential applications in catalysis, gas storage, molecular sensing, ion exchange, nonlinear optics, luminescence, etc. Currently, we are devoting considerable effort to finding the proper method of synthesizing new coordination polymers containing S- or N-heteroaromatic carboxylates as linkers and characterizing the obtained Pb(II) compounds according to their structural diversity, luminescence, and thermal properties. The choice of Pb(II) as the central ion of MOCPs was motivated by several reasons mentioned in the literature: i) a large ionic radius allowing for a wide range of coordination numbers, ii) the stereoactivity of the 6s² lone electron pair leading to a hemidirected or holodirected geometry, iii) a flexible coordination environment, and iv) the possibility of forming secondary bonds and unusual non-covalent interactions, such as classic hydrogen bonds and π···π stacking interactions, as well as nonconventional hydrogen bonds and rarely reported tetrel bonds, Pb(lone pair)···π interactions, C–H···Pb agostic-type interactions or hydrogen bonds, and chelate ring stacking interactions. Moreover, the construction of coordination polymers requires the selection of proper ligands acting as linkers, because we are looking for materials exhibiting different network topologies and fluorescence properties, which point to potential applications. The reaction of Pb(NO₃)₂ with 1H-pyrrole-2-carboxylic acid (2prCOOH) leads to the formation of a new tetranuclear Pb(II) polymer, [Pb₄(2prCOO)₈(H₂O)]ₙ, which has been characterized by CHN, FT-IR, TG, PL, and single-crystal X-ray diffraction methods. In view of the primary Pb–O bonds, Pb1 and Pb3 show hemidirected pentagonal pyramidal geometries, while Pb2 and Pb4 display hemidirected octahedral geometries. 
The topology of the strongest Pb–O bonds was determined as the (4·8²) fes topology. Taking the secondary Pb–O bonds into account, the coordination numbers of the Pb centres increased: Pb1 exhibited a hemidirected monocapped pentagonal pyramidal geometry, Pb2 and Pb4 exhibited a holodirected tricapped trigonal prismatic geometry, and Pb3 exhibited a holodirected bicapped trigonal prismatic geometry. Moreover, the Pb(II) lone pair stereoactivity was confirmed by DFT calculations. The 2D structure was expanded into 3D by non-covalent O/C–H···π and Pb···π interactions, as confirmed by Hirshfeld surface analysis. The above-mentioned interactions improve the rigidity of the structure and facilitate charge and energy transfer between metal centres, making the polymer a promising luminescent compound.

Keywords: coordination polymers, fluorescence properties, lead(II), lone electron pair stereoactivity, non-covalent interactions

Procedia PDF Downloads 145
209 Ultrasonic Irradiation Synthesis of High-Performance Pd@Copper Nanowires/MultiWalled Carbon Nanotubes-Chitosan Electrocatalyst by Galvanic Replacement toward Ethanol Oxidation in Alkaline Media

Authors: Majid Farsadrouh Rashti, Amir Shafiee Kisomi, Parisa Jahani

Abstract:

Direct ethanol fuel cells (DEFCs) are contemplated as a promising energy source because, in addition to powering portable electronic devices, they can also be used in electric vehicles. The synthesis of bimetallic nanostructures is attracting extensive attention due to their novel optical, catalytic, and electronic characteristics, which contrast with those of their monometallic counterparts. Galvanic replacement (sometimes referred to as cementation or immersion plating) is an uncomplicated and effective technique for making nanostructures (such as core-shells) of different metals and semiconductors, with application in DEFCs. Unlike electrodeposition, galvanic replacement does not need any external power supply; it also differs from electroless deposition in that no reducing agent is required. In this paper, a fast method is proposed for synthesizing palladium (Pd) wire nanostructures with a large surface area through a galvanic replacement reaction utilizing copper nanowires (CuNWs) as a template, assisted by ultrasound at room temperature. To evaluate the morphology and composition of Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan, scanning electron microscopy and energy-dispersive X-ray spectroscopy were applied. The phase structure of the electrocatalysts was determined by room-temperature X-ray powder diffraction (XRD) using an X-ray diffractometer. Various electrochemical techniques, including chronoamperometry and cyclic voltammetry, were utilized to assess the electrocatalytic activity and durability for ethanol electrooxidation in alkaline solution. 
The Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst demonstrated substantially enhanced performance and long-term stability for ethanol electrooxidation in basic solution in comparison to commercial Pd/C, demonstrating its potential as an efficient catalyst towards ethanol oxidation. Noticeably, the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan presented excellent catalytic activity, with a peak current density of 320.73 mA/cm², about 9.5 times that of Pd/C (34.21 mA/cm²). Additionally, thermodynamic and kinetic evaluations revealed that the Pd@Copper nanowires/MultiWalled Carbon nanotubes-Chitosan catalyst has a lower activation energy than Pd/C, which corresponds to a lower energy barrier and an excellent charge transfer rate towards ethanol oxidation.
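The activation-energy comparison above is commonly made via an Arrhenius analysis: the slope of ln(current density) against 1/T gives the apparent activation energy. The sketch below illustrates that extraction with made-up temperature/current readings, not the paper's measured data.

```python
import math

# Illustrative Arrhenius extraction of apparent activation energy from
# peak current densities at several temperatures (hypothetical readings).
R = 8.314  # gas constant, J/(mol K)

data = [(298, 120.0), (308, 165.0), (318, 222.0), (328, 293.0)]
# (temperature in K, peak current density in mA/cm²)

# Least-squares fit of ln(j) vs 1/T; the slope equals -Ea/R.
xs = [1 / T for T, _ in data]
ys = [math.log(j) for _, j in data]
n = len(data)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
Ea_kJ = -slope * R / 1000  # apparent activation energy, kJ/mol
print(round(Ea_kJ, 1))
```

A catalyst yielding a shallower ln(j) vs 1/T slope has the lower apparent activation energy, which is the basis of the comparison with Pd/C.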

Keywords: core-shell structure, electrocatalyst, ethanol oxidation, galvanic replacement reaction

Procedia PDF Downloads 147
208 Investigation of a Single Feedstock Particle during Pyrolysis in Fluidized Bed Reactors via X-Ray Imaging Technique

Authors: Stefano Iannello, Massimiliano Materazzi

Abstract:

Fluidized bed reactor technologies are one of the most valuable pathways for the thermochemical conversion of biogenic fuels due to their good operating flexibility. Nevertheless, there are still issues related to the mixing and separation of heterogeneous phases during operation with highly volatile feedstocks, including biomass and waste. At high temperatures, the volatile content of the feedstock is released in the form of so-called endogenous bubbles, which generally exert a “lift” effect on the particle itself by dragging it up to the bed surface. This phenomenon leads to a high release of volatile matter into the freeboard and limited mass and heat transfer with particles of the bed inventory. The aim of this work is to gain a better understanding of the behaviour of a single reacting particle in a hot fluidized bed reactor during the devolatilization stage. The analysis was undertaken at different fluidization regimes and temperatures to closely mirror the operating conditions of waste-to-energy processes. Beechwood and polypropylene particles were used to resemble the biomass and plastic fractions present in waste materials, respectively. A non-invasive X-ray technique was coupled to particle tracking algorithms to characterize the motion of a single feedstock particle during devolatilization with high resolution. A high-energy X-ray beam passes through the vessel, where absorption occurs depending on the distribution and amount of solids and fluids along the beam path. A high-speed video camera is synchronised to the beam and provides frame-by-frame imaging of the flow patterns of fluids and solids within the fluidized bed at up to 72 fps (frames per second). A comprehensive mathematical model has been developed to validate the experimental results. Beechwood and polypropylene particles showed very different dynamic behaviour during the pyrolysis stage. 
When the feedstock is fed from the bottom, the plastic material tends to spend more time within the bed than the biomass. This behaviour can be attributed to the endogenous bubbles, whose drag effect is more pronounced during the devolatilization of biomass, resulting in a lower residence time of the particle within the bed. At the typical operating temperatures of thermochemical conversions, the synthetic polymer softens and melts, and the bed particles attach to its outer surface, generating a wet plastic-sand agglomerate. Consequently, this additional layer of sand may hinder the rapid evolution of volatiles in the form of endogenous bubbles, resulting in only a weak drag effect acting on the feedstock itself. Information about the mixing and segregation of solid feedstock is of prime importance for the design and development of more efficient industrial-scale operations.
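The frame-by-frame tracking idea can be sketched as locating the strongly X-ray-absorbing (dark) feedstock particle in each frame by thresholding and taking the centroid of the dark pixels. The synthetic frames below stand in for real radiographs; the threshold and geometry are arbitrary illustrative choices.

```python
import numpy as np

def particle_centroid(frame, threshold=0.5):
    """Centroid (row, col) of pixels darker than threshold, or None if absent."""
    rows, cols = np.nonzero(frame < threshold)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Synthetic 3-frame sequence: a dark 2x2 "particle" rising one row per frame,
# standing in for a feedstock particle lifted by an endogenous bubble.
frames = []
for step in range(3):
    img = np.ones((10, 10))   # bright background (low absorption)
    r = 7 - step              # particle rises toward the bed surface
    img[r:r + 2, 4:6] = 0.1   # dark, strongly absorbing particle
    frames.append(img)

trajectory = [particle_centroid(f) for f in frames]
vertical = [pos[0] for pos in trajectory]
print(vertical)  # [7.5, 6.5, 5.5] -- decreasing row index = particle moving up
```

Differencing the centroid positions between consecutive frames (at a known frame rate, e.g. 72 fps) yields the particle's rise velocity, the quantity of interest for residence-time analysis.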

Keywords: fluidized bed, pyrolysis, waste feedstock, X-ray

Procedia PDF Downloads 172
207 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
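The KPI-tracking idea above can be made concrete with a short sketch. It computes a standard data-centre energy-efficiency metric, power usage effectiveness (PUE, an industry measure not named in the abstract), alongside a hypothetical AI-driven cost-savings figure; all numbers are made up for illustration.

```python
# Illustrative data-centre KPI computations. All figures are hypothetical.

total_facility_kwh = 1_450_000   # monthly facility energy, incl. cooling
it_equipment_kwh   = 1_000_000   # monthly IT equipment energy

# PUE: total facility energy over IT energy; 1.0 is the theoretical ideal.
pue = total_facility_kwh / it_equipment_kwh
print(round(pue, 2))  # 1.45

# Hypothetical AI-driven cost-savings KPI against a pre-optimization baseline.
baseline_cost = 190_000.0    # monthly energy cost before AI optimization
optimized_cost = 171_000.0   # monthly energy cost after AI optimization
savings_pct = 100 * (baseline_cost - optimized_cost) / baseline_cost
print(round(savings_pct, 1))  # 10.0
```

Tracking such figures over time is one way to quantify whether AI-driven adjustments to cooling, utilization, and power distribution are actually paying off.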

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 33
206 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious functions work, we must conquer the physics of Unity, which leads to duality’s algorithm, where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence. We use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers, and these different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components: only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus, and the thalamus performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). 
This creates a data payload of synchronous motion that preserves the original sensory observation: essentially, a frozen moment in time (Flat 4D). That single moment can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel, where observation time is tunneled through the synchronous process and reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation, so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than the thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.
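The claim that only unpredictable motion needs to be transferred, because the predictable part already exists in the shared framework, is analogous to delta encoding against a shared prediction. The sketch below is an illustration of that general compression idea, not an implementation of the brain model; the frame values are arbitrary.

```python
# Delta encoding against a shared prediction: when sender and receiver
# share a predicted state, only deviations from it need to be transmitted.

def encode(frame, predicted):
    """Send only the values that differ from the shared prediction."""
    return {i: v for i, (v, p) in enumerate(zip(frame, predicted)) if v != p}

def decode(delta, predicted):
    """Reconstruct the frame from the shared prediction plus the delta."""
    return [delta.get(i, p) for i, p in enumerate(predicted)]

predicted = [3, 3, 3, 3, 3, 3, 3, 3]   # motion both sides already expect
observed  = [3, 3, 7, 3, 3, 3, 9, 3]   # actual sensory frame

delta = encode(observed, predicted)
print(delta)  # {2: 7, 6: 9} -- only 2 of 8 values transmitted
assert decode(delta, predicted) == observed  # receiver recovers the frame
```

The better the shared prediction, the smaller the delta, which is the intuition behind the large transmission savings claimed in the keywords.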

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 126